Design and Implementation of Generics for the .NET Common Language Runtime
ISO 15378 (2011 edition) training courseware - 1
Important additional requirements to ISO 9001
The organization’s overall policy, intentions and approach to validation shall be documented (4.2.1.1)
primary packaging materials
Annex C (informative): Guidance on risk management for primary packaging materials
Brief introduction
This International Standard identifies Good Manufacturing Practice (GMP) principles and specifies requirements for a quality management system applicable to primary packaging materials for medicinal products. The realization of GMP principles in production and control of primary packaging materials within organizations is of great importance for the safety of a patient using the medicinal product, because of their direct product contact. The application of GMP for pharmaceutical packaging materials should ensure that these materials meet the needs and requirements of the pharmaceutical industry.
EU GMP Guide (Chinese edition)
QbR Frequently Asked QuestionsDisclaimer: These are general answers and may not be applicable to every product. Each ANDA is reviewed individually. This document represents the Office of Generic Drugs’s (OGD’s) current thinking on these topics.Format and SubmissionHow should QbR ANDAs be submitted?OGD’s QbR was designed with the expectation that ANDA applications would beorganized according to the Common Technical Document (CTD) format, a submissionformat adopted by multiple regulatory bodies including FDA. Generic firms are strongly recommended to submit their ANDAs in the CTD format (either eCTD or paper) tofacilitate implementation of the QbR. The ANDA Checklist for completeness andacceptability of an application for filing can be found on the OGD web page:/cder/ogd/anda_checklist.pdf .What is a QOS?The Quality Overall Summary (QOS) is the part of the CTD format that provides asummary of the CMC aspects of the application. It is an important tool to make the QbR review process more efficient.How long should a QOS be?OGD believes the CTD guidance1 recommendation of 40 pages to be an appropriatecompromise between level of detail and concision. The CTD guidance recommendation does not include tables and figures.The same information should not be included in multiple locations in the QOS. Instead of repeating information, refer to the first location of the original information in the QOS by CTD section number.Should the QOS be submitted electronically?All applications should include an electronic QOS. For paper submissions, it isrecommended that both an electronic QOS and a paper QOS be included.What file format should be used for the QOS?All applications, both eCTD and paper submissions, should have an electronic QOS. The electronic QOS should be contained in one document. Do not provide separate files foreach section or question.The electronic QOS should be provided as both a pdf and a Microsoft Word file. Microsoft Word files should be readable by Word 2003.1 Guidance for Industry M4Q: The CTD – Quality (August 2001) /cder/guidance/4539Q.htmWhat fonts should be used in the QOS?Because of FDA’s internal data management systems, please use only use these TrueType fonts: Times New Roman, Arial, Courier New. Times New Roman is recommended as the main text font.Should the applicable QbR question be presented within the body of Module 2 of the relevant section, followed by sponsor's answer?Yes, include all the QbR questions without deletion in the QOS.Can the granularity of module 3 be used in module 2?Yes, the granularity can be used for section and subsection headings. However, the QOS should always be submitted as a single file.Can color be used in the QOS?Yes, but sponsors should ensure that the QOS is legible when printed in black and white.Colored text should not be used.Is the QOS format available on OGD webpage and questions therein mandatory to be followed?For an efficient review process, OGD desires all applications to be in a consistent format.See the OGD QbR questions and example QOS:/cder/ogd/QbR-Quality_Overall_Summary_Outline.doc/cder/ogd/OGD_Model_Quality_Overall_ Summary.pdf/cder/ogd/OGD_Model_QOS_IR_Product.pdfFor amendments to applications, should the documentation consist of a revision of the QOS? Would new PD reports be required?The QOS should not be updated after submission of the original ANDA. 
Any additional data (including any new PD reports) should be provided as a stand alone amendment.Responses to deficiencies should be provided in electronic format as both a pdf andMicrosoft Word file.After January 2007, what will happen to an application that does not have a QOS or contains an incomplete QOS?OGD will contact the sponsor and ask them to provide a QOS. If the sponsor provides the QOS before the application comes up for review, OGD will use the sponsor’s QOS.OGD’s QbR questions represent the current thinking about what information is essential to evaluate an ANDA. Reviewers will use deficiency letters to ask ANDA sponsors thequestions that are not answered in the sponsor’s QOS.In February 2007, 75% of ANDAs submitted contained a QOS.If a question is not applicable to a specific formulation or dosage form, should the question be deleted or unanswered?Sponsors should never delete a QbR question, but instead answer as not applicable, with a brief justification. Please answer all parts of multi-part questions.For sterile injectables, to what extent should sterility assurance be covered in QOS?The current QbR was not intended to cover data recommendations for Sterility Assurance information. In the future, other summaries will cover other disciplines.MAPP 5040.1, effective date 5/24/04, specifies location of the microbiology information in the CTD format.Where in the CTD should an applicant provide comparative dissolution data between the generic and RLD?The comparison between the final ANDA formulation and the RLD should be provided in5.3.1, this comparison should be summarized in the QOS. Comparisons with otherformulations conducted during development should be included in 3.P.2.Is it possible to submit an amendment in CTD format for a product that was already submitted in the old ANDA format?No, all amendments to an application under review should use the same format as theoriginal submission.How is a paper CTD to be paginated?“Page numbering in the CTD format should be at the document level and not at the volume or module level. (The entire submission should never be numbered consecutively by page.) In general, all documents should have page numbers. Since the page numbering is at the document level, there should only be one set of page numbers for each document.”2. For paper submission, tabs locating sections and subsections are useful.For the ANDA submitted as in paper CTD format, can we submit the bioequivalence study report electronically? Or does the Agency require paper copy only?The bioequivalence summary tables should always be provided in electronic format.Will QbR lead to longer review times?Many of the current long review times result from applications that do not completelyaddress all of the review issues and OGD must request additional information through the deficiency process. This iterative process will be reduced with the use of the QbR template.Sponsors that provide a QOS that clearly and completely addresses all the questions in the QbR should find a reduction in the overall review time.Will DMFs for the drug substance be required to be in CTD if the ANDA is in CTD format?No. 
CTD format DMFs are recommended.What should be included in 3.2.R.1.P.2, Information on Components?COA’s for drug substance, excipients and packaging components used to produce theexhibit batch.2 Submitting Marketing Applications According to the ICH/CTD Format: General Considerations/cder/guidance/4707dft.pdfHow should an ANDA sponsor respond to deficiencies?OGD requests that sponsors provide a copy of the response to deficiencies in electronic format as both a pdf file and a Microsoft Word file.QUALITY OVERALL SUMMARY CONTENT QUESTIONS2.3 Introduction to the Quality Overall SummaryWhat information should be provided in the introduction?Proprietary Name of Drug Product:Non-Proprietary Name of Drug Product:Non-Proprietary Name of Drug Substance:Company Name:Dosage Form:Strength(s):Route of Administration:Proposed Indication(s):Maximum Daily Dose:2.3.S DRUG SUBSTANCEWhat if an ANDA contains two or more active ingredients?Prepare separate 2.3.S sections of the QOS for each API. Label them 2.3.S [API 1] and2.3.S [API 2].What if an ANDA contains two or more suppliers of the same active ingredient?Provide one 2.3.S section. Information that is common between suppliers should not be repeated. Information that is not common between suppliers (e.g. different manufacturing processes) should have separate sections and be labeled accordingly (drug substance,manufacturer 1) and (drug substance, manufacturer 2).Can information in this section be provided by reference to a DMF?See individual questions for details. As a general overview:•Information to be referenced to the DMFo Drug substance structure elucidation;o Drug substance manufacturing process and controls;o Container/closure system used for packaging and storage of the drugsubstance;o Drug substance stability.•Information requested from ANDA Sponsoro Physicochemical properties;o Adequate drug substance specification and test methods including structure confirmation;o Impurity profile in drug substance (process impurity or degradant);o Limits for impurity/residual solvent limits;o Method validation/verification;o Reference standard.2.3.S.1 General InformationWhat are the nomenclature, molecular structure, molecular formula, and molecular weight?What format should be used for this information?Chemical Name:CAS #:USAN:Molecular Structure:Molecular Formula:Molecular Weight:What are the physicochemical properties including physical description, pKa, polymorphism, aqueous solubility (as function of pH), hygroscopicity, melting point, and partition coefficient?What format should be used for this information?Physical Description:pKa:Polymorphism:Solubility Characteristics:Hygroscopicity:Melting Point:Partition Coefficient:Should all of these properties be reported? Even if they are not critical?Report ALL physicochemical properties listed in the question even if they are not critical.If a property is not quantified, explain why, for example: “No pKa because there are no ionizable groups in the chemical structure” or “No melting point because compounddegrades on heating”.What solubility data should be provided?The BCS solubility classification3 of the drug substance should be determined for oral dosage forms.Report aqueous solubility as a function of pH at 37º C in tabular form. 
Provide actualvalues for the solubility and not descriptive phrases such as “slightly soluble”.3 See BCS guidance /cder/guidance/3618fnl.pdf for definitionSolvent Media and pH Solubility Form I(mg/ml) Solubility Form II(mg/ml)Should pH-solubility profiles be provided for all known polymorphic forms?No, it is essential that the pH-solubility profile be provided for the form present in the drug product. The relative solubility (at one pH) should be provided for any other more stable forms.Physicochemical information such as polymorphic form, pKa, solubility, is usually in the confidential section of DMF. Is reference to a DMF acceptable for this type of information?No, knowledge of API physicochemical properties is crucial to the successful development of a robust formulation and manufacturing process. In view of the critical nature of thisinformation, OGD does not consider simple reference to the DMF to be acceptable.The Guidance for Industry: M4Q: The CTD-Quality Questions and Answers/ Location Issues says only the polymorphic form used in the drug product should be described in S.1 and other known polymorphic forms should be described in S.3. OGD’s examples placed information about all known polymorphic forms in S.1. Where does OGD want this information?This information may be included in either S.1 or in S.3. Wherever presented, list allpolymorphic forms reported in literature and provide brief discussion (i.e., which one is the most stable form) and indicate which form is used for this product.Other polymorph information should be presented by the ANDA applicant as follows: • 2.3.S.3 Characterization: Studies performed (if any) and methods used to identify the potential polymorphic forms of the drug substance. (x-ray, DSC, and literature) • 2.3.S.4 Specification: Justification of whether a polymorph specification is needed and the proposed analytical method• 2.3.P.2.1.1 Pharmaceutical Development –Drug Substance: Studies conducted to evaluate if polymorphic form affects drug product propertiesWhy does OGD need to know the partition coefficient and other physicochemical properties?Physical and chemical properties may affect drug product development, manufacture, or performance.2.3.S.2 ManufactureWho manufactures the drug substance?How should this be answered?Provide the name, address, and responsibility of each manufacturer, including contractor, and each proposed production site or facility involved in manufacturing and testing.Include the DMF number, refer to the Letter of Authorization in the body of data, andidentify the US Agent (if applicable)How do the manufacturing processes and controls ensure consistent production of the drug substance?Can this question be answered by reference to a DMF?Yes. 
It is preferable to mention the source of the material (synthetic or natural) when both sources are available.The DMF holder’s COA for the batch used to manufacture the exhibit batches should be provided in the body of data at 3.2.S.4.4.If there is no DMF, what information should be provided?A complete description of the manufacturing process and controls used to produce the drugsubstance.2.3.S.3 CharacterizationHow was the drug substance structure elucidated and characterized?Can structure elucidation be answered by reference to a DMF?Yes.What information should be provided for chiral drug substances?When the drug substance contains one or more chiral centers, the applicant should indicate whether it is a racemate or a specific enantiomer.When the drug substance is a specific enantiomer, then tests to identify and/or quantify that enantiomer should be included. Discussion of chirality should include the potential forinterconversion between enantiomers (e.g. racemization/epimerization).How were potential impurities identified and characterized?List related compounds potentially present in the drug substance. Identify impurities bynames, structures, or RRT/HPLC. Under origin, classify impurities as process impurities and/or degradants.Structure Origin ID ChemicalName[SpecifiedImpurity]Is identification of potential impurities needed if there is a USP related substances method?Yes.Can this question be answered by reference to a DMF?The ANDA should include a list of potential impurities and their origins. The methodsused to identify and characterize these impurities can be incorporated by reference to the DMF.According to the CTD guidance, section S.3 should contain a list of potential impurities and the basis for the acceptance criteria for impurities, however in the OGD examples this information was in section S.4. Where should it go?This information may be included in either S.3 or in S.4.2.3.S.4 Control of Drug SubstanceWhat is the drug substance specification? Does it include all the critical drug substance attributes that affect the manufacturing and quality of the drug product?What format should be used for presenting the specification?Include a table of specifications. Include the results for the batch(es) of drug substance used to produce the exhibit batch(es). Identify impurities in a footnote. Test results and acceptance criteria should be provided as numerical values with proper units whenapplicable.Tests Acceptancecriteria AnalyticalprocedureTest results for Lot#AppearanceIdentificationA:B:AssayResidualSolventsSpecified ImpuritiesRC1RC2RC3Any UnspecifiedImpurityTotal Impurities[AdditionalSpecification]*RC 1: [impurity identity]RC 2: [impurity identity]RC 3: [impurity identity]What tests should be included in the drug substance specification?USP drugs must meet the USP monograph requirements, but other tests should be included when appropriate. For USP and non USP drugs, other references (EP, BP, JP, the DMF holder’s specifications, and ICH guidances) can be used to help identify appropriate tests.Only relevant tests should be included in the specification. Justify whether specific tests such as optical rotation, water content, impurities, residual solvents; solid state properties(e.g. polymorphic form, particle size distribution, etc) should be included in thespecification of drug substance or not.Does OGD accept foreign pharmacopeia tests and criteria for drug substances?There are several examples where a drug substance is covered by a monograph in EP or JP, but not in the USP. 
ANDA and DMF holders can obtain information regardingphysicochemical properties, structure of related impurities, storage conditions, analytical test methods, and reference standards from EP or JP to support their submission to OGD.Although the USP remains our official compendium, we usually accept EP when the drug substance is not in USP (However, a complete validation report for EP methods should be provided in the ANDA).For each test in the specification, is the analytical method(s) suitable for its intended use and, if necessary, validated? What is the justification for the acceptance criterion?What level of detail does OGD expect for the analytical method justifications and validations?Provide a summary of each non-USP method. This can be in a tabular or descriptive form.It should include the critical parameters for the method and system suitability criteria ifapplicable. See an example in section 2.3.P.5 of this document.For each analytical procedure, provide a page number/link to the location of validationinformation in Module 3. For quantitative non-compendial analytical methods, provide a summary table for the method validation. See an example in section 2.3.P.5 of thisdocument.Is validation needed for a USP method?No, but USP methods should be verified and an ANDA sponsor should ensure that theUSP assay method is specific (e.g. main peak can be separated from all process impurities arising from their manufacturing process and from degradation products) and the USPrelated substance method is specific (e.g. all the process impurities and degradants can be separated from each other and also separated from main peak).Is validation needed if the USP method is modified or replaced by an in-house method?Yes. Data supporting the equivalence or superiority of the in-house method should beprovided. In case of a dispute, the USP method will be considered the official method.Is reference to the DMF for drug substance analytical method validations acceptable?No. ANDA sponsors need to either provide full validation reports from the ANDA holder or reference full validation reports from the DMF holder (provided there is a copy of the method validation report in the ANDA and method verification from the ANDA holder). AppearanceIdentityAssayImpurities (Organic impurities)What format should be used for related substances?List related compounds potentially present in the drug substance. (Either here or S.3)Name Structure Origin[SpecifiedImpurity]Provide batch results and justifications for the proposed acceptance criteria. See guidance on ANDA DS impurities4 for acceptable justifications. If the DS is compendial, include the USP limits in the table. If the RLD product is used for justification/qualification, then its results should also be included. If an ICH justification is used, then the calculation of the ICH limits should be explained.To use the ICH limits, determine the Maximum Daily Dose (MDD) indicated in the label and use it to calculate the ICH Thresholds: Reporting Threshold (RT), Identification1. The amount of drug substance administered per day2. Higher reporting thresholds should be scientifically justified3. 
Lower thresholds can be appropriate if the impurity is unusually toxicSponsors can use the ICH limits to ensure the LOQ for the analytical method is equal or below the RT, establish the limit for “Any Unspecified Impurity” to equal or below the IT, and establish limits for each “Specified Identified Impurity” and each “SpecifiedUnidentified Impurity”5 to equal or below the QT.An impurity must be qualified if a limit is established above the QT. Options forqualification include reference to the specific impurity listed in a USP monograph,comparison to the RLD product, identifying the impurity as a significant metabolite of the drug substance, literature references including other compendial monographs (EP, BP, JP), or conducting a toxicity study.4/cder/guidance/6422dft.pdf5 The ANDA DS guidance states “For unidentified impurities to be listed in the drug substance specification, we recommend that you clearly state the procedure used and assumptions made in establishing the level of the impurity. It is important that unidentified specified impurities be referred to by an appropriate qualitative analytical descriptive label (e.g., unidentified A, unidentified with relative retention of 0.9)”. Q3A(R) states “When identification of an impurity is not feasible, a summary of the laboratory studies demonstrating the unsuccessful effort should be included in the application.”Name DrugSubstance(Lot #)USPLimit forDrugSubstanceRLDDrugProduct(Lot #)ProposedAcceptancecriteriaJustification[Specified Impurity,Identified][BatchResults][BatchResults][SpecifiedImpurity,Unidentified]Any UnspecifiedImpurityTotalImpuritiesInclude the column for RLD drug product only if that data is used to justify the drugsubstance limit (example a process impurity that is also found in the RLD).What is OGD’s policy on genotoxic impurities?FDA is developing a guidance for genotoxic impurities. According to the ICH Q3A lower thresholds are appropriate for impurities that are unusally toxic.If impurities levels for an approved generic drug are higher than the RLD, can the approved generic drug data be used as justification for a higher impurity specification?According to ANDA DP and DS Impurity guidances, any approved drug product can be used to qualify an impurity level. However, the guidances qualify this by later stating “This approved human drug product is generally the reference listed drug (RLD). However, you may also compare the profile to a different drug product with the same route ofadministration and similar characteristics (e.g. tablet versus capsule) if samples of thereference listed drug are unavailable or in the case of an ANDA submitted pursuant to a suitability petition.”What if there are no impurities’ tests found in the USP monograph for a USP drug substance? What should the ANDA sponsor do?Please work with your supplier (DMF Holder) to ensure that potential synthetic process impurities (e.g. isomers (if any), side reaction products), degradation impurities, metalcatalysts, and residual solvents are adequately captured by your impurities test method.There may be information available in published literature as well, regarding potentialimpurities.Can levels of an impurity found in the RLD and identified by RRT be used for qualification?Qualification of a specified unidentified impurity by means of comparative RRT, UVspectra, and mass spectrometry with the RLD may be acceptable. 
However, the ANDAsponsor should make every attempt to identify the impurity.If levels are higher than in an approved drug product then the sponsor should provide data for qualification of the safety of this impurity at this level.Can a limit from a USP monograph for “any unspecified impurity” be used to justify a limit for “any unspecified impurity” greater than the ICH Q3 identification threshold?No. Any unspecified impurity (any unknown) limit should not exceed ICH Q3A “IT”based on MDD. Non-specific compendial acceptance criteria (e.g. Any Individual Impurity is NMT 0.5%) should not be used for justification of proposed impurity acceptance criteria.However, if the USP limit is less than the ICH threshold, then the USP limit should be used. Can a limit for an identified impurity in the drug substance be qualified with data obtained from RLD drug product samples treated under the stressed conditions?No. Test various samples of marketed drug product over the span of its shelf life (ideally, near the end of shelf-life). Data generated from accelerated or stressed studies of the RLD is considered inappropriate.Impurities (Residual Solvents)Will OGD base residual solvent acceptance limits on ICH limits or process capability?The ICH guidance on residual solvents6 provides safety limits for residual solvents but also indicates that “residual solvents should be removed to the extent possible”. ANDA residual solvent limits should be within the ICH safety limits, but the review of the ANDA includes both of these considerations.OGD generally accepts the ICH limits when they are applied to the drug product.What about solvents that are not listed in Q3C?Levels should be qualified for safety.Impurities (Inorganic impurities)Polymorphic FormWhen is a specification on polymorphic form necessary?See ANDA polymorphism guidance7 for a detailed discussion.Particle SizeWhen is a drug substance particle size specification necessary?A specification should be included when the particle size is critical to either drug productperformance or manufacturing.For example, in a dry blending process, the particle size distribution of the drug substance and excipients may affect the mixing process. For a low solubility drug, the drug substance particle size may have a critical impact on the dissolution of the drug product. For a high solubility drug, particle size is often not critical to product performance.6 /cder/guidance/Q3Cfnl.pdf7/cder/guidance/6154dft.pdfWhat justification is necessary for drug substance particle size specifications?As for other API properties, the specificity and range of acceptance criteria for particle size, and the justification thereof, could vary from none to very tight limits, depending upon the criticality of this property for that drug product.Particle size specifications should be justified based on whether a change in particle sizewill affect the ability to manufacture the product or the final product performance.In general, a sponsor either should demonstrate through mechanistic understanding orempirical experiments how changes in material characteristics such as particle size affecttheir product.In the absence of pharmaceutical development studies, the particle size specificationshould represent the material used to produce the exhibit batch.When should the particle size be specified as distribution [d90,d50,d10] and when is a single point limit appropriate?When critical, a particle size should be specified by the distribution. 
There may be other situations when a single point limit can be justified by pharmaceutical development studies.
2.3.S.5 Reference Standards
How were the primary reference standards certified?
For non-compendial, in-house reference standards, what type of qualification data is recommended? Will a COA be sufficient?
A COA should be included in Module 3, along with details of the reference standard's preparation, qualification, and characterization. This should be summarized in the QOS.
In terms of the qualification data that may be requested, it is expected that these reference standards be of the highest possible purity (e.g., this may necessitate an additional recrystallization beyond those used in the normal manufacturing process of the active ingredient) and be fully characterized (e.g., the qualification report may need additional characterization information, such as proof of structure via NMR) beyond the identification tests that are typically reported in a drug substance COA. Standard laboratory practice for preparation of reference standards entails recrystallization to constant physical measurements or to literature values for the pure material.
2.3.S.6 Container Closure System
What container closure is used for packaging and storage of the drug substance?
Can this question be answered by reference to a DMF?
Yes.
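Returning to the ICH impurity thresholds discussed under 2.3.S.4 above: the Reporting, Identification and Qualification Thresholds (RT, IT, QT) are derived from the Maximum Daily Dose (MDD). The following sketch is a hypothetical illustration of that calculation only; the numeric cut-offs follow the drug-substance attachment of ICH Q3A(R2) as commonly cited and should be confirmed against the current guidance before they are used to justify a specification.

```python
# Hypothetical helper illustrating the ICH Q3A(R2) drug-substance threshold
# calculation described in 2.3.S.4; verify the cut-offs against the current
# guidance before relying on them.

def q3a_thresholds(mdd_mg_per_day: float) -> dict:
    """Return reporting (RT), identification (IT) and qualification (QT)
    thresholds, in percent of drug substance, for a given maximum daily dose."""
    mdd_g = mdd_mg_per_day / 1000.0
    if mdd_g <= 2.0:
        rt = 0.05
        # IT/QT are the lower of a percentage or a 1.0 mg/day absolute intake.
        it = min(0.10, 1.0 / mdd_mg_per_day * 100.0)
        qt = min(0.15, 1.0 / mdd_mg_per_day * 100.0)
    else:
        rt, it, qt = 0.03, 0.05, 0.05
    return {"RT%": rt, "IT%": it, "QT%": qt}

if __name__ == "__main__":
    # Example: a hypothetical 500 mg maximum daily dose.
    print(q3a_thresholds(500))   # {'RT%': 0.05, 'IT%': 0.1, 'QT%': 0.15}
```

As the FAQ notes, the LOQ of the analytical method should be at or below RT, the "any unspecified impurity" limit at or below IT, and each specified impurity limit at or below QT unless the higher level has been qualified.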
Professional Certifications
Raymond R. Curci 1810 Lakeshore Lane Tallahassee, FL 32312email: raycurci@ date: Jun 11, 2002Education:6/1981 - 5/1986, 6/1988 - 8/1989Bachelors in Computer Science with minor in Math at Florida State University, graduated 8/1989.8/1989 - 12/1997, 5/2000-12/2000Masters in Computer Science (Computer Network and System Administration Track) at Florida StateUniversity, graduated 12/2000.Masters project: FSU Computer Science Internet Teaching Lab /~curci/itl. Professional Memberships:• ACM (Association for Computing Machinery)• IEEE Computer Society (Institute of Electrical and Electronics Engineers)• USENIX (Advanced Computing Systems Professional Association)• MAA (Mathematical Association of America)• UPE (Computer Science Honor Society inducted Jan 1992)Professional Certifications:• Cisco WLAN - Cisco Wireless LAN FE/SE Certification Dec 2001• CCIE #8037 – Cisco Certified Internetwork Expert (Cisco Systems) Aug 2001• CCCS/CQS – Cisco Cable Communications Specialist (Cisco Systems) Jun 2001• MCP – Microsoft Certified Professional (#1689214) Jun 2000• CNX#1089 – Certified Network Expert (CNX Consortium) Aug 1999• CCDP – Cisco Certified Design Professional (Cisco Systems) Aug 1999• CCDA – Cisco Certified Design Associate (Cisco Systems) Aug 1999• CCNP – Cisco Certified Network Professional (Cisco Systems) Jul 1999• CCNA – Cisco Certified Network Associate (Cisco Systems) Jul 1999Teaching Experience:• Florida Public Library Technology Workshop (seminar), May 2002“Is Wireless in Your Future? LANs, WANs, and Digital Canopies”“Internet Filtering Software – The Technical Side”Skills:• Over 21 years of computer network, data communications, and computer hardware/software engineering experience.• Extensive routing and switching multiprotocol computer network WAN/MAN/LAN design and hands-on implementation experience.• Extensive routing and switching troubleshooting experience with Cisco and other equipment.• Extensive UNIX system administration, systems programming, and scripting experience.• Extensive computer software/firmware development experience in UNIX, PC, and mainframe environments.• Project management experience.• Project lead experience.• Supervisory experience.• Computer hardware design experience.• Network service provider presales experience.• Excellent interpersonal, written communication and presentation skills.Experience:1/2001- PresentSr. Advanced Network Systems Engineer, Hayes Computer Systems, Tallahassee FL.• Responsible for Internet and core backbone networking.• Maintain, upgrade, and troubleshoot core Cisco routers, Catalyst switches, ATM switches.• Design and deploy new CMTS/cable modem/gigabit switching infrastructure in Tallahassee for Comcast Cable.• Design and deploy wireless LAN and point-to-point networks for various customers (Digital Canopy, FL Capitol Building, FL House of Rep., Supreme Court, Leon County Courthouse, FL EmergencyOperations Center, Bay County Courts, etc.)• Architect new network designs.2/2000- 1/2001Sr. 
Network Engineer (Specialist – Computer System Control), Academic Computing and Network Services, Florida State University.• Responsible for Internet and core backbone networking for Florida State University.• Maintain, upgrade, and troubleshoot core Cisco routers, Catalyst switches, ATM switches, 3COM routers/switches, Cabletron routers/switches, firewalls, web cache engines, etc.3/2000- 12/2000Network Engineer Consultant, Capital City Bank Group (Nasdaq:CCBG), Tallahassee FL.• Responsible for engineering a 50+ Cisco router multiprotocol frame-relay network.• Responsible for configuring Linux tools to monitor the network status, collect utilization statistics, archive router configurations, collect system error logs, etc.8/1999-1/2000Sr. Network Engineer/ANS, Advanced Network Systems, Sprint (Nasdaq:FON) Tallahassee FL.• Overall responsibility for network architecture design for the Metropolitan Area Network (MAN) and Internet services in Northern Florida.• Responsible for daily operations and additions to Tallahassee MAN and Internet services including 500+ routers.• Responsible for design and deployment of statewide backbone data networks employing OC12/OC3/T3/T1 ATM, FDDI, Ethernet, Frame-Relay, ISDN, TCP/IP and other technologies. • Responsible for assisting network engineers, NOC personnel, and field technicians as fourth level technical support.• Responsible for managing major projects with $3M capital and $1M expense annual budget.• Assist BMO sales organization with technical issues on sales calls, demonstrations, solving high visibility customer problems, responding to requests for proposals, etc.• Prototype multivendor networks in lab for interoperability testing, proof of concept testing, bug troubleshooting, etc.• Responsible for custom enterprise network design and troubleshooting on strategic customer accounts. 10/1997- 8/1999Sr. Engineering Manager, Advanced Network Systems, Sprint Southern Operations, Tallahassee FL.• Responsible for Metropolitan Area Networks in Tallahassee, Ft. Walton Beach, Crestview, Panama City, Pensacola, and Marianna.• Responsible for deploying OC12/OC3/T3/T1ATM and TCP/IP backbone computer networks.• Responsible for managing our custom commercial World Wide Web server offering.• Responsible for writing business cases for potential new projects and presenting them to management. • Responsible for third level technical support.• Responsible for some UNIX system management on SunOS, Solaris, and IRIX platforms for network services such as DNS, NEWS, NTP3, TFTP, etc.• Supervised a team of Data Systems Technicians.3/1996 - 10/1997Sr. Project Manager, Advanced Network Systems, Sprint-Florida, Tallahassee FL.• Responsible for Metropolitan Area Networks in Tallahassee, Ft. Walton Beach, and Ocala.• Responsible for deploying T3/T1ATM and TCP/IP backbone computer network in 17 Florida cities. • Responsible for managing our custom commercial World Wide Web server offering.• Responsible for writing business cases for potential new projects and presenting them to management. • Responsible for 3rd level technical support.• Responsible for some UNIX system management on SunOS, Solaris, and IRIX platforms for network services such as DNS, NEWS, NTP3, TFTP, etc.• Supervised a team of Data Systems Technicians.Project Manager / Project Engineer, Advanced Network Systems, Sprint/Centel-Florida, Tallahassee FL. 
• Primary responsibility for managing a project involving the deployment of a wide area TCP/IP network to provide dialup Internet service throughout Florida.• Responsible for supervising the system management of commercial Internet e-mail and World Wide Web servers.• Supervision of computer programmer consultants involved in custom World Wide Web development. • Involved in marketing activities with respect to providing statewide Internet access to large professional organizations.11/1993 - 12/1995Research Faculty / Head of the Computer and Network Facility, National High Magnetic Field Laboratory, Tallahassee FL.• Responsible for training and management of the NHMFL Computer Support Group, a group of nine computer support specialists within the Operations department and setting lab policy relating tocomputers and networks.• Primary responsibility for computer security including the operation of a network firewall, system security logs and network security.• Responsible for assessing needs and ordering all computer and network related hardware and software. • Primary responsibility for all 17 Local Area Networks, WAN connections, and dialup PPP/SLIP connections. (FDDI, Ethernet 10baseT, Ethernet 10base2, Ethernet 10baseF and Localtalk hardware using the TCP/IP, AppleTalk, and IPX protocols).• Responsibility for all voice and data network wiring and pathways (including horizontal and vertical copper and fiber cable specification, supervision of installation, documentation, and maintenance of 12 wiring rooms in three buildings).• Primary responsibility for establishing a UNIX-based World Wide Web server and top-level HTML pages including CGI scripts clickable image maps, etc.• Responsible for maintenance of several UNIX servers including Sun SPARC, Dec Alpha, Silicon Graphics, and IBM RS6000 including DNS, HTTPD, POP3, SMTP, SNMP, NTP, PCNFS, NFS, LPD.• Authored several UNIX system programs and scripts for automating creation of user accounts, mailing lists, DNS consistency checks, process status, WWW mirroring, printer server management, disk quota checking, automated random password generation, etc.• Member of the FSU Computer Network Committee.11/1994 - 1/1995Independent Consultant, FSU Center for Education Technology, Tallahassee FL.• Analysis of Internet computer hacker break-ins on Sun UNIX server • Secure setup of new Solaris departmental server to replace compromised system.• Ongoing system management functions for departmental server.Senior Computer Engineer, National High Magnetic Field Laboratory, Tallahassee FL.• Responsible for all computers, all data communications, and all data acquisition systems within the facility from inception.• Designed and supervised the installation of a facility wide 10baseT and fiber optic computer network. • Evaluated, installed, and maintain network communications software and hardware including TCP/IP routers, Localtalk protocol converters, 10baseT concentrators, dialup modems, and terminal servers. 
• Evaluated, installed, and maintain SNMP Network Management System.• Evaluated, installed, and maintain UNIX-based Sun fileserver, mailserver, nameserver, printer server, etc.• Install, maintain, and manage IBM RS6000 compute server/workstations.• Evaluated IBMPC and MACINTOSH personal computer electronic mail software, network software, ethernet adapters, etc., and trained assistants to install them.• Installed and maintained a TAMU Drawbridge firewall system to protect the lab from the FSU and Internet communities.12/1991 - 4/1992Independent Consultant, National High Magnetic Field Laboratory, Tallahassee FL.• Designed and implemented custom "C" Language software for an IBMPC-based IEEE-488 data acquisition system to collect, graph, and store real-time pulsed-magnet experimental data.6/1990 - 5/1992Research Faculty / Assistant in Research, Program in Structural Biology / Molecular Biophysics, Florida State University.• Designed, installed, and maintained departmental local area computer network.• Managed SGI IRIS, Sun, DEC VAX, and NeXT computer workstations.• Interfaced computer systems to control and/or collect data from various lab instruments including muscle tissue dynamics apparatus, spectrophotometer, MSP, infrared VCR controller, animal enclosure lighting controller, photon counter, and video frame grabber.• Installed software, upgraded hardware, debugged problems, and train personnel on IBM PC and Macintosh microcomputers.• Attended FSU Networking Committee and Building Wiring Subcommittee meetings to help with campus-wide long range planning.7/1988 - 12/1993Independent Software / Engineering Consultant, Johnson & Johnson Vistakon Division (NYSE:JNJ), Jacksonville FL.• Design of custom instrumentation for R&D group including humidity controller, radiometer controller, ultrasonic contact lens thickness gauge, molding machine monitor, contact lens eye tracker, contact lens automated inspection system.• Design and installation of TCP/IP LAN and departmental Sun 4/390 server, SPARCstations, NIC cards, PC-NFS suite, etc., for R&D group.• Design and implementation of Pilot Line custom multi-user XENIX contact lens inspection and database systems.6/1988 - 12/1988Independent Consultant, International Terminals and Computers, Inc., Tallahassee FL.• Wrote firmware in z80 assembly language for ITC's LC-1 intelligent Unisys compatible computer terminal.• Debugged and customized an 8086 assembly language PC-MOS device driver to convert LC-1 terminals into MS-DOS workstations for the Bank of Mexico.3/1988 - 6/1990Research Faculty / Assistant in Research - Computer Network Specialist, Supercomputer Computations Research Institute, Florida State University.• Maintained TCP/IP and DECnet computer networks.• Installed and configured network software on a variety of minicomputer and microcomputers.• Diagnosed, isolated, and corrected network failures.• Answered questions concerning E-mail, file transfer between heterogeneous systems, and other network related questions.• Provided system management for a variety of UNIX (SunOS,Ultrix,IRIX,AIX) and VAX/VMS systems.• Handled problems and answered technical questions concerning the IBM PC family of computers. • Sat on the SCRI Local Systems Committee to help make decisions on computer equipment purchases. 
5/1985 - 3/1988Senior Systems Programmer, Techna Vision Inc., San Diego CA.• Worked as a member of a team of 25 computer programmers, with primary responsibility for all systems software design.• Wrote firmware, device drivers, and other low level software for use in a variety of medical instruments in "C", FORTRAN, 8086, and 8051 languages.• Designed a master / slave communication system for use between the instrument control computer and embedded microprocessors.• Helped with debugging and design of custom computer hardware.• Wrote geometric / optical calibration, photometric calibration, and Q/A test software.• Visited beta test sites, trade shows, and sales training sessions to install prototype instruments and answer technical questions.• Managed small Microport SystemV UNIX UUCP e-mail and BBS server.1/1985 - 5/1985Contract Programmer, Psychology Department, Florida State University.• Designed and implemented a software system to digitize CAT scan slides, store and retrieve images from disk, and compute statistics.• Wrote software to display images in pseudocolor on both a color graphics terminal and high-resolution color ink jet printer.9/1981 - 5/1985Senior Programmer, Muench Center for Color Computer Graphics, Florida State University.• Acted as an assistant to the director.• Led a staff of five on various color computer graphics software projects.• Wrote several systems programs to interface heterogeneous computer systems out of necessity.• Configured a 40-seat student computer lab with color graphics terminals, CP/M microcomputers, X.25 PAD for mainframe access, and shared color printers.• Instructed gifted summer high school students in the "C" language and color computer graphics.• Helped install and customize a computer network of Sun UNIX workstations (Sun2/50) and fileservers (Sun2/170) for the Computer Science Department (Su82).• Ported many microcomputer color graphics applications to the sun workstation platform.6/1981 - 9/1981Student Assistant, Mesoscale Air-Sea Interaction Group (COAPS), Florida State University.• Worked on NOAA project to study El Nino phenomena.• Drew wind stress vector maps of Pacific Ocean from Navy data.• Interpolated / digitized data from maps.• Entered data into CDC Cyber mainframe for analysis and color graphic movie production. 
Familiarity with the following systems:Computer Hardware• CISCO TCP/IP multiprotocol network routers(16xx,17xx,25xx,26xx,3550,36xx,7000,72xx,75xx,uBR924,uBR7246VXR,5500+RSM,6000+MSFC).• CacheFlow 5070 web cache engine, Cisco CE2050, CE560, CE507 Cache Engine/Content Engine.• Cisco IP phones, call manager, and voice mail systems.• Redback SMS500 DSL aggregation devices.• Extreme Summit 48 switch/router.• XYLAN/Alcatel OMNI-Switch routers / ATM Switches.• Network Systems/StorageTek DXE and Borderguard Routers.• Network Associates/Network General Sniffers (Ethernet, Fast Ethernet, T1, FDDI, Token Ring, ATM, and remote sniffer platforms.)• CISCO/US Robotics TCP/IP terminal servers (AS5100, AS5200, AS5300, TACACS).• CISCO PIX Firewall / NAT, Cisco IOS Firewall Feature Set.• CISCO Catalyst 6000,5000, 5500, 3550, 3500XL, 2950, 2948G, 2900XL, 2800, 1900, 3920 ethernet and token ring switches.• CISCO LRE 2900, Tut Systems 5000 VDSL systems.• CISCO LightSteam 1010 ATM switches• 3COM NBX 100 phone systems• Alantec PowerHUB TCPIP/IPX/Appletalk multiprotocol network routers.• Cabletron SSR-8000 routers.• 3COM 9000 routers.• Telebit NetBlazer TCP/IP network routers.• Cayman GATORBOX AppleTalk network routers.• TAMU TCP/IP network firewall.• Assorted modems and other asynchronous communications equipment.• Hewlett-Packard and IBM family of E-size sheet feed and roll pen and ink jet plotters (i.e. HP DesignJet 755CM).• IBM RISCsystem 6000 model 320H, 340, 375, 560, 580 (RS6000 Power and Power2 architectures). • Silicon Graphics IRIS 4D/35, Indy, Indigo2, Power Challenge.• Sun Microsystems Sun-2, Sun-3, Sun-4 (SPARC).• Apple Workgroup Server 95, 9150/120.• DEC Alpha UNIX systems.• DEC VAX 8700, 11/780, µVAX, 3500, VS3100, VS3200, etc.• DEC DECstation 3100, 5000, etc.• NEXT NeXTstation Color Workstation, Cube, etc.• IBM PC/XT/AT/386/486/PS2/Pentium and compatible family of microcomputers.• Macintosh 68K family of microcomputers.• Macintosh PowerPC 6100, 7100, 8100, 9150 RISC-based systems.• IBM SP2, Cray YMP, Thinking Machines CM-2, ETA Systems ETA10, CDC Cyber 205 supercomputers.• Control Data Cyber 730, 760, and 835 mainframes.• Datavue 3000, Xerox 820, Intecolor 8064, and other CP/M microcomputers.• Huntsville Microsystems 8088, z80, and 8051 family of in-circuit emulators.• Teltone ISDN BRI simulators.• Western Multiplex Tsunami 5.8GHz wireless bridges• 802.11 Wireless Access Points and Bridges (Cisco Aironet, Enterasys, Linksys, Lucent/Orinoco) • Micromint Intel 8051 family microcontroller single board computers.Operating Systems / Environments• UNIX including internals (Solaris8, SunOS 4.1.4, SGI IRIX, DEC OSF/1, BSD, Microport System V, IBM AIX, SCO Xenix, DEC Ultrix, Minix, Mach, Linux Redhat, Linux Slackware, FreeBSD,OpenBSD, NetBSD, Apple A/UX, POSIX, LSI-11 Venix, DEC Eunice).• MIT X Window System, DECwindows, Sun Open Windows, IBM AIXwindows, MacX, Netmanage Chameleon UNIX Link, GNOME, KDE.• Microsoft Win95, Win98, NT 4.0, Win2K, WinMe, WinXP (client and server).• Macintosh System 7.5.• DEC VAX/VMS (VMS v4.7, v5.2, DECwindows, LAVc).• Microsoft MS-DOS, WINDOWS, WINDOWS for Workgroups.• SNMP Management Systems (Seagate Netlabs, HP OpenView, SunNet Manager)• IBM OS/2 v2.1, OS/2 WARP.• Digital Research FLEX/OS, CP/M, MP/M, CP/M-86, CP/NET.• Software Link PC-MOS (multiuser DOS emulator).• Control Data NOS, PLATO (Programmed Logic for Automated Teaching Operations).• Intelligent Systems FCS.• Cisco IOS, Cisco Catalyst XDI, Redback AOS.Protocols• TCP/IP (DNS, SNMP, SMTP, POP3, NTP3 , HTTP, FTP, BOOTP, 
SLIP,NFS,DHCP,NAT,IPSEC,GRE,SYSLOG,NTP)• Cisco (CBAC, NBAR, CAR, CB-WFQ, GTS, GRE, NAT, HSRP, Modular QoS)• Multicast (PIM, IGMP, CGMP)• DOCSIS (Data Over Cable Systems Interface Specifications)• PPP (PAP, CHAP, MS-CHAP)• IP Routing (OSPF, EIGRP, BGP4, RIP1, RIP2, IGRP, IS-IS)• IPX (RIP, NSLP)• ISDN Q921, Q931• MPLS/VPNs, Tag Switching• IBM SNA/SDLC/DLSW+/BISYNC/SRB/SRTB• AppleTalk Phase2 (EtherTalk, LocalTalk, ARA)• DECNET, LAT, LAVc.• ISL, 802.1Q, 802.10/SDE Trunking• 802.11b Wireless• IEEE Std 802.1D-1998 Spanning Tree, BPDUs, etc.Languages• HTML 2.0, 3.0, CGI• "C" (GCC, Microsoft C, Turbo C, Borland C++, Aztec, K&R, ANSI, SGI Power C).• FORTRAN (FORTRAN-V, FORTRAN-IV, VAX FORTRAN, F77, Microsoft FORTRAN).• Pascal (Turbo Pascal, CDC Pascal, Sun UNIX Pascal).• Assembly Language (8086/80286/80386/80486, 8080/z80, MCS-51, COMPASS).• BASIC (Microsoft BASIC, Quick BASIC).• CLIPS (NASA Forward Chaining Expert System).• M5 (Backchaining Expert System Shell).• TUTOR (CDC PLATO SYSTEM Language).• COBOL (CDC Cyber Version).• DCLs (Bourne Shell, C-Shell, Korn Shell, VAX DCL).• PERL, TCL, EXPECT• Some exposure to Oracle Forms, Reports, SQL*NET.• Some exposure to Novell Netware 386.• Some exposure to Banyan VINES system management.Application Software• Word Processors (Microsoft Word, WordPerfect, WordStar, TROFF, TeX)• Spreadsheets (Microsoft Excel, Lotus 1-2-3, Twin, Borland Quattro Pro)• WWW (Adobe PageMill, Adobe SiteMill)• CAD (Autodesk AutoCAD, Generic CADD, Turbo CAD, Claris CAD)• Drawing (Visio Pro, Adobe Illustrator, Claris Draw, MacDraw Pro, Inspiration, Corel Draw) • Graphics/Map (MapLynx, Route66, Adobe Photoshop, Corel Trace, Envision-It)• E-mail (Eudora, Elm, BSD Mail, Banyan Mailman, Banyan BeyondMail,MS Outlook)• Presentation (Microsoft PowerPoint, Lotus Freelance)• Project Planning (Microsoft Project, Claris Project)• Database (FileMaker Pro, Dbase III+, Oracle, Microsoft Access)References:• Available upon request.。
FDA Quality by Design Example for Generic Modified Release Drug Products
Past/Present Paradigm
QbD MR Example
QTPP: Guiding Quality Surrogates Used in the Development of the ANDA Formulation and Process Equivalent to the RLD
DESIGN Formulation and Process
IDENTIFY Critical Material Attributes and Critical Process Parameters
CONTROL Materials and Process
TARGET
DESIGN
IMPLEMENTATION
/sites/default/files/DraftExampleQbDforMRTablet%20April%2026.pdf
2. Vetted Extensively within the Agency. Three Workshops with the US Generic Pharmaceutical Association (2011)
3. Intended to illustrate the types of development studies ANDA applicants may use as they implement QbD for these complex products, and to provide a concrete illustration of the QbD principles from ICH Q8(R2)
[Slide graphic: Design vs. Testing]
What Do We Really Mean by QbD? What Are Regulators' Expectations for QbD?
Guideline for Bioequivalence Studies (English version)
Technique Guideline for Human Bioavailability and BioequivalenceStudies on Chemical Drug ProductsContents(Ⅰ) Establishment and Validation for Biological Sample Analysis Methods (2)1. Common Analysis Methods (2)2. Method Validation (2)2.1 Specificity (2)2.2 Calibration Curve and Quantitative Scale (3)2.3 Lower Limit of Quantitation (LLOQ) (3)2.4 Precision and Accuracy (4)2.5 Sample Stability (4)2.6 Percent recovery of Extraction (4)2.7 Method Validation with microbiology and immunology (4)3. Methodology Quality Control (5)(Ⅱ) Design and Conduct of Studies (5)1. Cross-over Design (5)2. Selection of Subjects (6)2.1 Inclusion Criteria of Subjects: (6)2.2 Cases of Subjects (7)2.3 Division into Groups of the Subjects (7)3. Test and Reference Product, T and R (8)4. Sampling (8)(Ⅲ) Result Evaluation (9)(Ⅳ) Submission of the Contents of Clinical Study Reports (9)Technique Guideline for Human Bioavailability and BioequivalenceStudies on Chemical Drug ProductsSpecific Requirements for BA and BE Studies(Ⅰ) Establishment and Validation for Biological Sample Analysis MethodsBiological samples generally come from the whole blood, serum, plasma, urine or other tissues. These samples have the characteristics such as little quantity for sampling, low drug concentration, much interference from endogenous substances, and great discrepancies between individuals. Therefore, according to the structure, biological medium and prospective concentration scale of the analytes, it is necessary to establish the proper quantitative assay methods for biological samples and to validate such methods.1. Common Analysis MethodsCommonly used analysis methods at present are as follows: (1) Chromatography: Gas Chromatography(GS), High Performance Liquid Chromatography (HPLC), Chromatography-mass Spectrometry (LC-MS, LC-MS-MS, GC-MS, GC-MS-MS), and so on. All the methods above can be used in detecting most of drugs; (2) Immunology methods: radiate immune analysis, enzyme immune analysis, fluorescent immune analysis and so on, all these can detect protein and polypeptide; (3) Microbiological methods: used in detecting antibiotic drug.Feasible and sensitive methods should be selected for biologic sample analysis as far as possible.2. Method ValidationEstablishment of reliable and reproducible quantitative assay methods is one of the keys to bioequivalence study. In order to ensure the method reliable, it is necessary to validate the method entirely and the following aspects should be generally inspected:2.1 SpecificityIt is the ability that the analysis method has to detect the analytes exactly and exclusively, when interference ingredient exists in the sample. Evidences should be provided that the analytes are the primary forms or specific active metabolites of the test drugs. Endogenous instances, the relevant metabolites and degradation products in biologic samples should not interfere with the detection of samples. If there are several analytes, each should be ensured not to be interfered, and the optimal detecting conditions of the analysis method should be maintained. As for chromatography, at least 6 samples from different subjects, which include chromatogram of blank biological samples, chromatogram of blank biologic samples added control substance (concentration labeled) and chromatogram of biologic samples after the administration should beexamined to reflect the specificity of the analytical procedure. 
As for mass spectra (LC-MS andLC-MS-MS) based on soft ionization, the medium effect such as ion suppression should be considered during analytic process.2.2 Calibration Curve and Quantitative ScaleCalibration curve reflects the relationship between the analyte concentration and the equipment response value and it is usually evaluated by the regression equation obtained from regression analysis (such as the weighted least squares method). The linear equation and correlation coefficient of the calibration curve should be provided to illustrate the degree of their linear correlation. The concentration scale of calibration curves is the quantitative scale. The examined results of concentration in the quantitative scale should reach the required precision and accuracy in the experiment.Dispensing calibration samples should use the same biological medium as that for analyte, and the respective calibration curve should be prepared for different biological samples. The number of calibration concentration points for establishing calibration curve lies on the possible concentration scale of the analyte and on the properties of relationship of analyte/response value. At least 6 concentration points should be used to establish calibration curve, more concentration points are needed as for non-linear correlation. The quantitative scale should cover the whole concentration scale of biological samples and should not use extrapolation out of the quantitative scale to calculate concentrations of the analyte. Calibration curve establishment should be accompanied with blank biologic samples. But this point is only for evaluating interference and not used for calculating. When the warp* between the measured value and the labeled value of each concentration point on the calibration curve is within the acceptable scale, the curve is determined to be eligible. The acceptable scale is usually prescribed that the warp of minimum concentration point is within ±20% while others within ±15%. Only the eligible calibration curve can be carried out for the quantitative calculation of clinical samples. When linear scale is somewhat broad, the weighted method is recommended to calculate the calibration curve in order to obtain a more exact value for low concentration points. ( *: warp=[(measured value - labeled value)/labeled value]×100%)2.3 Lower Limit of Quantitation (LLOQ)Lower limit of quntitation is the lowest concentration point on the calibration curve, indicating the lowest drug concentration in the tested sample, which meets the requirements of accuracy and precision. LLOQ should be able to detect drug concentrations of samples in 3~5 eliminationhalf-life or detect the drug concentration which is 1/10~/20 of the C max. The accuracy of the detection should be within 80~120% of the real concentration and its RSD should be less than 20%. The conclusions should be validated by the results from at least 5 standard samples.2.4 Precision and AccuracyPrecision is, under the specific analysis conditions, the dispersive degree of a series of the detection data from the samples with the same concentration and in the same medium. Usually, the RSD from inter- or intra- batches of the quality control samples is applied to examine the precision of the method. Generally speaking, the RSD should be less than 15% and that around LLOQ should be less than 20%. 
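The calibration-curve acceptance rule in section 2.2 above (a weighted least-squares fit, with the back-calculated "warp" of each point within ±15%, or ±20% at the lowest concentration) is easy to express in code. The sketch below is illustrative only: the 1/x² weighting and the example calibrator data are assumptions for the example, not requirements of the guideline.

```python
import numpy as np

def fit_weighted_calibration(conc, response, weight_power=2):
    """Weighted least-squares fit of response = slope*conc + intercept.
    weight_power=2 gives the common 1/x^2 weighting (an assumption here,
    not something the guideline mandates)."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    w = 1.0 / conc**weight_power
    # np.polyfit weights the unsquared residuals, so pass sqrt(w).
    slope, intercept = np.polyfit(conc, response, deg=1, w=np.sqrt(w))
    return slope, intercept

def warp_percent(measured, nominal):
    """warp = (measured value - labeled value) / labeled value * 100%, per section 2.2."""
    return (measured - nominal) / nominal * 100.0

# Hypothetical calibrator data (ng/mL vs. instrument response), >= 6 points.
nominal = [1, 2, 5, 10, 50, 100]
resp    = [0.9, 2.1, 5.2, 9.8, 51.0, 99.0]
slope, intercept = fit_weighted_calibration(nominal, resp)
back_calc = (np.array(resp) - intercept) / slope
for c, b in zip(nominal, back_calc):
    limit = 20.0 if c == min(nominal) else 15.0   # ±20% at the LLOQ point, ±15% elsewhere
    ok = abs(warp_percent(b, c)) <= limit
    print(f"{c:>5} ng/mL  back-calculated {b:6.2f}  warp {warp_percent(b, c):6.1f}%  pass={ok}")
```

The same warp function applies to the batch quality-control check described later in section 3 (warp under 15%, under 20% at the low concentration, with at most 1/3 of the QC results allowed to exceed the limit).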
Accuracy is the contiguous degree between the tested and the real concentrations of the biological samples (namely, the warp between the tested and the real concentrations of the quality-controlled samples). The accuracy can be obtained by repeatedly detecting the analysis samples of known concentration which should be within 85~115% and which around LLOQ should be within 80~120%.Generally, 3 quality-control samples with high, middle and low concentrations are selected for validating the precision and accuracy of the method. The low concentration is chosen within three times of LLOQ, the high one is close to the upper limit of the calibration curve, and the middle concentration is within the low and the high ones. When the precision of the intra-batches is detected, each concentration should be prepared and detected at least 5 samples. In order to obtain the precision of inter-batches, at least 3 qualified analytical batches, 45 samples should be consecutively prepared and detected in different days.2.5 Sample StabilityAccording to specific instances, as for biological samples containing drugs, their stabilities should be examined under different conditions such as the room temperature, freezing, thaw and at different preservation time, in order to ensure the suitable store conditions and preservation times. Another thing that should be paid attention to is that the stabilities of the stock solution and the analyte in the solution after being treated with, should also be examined to ensure the accuracy and reproducibility of the test results.2.6 Percent recovery of ExtractionThe recovery of extraction is the ratio between the responsive value of the analytes recovered from the biological samples and that of the standard, which has the same meaning as the ratio of the analytes extracted from the biologic samples to be analyzed. The recovery of extraction of the 3 concentrations at high, middle and low should be examined and their results should be precise and reproduceable.2.7 Method Validation with microbiology and immunologyThe analysis method validation above mainly aims at chromatography, with many parameters and principles also applicable for microbiological and immunological analysis. However, some special aspects should be considered in the method validation. The calibration curve of the microbiological and immunological analysis is non-linear essentially, so more concentration pointsshould be used to construct the calibration curve than the chemical analysis. The accuracy of the results is the key factor and if repetitive detection can improve the accuracy, the same procedures should be taken in the method validation and the unknown sample detection.3. Methodology Quality ControlThe unknown samples are detected only after the method validation for analysis of biological samples has been completed. The quality control should be carried out during the concentration detection of the biological samples in order to ensure the reliability of the method in the practical application. It is recommended to assess the method by preparing quality-control samples of different concentrations by isolated individuals.Each unknown sample is usually detected for only one time and redetected if necessary. In the bioequivalence experiments, biological samples from the same individual had better to be detected in the same batch. 
2.5 Sample Stability
Depending on the circumstances, the stability of the drug in the biological samples should be examined under different conditions, such as room temperature, freezing and freeze-thaw cycles, and for different storage times, in order to define suitable storage conditions and storage periods. The stability of the stock solution and of the analyte in the processed sample solution should also be examined, to ensure the accuracy and reproducibility of the results.

2.6 Extraction Recovery
Extraction recovery is the ratio of the response of the analyte recovered from the biological sample to the response of the standard, which corresponds to the fraction of the analyte extracted from the biological sample. Extraction recovery should be examined at the low, medium and high concentrations, and the results should be precise and reproducible.

2.7 Method Validation for Microbiological and Immunological Assays
The validation requirements above are aimed mainly at chromatographic methods, although many of the parameters and principles also apply to microbiological and immunological assays. Some special aspects must nevertheless be considered. The calibration curve of a microbiological or immunological assay is inherently non-linear, so more concentration points are needed than for a chemical assay. The accuracy of the results is the key factor, and if repeated determination improves accuracy, the same procedure should be followed both in method validation and in the analysis of unknown samples.

3. Methodological Quality Control
Unknown samples are analyzed only after the method validation has been completed. Quality control should be carried out during the analysis of the biological samples to ensure the reliability of the method in routine use. It is recommended that quality-control samples at different concentrations be prepared by a separate person in order to assess the method. Each unknown sample is usually analyzed once and re-analyzed when necessary. In bioequivalence studies, biological samples from the same subject should preferably be analyzed in the same batch.

A new calibration curve should be established for each analytical batch of biological samples, and quality-control samples at low, medium and high concentrations should be analyzed in the same batch. There should be at least two samples of each concentration, distributed evenly through the analytical sequence of the unknown samples. When an analytical batch contains a large number of unknown samples, the number of quality-control samples at each concentration should be increased so that the quality-control samples make up more than 5% of the total number of samples. The deviation of the quality-control results should normally be less than 15% (less than 20% at the low concentration), and at most one third of the quality-control results, spread across the different concentrations, may exceed these limits. If the quality-control results do not meet these requirements, the results for all samples in that analytical batch should be rejected.

Samples with concentrations above the upper limit of quantitation should be diluted with blank matrix and re-analyzed. For samples with concentrations below the lower limit of quantitation, in the pharmacokinetic analysis those drawn before Cmax is reached should be reported as zero and those drawn after Cmax as ND (not detectable), in order to limit the effect of zero values on the calculation of AUC.
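A minimal sketch of the batch acceptance rule for quality-control samples described above; the deviation values and the simple "no more than one third may fail" check are illustrative assumptions rather than a complete statement of the rule.

```python
import numpy as np

def qc_batch_acceptable(deviations_pct, is_low_conc):
    """Accept an analytical batch from its QC deviations (%):
    each deviation must be < 15% (< 20% for the low concentration),
    and at most 1/3 of all QC results may exceed their limit."""
    dev = np.abs(np.asarray(deviations_pct, float))
    limit = np.where(np.asarray(is_low_conc), 20.0, 15.0)
    return (dev > limit).sum() <= len(dev) / 3.0

# Hypothetical batch: duplicate QC samples at low, mid and high concentration.
devs = [18.0, 9.5, -4.0, 12.1, 22.0, 3.3]          # % deviation from nominal
low = [True, True, False, False, False, False]     # low-concentration flags
print(qc_batch_acceptable(devs, low))
```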
(Ⅱ) Design and Conduct of Studies

1. Crossover Design
The crossover design is currently the most widely used design in bioequivalence studies. For drug absorption and clearance there is evident variation between individuals, and the between-subject coefficient of variation is far greater than the within-subject coefficient of variation. Bioequivalence studies are therefore generally required to follow a self-controlled crossover design. Subjects are randomly divided into groups and treated in sequence: subjects in one group receive the test product first and then the reference product, while subjects in the other group receive the reference product first and then the test product. A sufficiently long interval, called the wash-out period, must separate the treatment periods. In this way every subject is treated two or more times and serves as his own control, so that the effect of the products on absorption can be distinguished from other factors, and the influence of period effects and of between-subject differences on the results is removed.

A two-sequence or a three-sequence crossover design is chosen according to the number of test products. If two products are to be compared, a two-treatment, two-period, two-sequence crossover design is preferable. When three products (two test products and one reference product) are included, a three-formulation, three-period design based on a pair of 3×3 Latin squares is suitable, again with a sufficiently long wash-out period between periods.

The wash-out period is intended to eliminate carry-over between the two products, so that treatment in one period does not affect the next; it is generally at least seven elimination half-lives. When the half-life of a drug or of its active metabolite is very long, a crossover design is not suitable; a parallel design is used instead, with a correspondingly larger sample size. For highly variable drugs, besides increasing the number of subjects, a replicate crossover design may be used to examine the within-subject difference that may exist when the same formulation is given twice.

2. Selection of Subjects

2.1 Inclusion Criteria
Differences between the subjects should be minimized so that differences between the products can be detected. The inclusion and exclusion criteria should be stated in the study protocol.

Healthy male subjects are generally recruited; for drugs with a special intended use, suitable subjects are recruited according to the specific situation. If healthy female subjects are recruited, the possibility of pregnancy must be excluded. If the drug under study has known adverse effects that could harm healthy subjects, patients may be enrolled instead.

Age: generally 18-40 years; the ages of the subjects in one study should not differ by more than 10 years.

Body weight: not less than 50 kg for normal subjects. The Body Mass Index (BMI), equal to body weight (kg) divided by the square of body height (m²), should generally be within the normal range. Because all subjects in one study receive the same dose, their body weights should not differ greatly.

Subjects should undergo a complete physical examination and be confirmed healthy, with no history of cardiac, renal, gastrointestinal, nervous-system, psychiatric or metabolic disease. Blood pressure, heart rate, electrocardiogram and respiratory rate should be normal, and laboratory tests should show normal hepatic, renal and haematological function. These examinations are necessary to prevent disease from interfering with the metabolism of the drug in vivo. Depending on the class and safety profile of the drug, additional examinations may be required before, during and after the study, such as blood glucose monitoring in a trial of a hypoglycaemic agent.

To avoid interference from other drugs, no other medication may be taken from two weeks before the study until its end. Cigarettes, alcohol, caffeine-containing beverages and fruit juices that may affect drug metabolism are also forbidden during the study. Subjects should preferably not smoke or drink; any possible effect of a history of smoking should be addressed in the discussion of the results. Where known genetic polymorphism affects drug metabolism, the safety implications for slow metabolizers should be considered.
2.2 Number of Subjects
The number of subjects should satisfy the statistical requirements. With current statistical methods, 18-24 subjects are sufficient for most drugs, but for highly variable drugs correspondingly more subjects may be required. The sample size of a clinical trial is determined by three factors: (1) the significance level, i.e. the value of α, usually taken as 0.05 (5%); (2) the power of the test, i.e. the value of 1-β, where β is the probability of a type II error, that is, of wrongly judging a truly effective drug to be ineffective; a power of at least 80% is usually required; (3) the coefficient of variation (CV%) and the difference (θ): in an equivalence test of two drugs, the larger the CV% and θ of the test indices, the more subjects are needed. Since CV% and θ are unknown before the trial, they can only be estimated from the corresponding parameters of the reference product or from a pilot study. After a bioavailability study has been completed, the required N can be recalculated from θ, CV% and 1-β and compared with the number of subjects actually used, to judge whether the sample size was reasonable.
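A rough sketch of how the sample size for a standard two-period crossover bioequivalence study depends on α, power, CV% and the assumed ratio. It uses a normal approximation with log-scale acceptance limits of 80-125%; this is an illustration of the relationship described above, not the guideline's prescribed method, and exact t-distribution based calculations (for example, as implemented in PowerTOST) give somewhat larger numbers.

```python
from math import log, ceil
from statistics import NormalDist

def crossover_be_sample_size(cv_w, gmr=0.95, alpha=0.05, power=0.80, limit=1.25):
    """Approximate total N for a 2x2 crossover average-bioequivalence
    study with acceptance range 1/limit .. limit (normal approximation).
    cv_w : within-subject CV as a fraction (0.25 for 25%)
    gmr  : assumed true test/reference geometric mean ratio
    """
    z = NormalDist().inv_cdf
    sigma_w2 = log(1.0 + cv_w ** 2)          # log-scale within-subject variance
    beta = 1.0 - power
    # With gmr = 1 the power is split over both one-sided tests.
    z_beta = z(1 - beta / 2) if abs(log(gmr)) < 1e-12 else z(1 - beta)
    margin = log(limit) - abs(log(gmr))
    n = 2 * sigma_w2 * (z(1 - alpha) + z_beta) ** 2 / margin ** 2
    n = ceil(n)
    return n + n % 2                         # round up to an even total

# Hypothetical planning values: within-subject CV 25%, expected ratio 0.95.
print(crossover_be_sample_size(0.25, gmr=0.95))   # roughly 26 subjects
```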
2.3 Allocation of Subjects to Groups
Subjects should be randomly allocated to the comparison groups, and the group sizes should be chosen so that the groups are as comparable as possible.

3. Test and Reference Products (T and R)
The quality of the reference product directly affects the reliability of the results of the bioequivalence study. Generally, the domestic innovator product of the same dosage form that has been approved for marketing is selected. If the innovator product cannot be obtained, the leading product on the market may be chosen as the reference product, provided the relevant quality documentation (such as assay and dissolution results) and the reasons for the choice are supplied. For drugs developed for a specific purpose, another marketed dosage form of the same kind with similar pharmaceutical properties may be selected as the reference product, provided it is already on the market and of acceptable quality. The difference in assay between the test and reference products should not exceed 5%.

The test product should come from a scale-up or production-scale batch that complies with the quality standards for clinical use. Data on in vitro dissolution, stability, content or potency, and batch-to-batch consistency should be provided to the testing unit for reference; for some drugs, data on polymorphic forms and optical isomers should also be provided. The test and reference products should be labelled with the developing unit, batch number, strength, storage conditions and expiry date. For future reference, sufficient quantities of the test and reference products should be retained after the trial until the product is approved for marketing.

4. Sampling
The design of the sampling schedule is important both for the reliability of the results and for the valid calculation of the pharmacokinetic parameters. A pilot study, or published pharmacokinetic data from home or abroad, should normally serve as the basis for a reasonable sampling schedule. When blood concentrations are measured, the absorption, plateau and elimination phases should all be covered: there must be enough sampling points in every phase of the concentration-time curve and around Tmax, so that the curve fully reflects the disposition of the drug in vivo. A blank blood sample is taken before dosing; then at least 2-3 points are sampled in the absorption phase, at least 3 points around Cmax, and 3-5 points in the elimination phase. Care should be taken that the first sampling point is not already Cmax; a pilot study helps to avoid this. When sampling is continued until the parent drug or its active metabolite has passed 3-5 half-lives, or until the blood concentration has fallen to 1/10-1/20 of Cmax, the ratio AUC0-t/AUC0-∞ is generally greater than 80%. Because the terminal elimination phase has little influence on the assessment of the absorption process, for long half-life drugs it is sufficient that the sampling period is long enough for the whole absorption phase to be compared and analyzed. In multiple-dose studies of drugs whose bioavailability is known to be affected by circadian rhythm, samples should if possible be collected over a full 24 hours.

When bioavailability cannot be determined from blood concentrations, and the parent drug and its active metabolites are excreted mainly in urine (more than 70% of the dose), bioavailability may be assessed from urinary drug concentrations, that is, from the cumulative amount of drug excreted in urine as a measure of the amount absorbed. The test products and the study protocol must still meet the requirements of a bioavailability study, and urine must be collected at intervals whose frequency and duration are sufficient to assess the extent of excretion of the parent drug and the active metabolites. Because this method cannot reflect the rate of absorption and is subject to many sources of error, it is generally not recommended. Some drugs are metabolized so rapidly in vivo that the parent form cannot be detected in biological samples; in that case bioavailability and bioequivalence are assessed by measuring the concentration of the corresponding active metabolite.

(Ⅲ) Evaluation of Results
At present the value of AUC as a measure of the extent of absorption is well accepted, whereas Cmax and Tmax are sometimes not sensitive or suitable enough as measures of the rate of absorption because they depend on the sampling schedule; they are also unsuitable for products showing multiple peaks and for studies with large individual variation. If particular cases of inequivalence arise during the evaluation, they should be analyzed case by case.

For AUC, the 90% confidence interval is generally required to lie within 80-125%. For drugs with a narrow therapeutic range this interval may need to be narrowed, and in a few cases, where justified, it may be widened; the same applies to Cmax. For Tmax, statistical evaluation is required only when the release rate is closely related to clinical efficacy and safety, and the equivalence range is then set according to clinical requirements.
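A minimal sketch of the two quantities discussed above: AUC by the linear trapezoidal rule, and a 90% confidence interval for the test/reference geometric mean ratio checked against 80-125%. The per-subject log-ratio approach below ignores period and sequence effects (a full crossover ANOVA would not), and all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def auc_trapezoid(t, c):
    """AUC(0-t) by the linear trapezoidal rule."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

def be_90ci(auc_test, auc_ref):
    """90% CI for the T/R geometric mean ratio of AUC from per-subject
    log ratios (simplified; a full analysis uses a crossover ANOVA)."""
    d = np.log(np.asarray(auc_test, float)) - np.log(np.asarray(auc_ref, float))
    se = d.std(ddof=1) / np.sqrt(len(d))
    tcrit = stats.t.ppf(0.95, df=len(d) - 1)     # two one-sided 5% tests
    return float(np.exp(d.mean() - tcrit * se)), float(np.exp(d.mean() + tcrit * se))

# Hypothetical profile for one subject (h, ng/mL), then AUCs for 6 subjects.
print(round(auc_trapezoid([0, 0.5, 1, 2, 4, 8, 12], [0, 40, 80, 60, 30, 10, 4]), 1))
lo, hi = be_90ci([102, 96, 110, 88, 105, 99], [100, 101, 104, 92, 100, 103])
print(round(lo, 3), round(hi, 3), 0.80 <= lo and hi <= 1.25)
```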
When the bioavailability of the test product is higher than that of the reference product (suprabioavailability), two possibilities should be considered: (1) the reference product itself has low bioavailability, which makes the bioavailability of the test product appear improved; (2) the quality of the reference product meets the requirements and the test product genuinely has higher bioavailability.

(Ⅳ) Contents of the Clinical Study Report
To meet the needs of evaluation, the clinical report of a bioequivalence study should include the following: (1) the objective of the study; (2) the establishment of the analytical method for the bioavailability samples and its validation data, together with the essential chromatograms; (3) the detailed study design and procedures, including data on all subjects, the number of subjects, the reference product, the doses given, the mode of administration and the sampling schedule; (4) all original measurements of the unknown-sample concentrations, the pharmacokinetic parameters and the concentration-time curve of every subject; (5) the data-handling procedures and statistical methods, together with the detailed statistical procedure and results; (6) clinical adverse reactions observed after dosing, subjects who withdrew or were excluded, and the reasons; (7) analysis of the results and the necessary discussion of bioavailability or bioequivalence; (8) references. A brief abstract should precede the main text; at the end of the main text, the names of the study unit, the principal investigators and the study personnel should be given, as the persons responsible for the results of the study.
OOP-10.Polymorphism
Strategy in Structured Programming
Callback
                  start()       moveForth()        getElement()   isExhausted()
Array             ptr = 0       ptr++              table[ptr]     ptr == size
Linked List       ptr = head    ptr = ptr->next    ptr->item      ptr == null
Sequential File   rewind()      read(val)          val            eof()
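The table above shows one iteration protocol implemented by three different representations. A minimal Python sketch of the same idea (the Cursor, ArrayCursor and LinkedCursor names are illustrative, not taken from the slides): the protocol is declared once, and client code iterates over any representation polymorphically.

```python
from abc import ABC, abstractmethod

class Cursor(ABC):
    """Uniform iteration protocol from the table: client code is written
    once against these four operations, independent of representation."""
    @abstractmethod
    def start(self): ...
    @abstractmethod
    def move_forth(self): ...
    @abstractmethod
    def get_element(self): ...
    @abstractmethod
    def is_exhausted(self): ...

class ArrayCursor(Cursor):
    def __init__(self, table):
        self.table, self.ptr = table, 0
    def start(self):        self.ptr = 0
    def move_forth(self):   self.ptr += 1
    def get_element(self):  return self.table[self.ptr]
    def is_exhausted(self): return self.ptr == len(self.table)

class LinkedCursor(Cursor):
    def __init__(self, head):                 # head node of a singly linked list
        self.head, self.ptr = head, head
    def start(self):        self.ptr = self.head
    def move_forth(self):   self.ptr = self.ptr.next
    def get_element(self):  return self.ptr.item
    def is_exhausted(self): return self.ptr is None

def total(cursor):
    """Representation-independent client code: works with any Cursor."""
    s = 0
    cursor.start()
    while not cursor.is_exhausted():
        s += cursor.get_element()
        cursor.move_forth()
    return s

print(total(ArrayCursor([1, 2, 3])))          # 6
```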
Programmer's Choice, Explicitly!
• A program lacking representation independence determines the method body associated with table.search(...) at compile time.
• Commonly used in traditional languages such as C, Pascal, Algol 68, etc.
How to Achieve Polymorphism?
• Data: implicit type conversion + explicit type conversion
• Operations: overloading + dynamic binding
Representation Independence
14: Information Hiding under the Object-Oriented Approach
Overriding: replacing the parent class's implementation
• Behavioral differences between parent and child classes
– When a subclass does not want the parent class's implementation of a method and wishes to replace it, it can do so through the overriding mechanism.
• Reflection and dynamic proxies
– In Java, overriding can be combined with reflection and dynamic proxies to realize "runtime polymorphism".
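The bullets above concern overriding and run-time polymorphism. A minimal sketch of dynamic dispatch (shown in Python rather than Java; the Exporter names are hypothetical): the method body that runs is chosen from the object's actual class at run time, which is what allows a subclass to replace its parent's implementation.

```python
class Exporter:
    def render(self, data):
        # Parent's default behavior.
        return f"plain:{data}"

class JsonExporter(Exporter):
    def render(self, data):
        # Override: replaces the parent's implementation.
        return '{"value": "%s"}' % data

def publish(exporter, data):
    # The method body is selected at run time from the object's actual class.
    return exporter.render(data)

print(publish(Exporter(), "x"))       # plain:x
print(publish(JsonExporter(), "x"))   # {"value": "x"}
```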
• Assignment of responsibilities
– Data responsibilities
– Behavior responsibilities
Protecting Data and Behavior
• Visibility
– public
– private
– protected
– default (package-private)
Object-Oriented Principles
Interface as Protocol
• An interface generally describes the following:
– the messages (method names) exchanged between objects
– the parameters carried by each message
– the type of result each message returns
– invariants that are independent of state
– the exceptions that must be handled
Design 1
Problem
Design 2
…still maintains complete static type safety. The implementation differs from language to language: C++ achieves it through the template mechanism, whereas Java achieves it through generics.
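This line contrasts C++ templates with Java generics as mechanisms for parameterized types with static type safety. As a rough analogue (not the C++ or Java mechanism itself), the sketch below uses Python's typing generics, where a static checker such as mypy can verify the element type even though Python does not enforce it at run time; the Stack class and its methods are hypothetical.

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A container parameterized by its element type, in the spirit of
    C++ templates / Java generics; checked statically, not at run time."""
    def __init__(self) -> None:
        self._items: list[T] = []
    def push(self, item: T) -> None:
        self._items.append(item)
    def pop(self) -> T:
        return self._items.pop()

ints: Stack[int] = Stack()
ints.push(3)
# ints.push("three")   # flagged by a static type checker, not at run time
print(ints.pop())
```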
U.S. National BIM Standard (NBIMS), Version 1 (Part 1)
ForewordNational Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .ForewordThe construction industry is in the middle of a growing crisis worldwide. With 40% of the world’s raw materials being consumed by buildings, the industry is a key player in global economics and politics. And, since facilities consume 40% of the world’s energy and 65.2% of total U.S.electrical consumption, the construction industry is a key player in energy conservation, too! With facilities contributing 40% of the carbon emissions to the atmosphere and 20% of material waste to landfills, the industry is a key player in the environmental equation. Clearly, the construction industry has a responsibility to use the earth’s resources as efficiently as possible.Construction spending in the United States is estimated to be $1.288 trillion for 2008. The Construction Industry Institute estimates there is up to 57% non-value added effort or waste in our current business models. This means the industry may waste over $600 billion each year.There is an urgent need for construction industry stakeholders to maximize the portion of services that add value in end-products and to reduce waste.Another looming national crisis is the inability to provide enough qualified engineers. Someestimate the United States will be short a million engineers by the year 2020. In 2007, the United States was no longer the world’s largest consumer, a condition that will force United States industry to be more competitive in attracting talented professionals. The United States construction industry must take immediate action to become more competitive.The current approach to industry transformation is largely focused in efforts to optimize design and construction phase activities. While there is much to do in those phases, a lifecycle view is required. When sustainability is not adequately incorporated, the waste associated with current design, engineering, and construction practices grows throughout the rest of the facility’s lifecycle. Products with a short life add to performance failures, waste, recycling costs, energyconsumption, and environmental damage. Through cascading effects, these problems negatively affect the economy and national security due to dependence on foreign petroleum, a negative balance of trade, and environmental degradation. To halt current decline and reverse existing effects, the industry has a responsibility to take immediate action.While only a very small portion of facility lifecycle costs occur during design and construction, those are the phases where our decisions have the greatest impact. Most of the costs associated with a facility throughout its lifecycle accrue during a facility’s operations and sustainment. Carnegie-Mellon University research has indicated that an improvement of just 3.8% in productivity in the functions that occur in a building would totally pay for the facility’s design, construction, operations and sustainment, through increased efficiency. Therefore, as industry focuses on creating, maintaining, and operating facilities more efficiently, simultaneous action is required to ensure that people and processes supported by facilities are optimized.BIM stands for new concepts and practices that are so greatly improved by innovative information technologies and business structures that they will dramatically reduce the multiple forms of waste and inefficiency in the building industry. 
Whether used to refer to a product – Building Information Model (a structured dataset describing a building), an activity – Building Information Modeling (the act of creating a Building Information Model), or a system – Building Information Management (business structures of work and communication that increase quality andefficiency), BIM is a critical element in reducing industry waste, adding value to industry products, decreasing environmental damage, and increasing the functional performance of occupants.ForewordNational Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .The National Building Information Model Standard™ (NBIMS) is a key element to building industry transformation. NBIMS establishes standard definitions for building information exchanges to support critical business contexts using standard semantics and ontologies. Implemented in software, the Standard will form the basis for the accurate and efficientcommunication and commerce that are needed by the building industry and essential to industry transformations. Among other benefits, the Standard will help all participants in facilities-related processes achieve more reliable outcomes from commercial agreements.Thus, there is a critical need to increase the efficiency of the construction process. Today’s inefficiency is a primary cause of non-value added effort, such as re-typing (often with a new set of errors) information at each phase or among participants during the lifecycle of a facility or failing to provide full and accurate information from designer to constructor. With the implementation of this Standard, information interoperability and reliability will improve significantly. Standard development has already begun and implementable results will beavailable soon. BIM development, education, implementation, adoption, and understanding are intended to form a continuous process ingrained evermore into the industry. Success, in the form of a new paradigm for the building construction industry, will require that individuals andorganizations step up to contribute to and participate in creating and implementing a commonBIM standard. Each of us has a responsibility to take action now.David A. Harris, FAIAPresidentNational Institute of Building SciencesTable of ContentsNational Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .ForewordTable of ContentsSection 1 – Introduction to the National Building InformationModeling Standard™ Version 1 - Part 1: Overview,Principles, and MethodologiesChapter 1.1 Executive SummaryChapter 1.2 How to Read Version 1 -Part 1 of the NBIMStandard Navigation guide for readers with varied interests, responsibilities, and experience with BIM.Section 2 – Prologue to the National BIM StandardChapter 2.1 BIM Overall Scope An expansive vision for building informationmodeling and related concepts.Chapter 2.2 Introduction to the National BIM Standard Committee The Committee’s vision and mission,organization model, relationships to otherstandards development organizations,philosophical position, and the Standardproduct.Chapter 2.3 Future Versions Identifies developments for upcoming versionsof the Standard including sequence ofdevelopments, priorities, and planned releasedates.Section 3 – Information Exchange ConceptsChapter 3.1 Introduction to ExchangeConcepts What is an information exchange? 
Theory and examples from familiar processes.Chapter 3.2 Data Models and the Role of Interoperability.High level description of how BIM informationwill be stored in operational and projectsettings. Compares and contrasts integrationand interoperability and the NBIM Standardrequirement for interoperability.Chapter 3.3 Storing and SharingInformation Description of conceptual need for a shared, coordinated repository for lifecycle information.Presents an approach to providing the sharedinformation for a BIM which can be used byinformation exchangesTable of ContentsNational Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .Chapter 3.4 Information Assurance Discusses means to control information inputand withdrawal from a shared BIM repository.Section 4 – Information Exchange ContentChapter 4.1 BIM MinimumDefines quantity and quality of information required for a defined BIM. Chapter 4.2 Capability Maturity Model Building on the BIM Minimum chapter, furtherdefines a BIM and informs planning to improvethe capability to produce a mature BIM.Section 5 – NBIM Standard Development ProcessChapter 5.1 Overview of ExchangeStandard Developmentand Use ProcessDiagrams and describes major components in NBIM Standard development process. Chapter 5.2 Workgroup Formationand RequirementsDefinition Introduces the concept of forums and domain interest groups forming around needed exchange definitions. Discusses theInformation Delivery Manual (IDM) process andtools for requirements definition activities.Chapter 5.3 User-Facing Exchange Models Covers the IDM requirements for IFC-independent data model views.Chapter 5.4 Vendor-Facing Model View Definition, Implementation and Certification Testing Explains Model View Definition (MVD)requirements for schema-specific modeldefinition and the NBIMS Committee’s role infacilitating implementation and certificationtesting.Chapter 5.5 Deployment Discusses Project Agreements and use ofGeneric BIM Guides associated with BIMauthoring (creating a BIM) using certifiedapplications, validating the BIM construction,validating data in the BIM model, and using theBIM model in certified products to accomplishproject tasks through interoperable exchanges.Chapter 5.6 Consensus-Based Approval MethodsDescribes various methods of creating,reviewing, and approving the NBIM StandardExchange Requirements, Model ViewDefinitions, Standard Methods, Tools, andReferences used by and produced by theNBIMS Committee.Table of ContentsNational Building Information Modeling Standard™©2007 National Institute of Building Sciences. 
All rights reserved .AcknowledgementsReferencesGlossaryAppendicesIntroduction to AppendicesAppendix A Industry Foundation Classes(IFC or ifc) IFC define the virtual representations of objects used in the capital facilitiesindustry, their attributes, and theirrelationships and inheritances.Appendix B CSI OmniClass ™OmniClass is a multi-table facetedclassification system designed for useby the capital facilities industry to aidsorting and retrieval of informationand establishing classifications forand relationships between objects ina building information model.Appendix C International Framework for Dictionaries (IFDLibrary ™)A schema requires a consistent set ofnames of things to be able to work.Each of these names must have acontrolled definition that describeswhat it means and the units in which itmay be expressed.Section 1 – Introduction to the National BIM Standard V 1 - Part 1Chapter 1.1National Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .Chapter 1.1 Executive SummaryNational Building Information Modeling Standard™ Version 1 - Part 1:Overview, Principles, and MethodologiesIntroductionThe National Building Information Modeling Standard (NBIMS) Committee is a committee of the National Institute of Building Sciences (NIBS) Facility Information Council (FIC). The vision for NBIMS is “an improved planning, design, construction, operation, and maintenance process using a standardized machine-readable information model for each facility, new or old, which contains all appropriate information created or gathered about that facility in a format useable by all throughout its lifecycle.”1 The organization, philosophies, policies, plans, and working methods that comprise the NBIMS Initiative and the products of the Committee will be the National BIM Standard (NBIM Standard), which includes classifications, guides, recommended practices, and specifications.This publication is the first in a series intended to communicate all aspects of the NBIMS Committee and planned Standard, which will include principles, scope of investigation,organization, operations, development methodologies, and planned products. NBIMS V1-P1 is a guidance document that will be followed by publications containing standard specifications adopted through a consensus process .Wherever possible, international standards development processes and products, especially the NIBS consensus process, American Society for Testing and Materials (ASTM), AmericanNational Standards Institute (ANSI), and International Standards Organization (ISO) efforts will be recognized and incorporated so that NBIMS processes and products can be recognized as part of a unified international solution. Industry organizations working on open standards, such as the International Alliance for Interoperability (IAI), the Open Geospatial Consortium (OGC), and the Open Standards Consortium for Real Estate (OSCRE), have signed the NBIMS Charter inacknowledgement of the shared interests and commitment to creation and dissemination of open, integrated, and internationally recognized standards. Nomenclature specific to North American business practices will be used in the U.S. NBIMS Initiative. Consultations with organizations in other countries have indicated that the U.S.-developed NBIM Standard, once it is localized, will be useful internationally as well. Continued internationalization is considered essential to growth of the U.S. 
and international building construction industries.BIM Overall Scope and DescriptionBuilding Information Modeling (BIM) has become a valuable tool in some sectors of the capital facilities industry. However in current usage, BIM technologies tend to be applied within vertically integrated business functions rather than horizontally across an entire facility lifecycle. Although the term BIM is routinely used within the context of vertically integrated applications, the NBIMS Committee has chosen to continue using this familiar term while evolving the definition and usage to represent horizontally integrated building information that is gathered and applied throughout the entire facility lifecycle, preserved and interchanged efficiently using open and interoperable technology for business, functional and physical modeling, and process support and operations. 1 Charter for the National Building Information Modeling (BIM) Standard, December 15, 2005, pg.1. See /bim/pdfs/NBIMS_Charter.pdf .Section 1 – Introduction to the National BIM Standard V 1 - Part 1Chapter 1.1National Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .NBIM Standard Scope and DescriptionThe NBIMS Initiative recognizes that a BIM requires a disciplined and transparent data structure supporting all of the following.x A specific business case that includes an exchange of building information. x The users’ view of data necessary to support the business case. x The machine interpretable exchange mechanism (software) for the required information interchange and validation of results.This combination of content selected to support user needs and described to support open computer exchange form the basis of information exchanges in the NBIM Standard. All levels must be coordinated for interoperability, which is the focus of the NBIMS Initiative. Therefore, the primary drivers for defining requirements for the National BIM Standard are industry standard processes and associated information exchange requirements.In addition, even as the NBIM Standard is focused on open and interoperable informationexchanges, the NBIMS Initiative addresses all related business functioning aspects of the facility lifecycle. NBIMS is chartered as a partner and an enabler for all organizations engaged in the exchange of information throughout the facility lifecycle.Data Modeling for BuildingsKey to the success of a building information model is its ability to encapsulate, organize, and relate information for both user and machine-readable approaches. These relationships must be at the detail level, relating, for example, a door to its frame or even a nut to a bolt, whilemaintaining relationships from a detailed level to a world view. When working with as large a universe of materials as exists in the built environment, there are many traditional verticalintegration points (or stovepipes) that must be crossed and many different languages that must be understood and related. Architects, engineers, as well as the real estate appraiser or insurer must be able to speak the same language and refer to items in the same terms as the first responder in an emergency situation. Expand this to the world view where systems must be interoperable in multiple languages in order to support the multinational corporation. Over time ontologies will be the vehicles that allow cross communication to occur. In order to standardize these many options, organizations need to be represented and solicited for input. 
There are several, assumed to be basic, approaches in place that must come together in order to ensure that a viable and comprehensive end-product will be produced.The Role of InteroperabilitySoftware interoperability is seamless data exchange at the software level among diverseapplications, each of which may have its own internal data structure. Interoperability is achieved by mapping parts of each participating application’s internal data structure to a universal data model and vice versa. If the employed universal data model is open, any application canparticipate in the mapping process and thus become interoperable with any other application that also participated in the mapping. Interoperability eliminates the costly practice of integrating every application (and version) with every other application (and version).The NBIM Standard maintains that viable software interoperability in the capital facilities industry requires the acceptance of an open data model of facilities and an interface to that data model for each participating application. If the data model is industry-wide (i.e. represents the entire facility lifecycle), it provides the opportunity to each industry software application to become interoperable.Section 1 – Introduction to the National BIM Standard V 1 - Part 1Chapter 1.1National Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .Storing and Sharing InformationOne of the innovations, demonstrated by some full-service design and engineering firms and several International Alliance for Interoperability (IAI) demonstration projects, has been the use of a shared repository of building information data. A repository may be created by centralizing the BIM database or by defining the rules through which specific components of BIM models may be shared to create a decentralized shared model. As BIM technology and use matures, thecreation of repositories of project, organization, and/or owner BIM data will have an impact on the framework under which NBIMS operates. Owners are likely to create internally as-built and as-maintained building model repositories, which will be populated with new and updated information supplied via design/construction projects, significant renovations, and routine maintenance and operations systems.Information AssuranceThe authors caution that, while a central (physical or virtually aggregated) repository of information is good for designing, constructing, operating, and sustaining a facility, and therepository may create opportunities for improved efficiency, data aggregation may be a significant source of risk.Managing the risks of data aggregation requires advanced planning about how best to control the discovery, search, publication, and procurement of shared information about buildings and facilities. In general, this is addressed in the data processing industry through digital rights management. Digital rights management ensures that the quality of the information is protected from creation through sharing and use, that only properly authorized users are granted access, and only to that subset of information to which they should have access. 
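The Role of Interoperability passage above describes exchange as mapping each application's internal data structure to and from an open universal model, rather than integrating every application with every other. The following is a minimal sketch of that idea; the SharedDoor type, the vendor record layouts and all field names are invented for illustration and are not part of any NBIMS or IFC schema.

```python
from dataclasses import dataclass

@dataclass
class SharedDoor:
    """Neutral, application-independent representation of a door
    (a toy stand-in for an entity in an open model such as IFC)."""
    guid: str
    width_mm: float
    height_mm: float

# Each application maps only between its own internal structure and the
# neutral model, not pairwise to every other application.
def from_app_a(rec: dict) -> SharedDoor:
    # Hypothetical App A stores dimensions in metres.
    return SharedDoor(rec["id"], rec["w_m"] * 1000, rec["h_m"] * 1000)

def to_app_b(door: SharedDoor) -> dict:
    # Hypothetical App B expects millimetres and its own field names.
    return {"uid": door.guid, "width": door.width_mm, "height": door.height_mm}

print(to_app_b(from_app_a({"id": "D-101", "w_m": 0.9, "h_m": 2.1})))
```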
There is a need toensure that the requirements for information are defined and understood before BIMs are built, so that facility information receives the same protection that is commonplace in world-wide personnel and banking systems.Minimum BIM and the Capability Maturity ModelThe NBIM Standard Version 1 - Part 1 defines a minimum standard for traditional vertical construction, such as office buildings. It is assumed that developing information exchange standards will grow from this minimum requirement.The Standard also proposes a Capability Maturity Model (CMM) for use in measuring the degree to which a building information model implements a mature BIM Standard. The CMM scores a complete range of opportunity for BIMs, extending from a point below which one could say the data set being considered is not a BIM to a fully realized open and interoperable lifecycle BIM resource.The U.S. Army Corps of Engineers BIM Roadmap 2 is presented as a useful reference for building owners seeking guidance on identifying specific data to include in a BIM from a design or construction perspective.2 See https:///default.aspx?p=s&t=19&i=1 for the complete roadmap.Section 1 – Introduction to the National BIM Standard V 1 - Part 1Chapter 1.1National Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .NBIM Standard Process DefinitionProposals for the processes the NBIMS Committee will employ to produce the NBIM Standard and to facilitate productive use are discussed. A conceptual diagram to orient the user is provided. Components of this diagram correspond to section 5 chapters.Both the process used to create the NBIM Standard and the products are meant to be open and transparent. The NBIMS Committee will employ consensus-based processes to promote industry-wide understanding and acceptance. Additionally, the Committee will facilitate the process whereby software developers will implement standard exchange definitions and implementations tested for compliance. Finally, the NBIMS Committee will facilitate industry adoption and beneficial use through guides, educational activities, and facilitation of testing by end users of delivered BIMs.The Information Exchange Template, BIM Exchange Database, the Information Delivery Manual (IDM), and Model View Definition (MVD) activities together comprise core components of the NBIM Standard production and use process. The Information Exchange Template and BIM Exchange Database are envisioned as web-based tools to provide search, discovery, and selection of defined exchanges as well as a method of providing initial information necessary to propose and begin a new exchange definition discussion. The NBIMS workgroup formation phase teams will use the IDM, adapted from international practices, to facilitate identification and documentation of information exchange processes and requirements. IDM is the user-facing phase of NBIMS exchange standard development with results typically expressed in human-readable form. MVD is the software developer-facing phase of exchange standard development. MVD is conceptually the process which integrates Exchange Requirements (ERs) coming from many IDM processes to the most logical Model Views that will be supported by softwareapplications. Implementation-specific guidance will specify structure and format for data to be exchanged using a specific version of the Industry Foundation Classes (IFC or ifc) specification. 
The resulting generic and implementation-specific documentation will be published as MVDs, as defined by the Finnish Virtual Building Environment (VBE) project,3 the Building Lifecycle Interoperability Consortium (BLIS),4 and the International Alliance for Interoperability (IAI).5 The Committee will work with software vendors and the testing task team members to plan and facilitate implementation, testing, and use in pilot projects. After the pilot phase is complete, the Committee will update the MVD documents for use in the consensus process and ongoing commercial implementation. Finally, after consensus is reached, MVD specifications will be incorporated in the next NBIMS release.NBIMS AppendicesReference standards in the NBIM Standard provide the underlying computer-independent definitions of those entities, properties, relationships, and categorizations critical to express the rich language of the building industry. The reference standards selected by the NBIMSCommittee are international standards that have reached a critical mass in terms of capability to share the contents of complex design and construction projects. NBIMS V1-P1 includes three candidate reference standards as Appendix documents: IAI Industry Foundation Classes (IFC or ifc), Construction Specifications Institute (CSI) OmniClass ™, and CSI IFDLibrary ™.3http://cic.vtt.fi/projects/vbe-net/4 5Section 1 – Introduction to the National BIM Standard V 1 - Part 1Chapter 1.1National Building Information Modeling Standard™©2007 National Institute of Building Sciences. All rights reserved .The IFC data model consists of definitions, rules, and protocols that uniquely define data sets which describe capital facilities throughout their lifecycles. These definitions allow industrysoftware developers to write IFC interfaces to their software that enable exchange and sharing of the same data in the same format with other software applications, regardless of the internal data structure of the individual software application. Software applications that have IFC interfaces are able to exchange and share data with other application that also have IFC interfaces.The OmniClass ™ Construction Classification System (OmniClass or OCCS) is a multi-tableclassification system designed for use by the capital facilities industry. OmniClass includes some of the most commonly used taxonomies in the capital facilities industry. It is applicable for organizing many different forms of information important to the NBIM Standard, both electronic and hard copy. OCCS can be used in the preparation of many types of project information and for communicating exchange information, cost information, specification information, and other information that is generated throughout the facility’s lifecycle.IFDLibrary ™ is a kind of dictionary of construction industry terms that must be used consistently in multiple languages to achieve consistent results. Design of NBIMS relies on terminology and classification agreement (through OmniClass ) to support model interoperation. Entries in the OmniClass tables can be explicitly defined in the IFDLibrary once and reused repeatedly,enabling reliable automated communications between applications – a primary goal of NBIMS. ReferencesNBIMS References in this document represent the work of many groups working in parallel to define BIM implementation for their areas of responsibility. 
Currently there are four types of references.x Business Process Roadmaps are documents that provide the business relationships of the various activities of the real property industry. These will be the basis for organizing the business processes and will likely be further detailed and coordinated over time. The roadmaps will help organize NBIMS and the procedures defined in the InformationDelivery Manuals (IDMs).x Candidate Standards are documents that are candidates to go through the NBIMS consensus process for acceptance as part of future NBIMS. It is envisioned that Part 2 or later releases of the Standard will incorporate these documents once approved.x Guidelines have been developed by several organizations and include items that should be considered for inclusion in NBIMS. Since NBIMS has not existed prior to this, there was no standard from which to work, resulting in a type of chicken-or-egg dilemma.When formal NBIMS exists there will need to be some harmonization, not only between the guidelines and NBIMS, but also in relating the various guidelines to each other.While guidelines are not actually a part of NBIMS, they are closely related and therefore included as references.xOther Key References are to parallel efforts being developed in concert with NBIMS. Not part of NBIMS, they may, in fact, be standards in their own right.。
5[1].1What_have_we_learned_about_generic_competitive_strategy
Strategic Management JournalStrat.Mgmt.J.,21:127–154(2000)WHAT HAVE WE LEARNED ABOUT GENERIC COMPETITIVE STRATEGY?A META-ANALYSISCOLIN CAMPBELL-HUNT*School of Business and Public Management,Victoria University of Wellington,Wellington,New ZealandThe dominant paradigm of competitive strategy is now nearly two decades old,but it has proved difficult to assess its adequacy as a descriptive system,or progress its propositions about the performance consequences of different strategic designs.It is argued that this is due to an inability to compare and cumulate empirical work in the field.A meta-analytic procedure is proposed by which the empirical record can be aggregated.Results suggest that,although cost and differentiation do act as high-level discriminators of competitive strategy designs,the paradigm’s descriptions of competitive strategy should be enhanced,and that its theoretical proposition on the performance of designs has yet to be supported.A considerable agenda for further work suggests that competitive strategy research should recover something of its former salience.Copyright ©2000John Wiley &Sons,Ltd.INTRODUCTIONMichael Porter’s theory of generic competitive strategy is unquestionably among the most sub-stantial and influential contributions that have been made to the study of strategic behavior in organizations (Porter,1980,1985).In essence,the theory contains two elements:first,a scheme for describing firms’competitive strategies according to their market scope (focused or broad),and their source of competitive advantage (cost or differentiation);and,second,a theoretical proposition about the performance outcomes of these strategic designs:that failure to choose between one of cost-or differentiation-leadership will result in inferior performance,the so-called ‘stuck-in-the-middle’hypothesis.Within a few years of publication,the theoryKey words:generic strategy;competitive strategy;meta-analysis*Correspondence to:Colin Campbell-Hunt,School of Busi-ness and Public Management,Victoria University of Welling-ton,PO Box 600,Wellington,New ZealandCCC 0143–2095/2000/020127–28$17.50Received 5August 1996Copyright ©2000John Wiley &Sons,Ltd.Final revision received 30July 1999was recognized as the dominant paradigm of competitive strategy (Hill,1988;Murray,1988).But,despite widespread interest and application,it has proved difficult to progress its represen-tation of competitive behavior.In Kuhn’s account,a paradigm gives a common platform and focus to subsequent empirical and theoretical investi-gation;it defines the scope of phenomena that are deemed to be important,and the methods used for investigation;and it becomes the received wisdom that is taught in the subject’s textbooks (Kuhn,1962).In the following para-graphs it will be shown that Porter’s theory has played all these roles.But it is the thesis of this paper that the paradigm has so far failed to open up a period of Kuhnian ‘normal science,’in which a detailed and immensely productive dialogue is established between fact and theory.Failure to establish this dialogue threatens to leave the study of competi-tive strategy in a preparadigm state,as no more than a series of brave beginnings,none of which attract sufficient empirical or social support to make the phase transition to normal science.The128 C.Campbell-Huntimpediment has been that there is no known way to compare or cumulate individual empirical studies of the type suggested by the paradigm.It is the objective of this paper to remove this impediment.The dominant 
paradigmThe widespread acceptance of Porter’s descriptive scheme by researchers can be seen in the wide range of its application.These include industries as diverse as shipping(Brooks,1993),banking (Meidan and Chin,1995),and hospital services (Kropf and Szafran,1988);and countries as diverse as Ireland(McNamee and McHugh, 1989),Portugal(Green,Lisboa,and Yasin,1993), Korea(Kim and Lim,1988),and the People’s Republic of China(Liff,He,and Steward,1993). The scheme has also been widely used by researchers studying relationships betweenfirms’competitive strategy and other aspects of man-agement:i.e.,their human relations strategy (Schuler and Jackson,1989);information tech-nology(Huff,1988);industrial engineering (Petersen,1992);manufacturing strategy(Kotha and Orne,1989);logistics(McGinnis and Kohn, 1988);environmental scanning(Jennings and Lumpkin,1992);planning processes(Powell, 1994);management selection(Govindarajan, 1989;Sheibar,1986);and managerial biases in perceptions of competitive strategy(Nystrom, 1994).The framework has also been used exten-sively in practice to structure managers’percep-tions about theirfirm’s strategy.With few excep-tions(Bowman and Johnson,1992),such applications are rarely reported.The paradigm’s theoretical propositions have also attracted intense debate.Early challenges to the‘stuck-in-the-middle’hypothesis(Karnani, 1984;Murray,1988;Hill,1988)argued that con-ditions which might favor cost-leadership(such as the reduction of transaction costs through vertical integration,process innovation and learning,and scale effects)were independent of conditions that might favor differentiation(such as consumer preferences,product innovation,and quality dif-ferentiation based on afirm’s superiority in a particularly complex value system).Hence,exter-nal conditions provide no a priori reason to discriminate against mixed cost-and differen-tiation-strategic designs(Murray,1988).More-over,in conditions where differentiation strategies can be used to expand market share,and this in turn permits greater capture of economies of scale and scope,external conditions might actively favor mixed strategies(Hill,1988;Phillips, Chang,and Buzzell,1983).Conditions that have been considered in this way include the particular nature of retailing as against manufacturing indus-tries(Cappel et al.,1994);and the distinctive characteristics of an industry’s technology (Oskarsson and Sjoberg,1994).Beginning with Hambrick(1983),a series of studies has also begun the task of exploring the paradigm’s empirical validity.These have followed the paradigm’s guidance to describe ge-neric strategies as polythetic gestalts or designs (Miller,1981;Hambrick,1984;Rich,1992),a task best undertaken using principal components analysis and cluster analysis(Hambrick,1984; Harrigan,1985;McGee and Thomas,1986). However,these techniques result in classifications that are specific to the sample of participating firms and cannot be cumulated with otherfind-ings.Thus,it has not been possible to assess the accumulated weight of evidence on what generic competitive strategies look like in practice,nor how closely they accord with the paradigm’s descriptive and theoretical elements.The study of competitive strategy is thus currently stuck in something of a dead-end of its own design. 
Compounding these difficulties,there have evolved a number of different interpretations of the dominant paradigm’s descriptive system,so that the paradigm’s descriptive and theoretical propositions may take a number of forms.To date,these have not been systematically com-pared.As a result of this impeded dialogue between paradigm and empirical investigation,the para-digm’s scheme for describing competitive strategy has barely progressed in the two decades since it wasfirst proposed.Attempts by Miller(1986) and Mintzberg(1988)to widen the set of stra-tegic competitive behaviors that are held to be ‘generic’have met with little success,despite recent empirical evidence which suggests that they offer a superior description of competitive behavior(Kotha and Vadlamani,1995).Porter’s scheme remains unaltered as the typology set out in most contemporary textbooks(Thompson and Strickland,1995;Pearce and Robinson,1994; Bourgeois,1996).The study reported in this paper was accord-Generic Competitive Strategy129ingly motivated to develop meta-analytic pro-cedures with which to aggregate empirically derived descriptions of generic competitive strat-egy.Study One reports a meta-analysis of the principal component solutions in the empirical record;Study Two reports a meta-analysis of clustered categories of competitive strategy design.The resulting aggregates are compared to alternative interpretations of the classification system of the dominant paradigm.Study Three uses these aggregate descriptions to assess the paradigm’s theoretical propositions on the per-formance of generic competitive strategies.To begin,alternative interpretations of the dominant paradigm and its propositions are discussed and formalized.INTERPRETATIONS OF THE DOMINANT PARADIGMDescribing competitive strategyAll theory building requires a parsimonious way to describe the intractable variety of nature.This section examines the four approaches that have been used to interpret the dominant paradigm’s descriptive system.The taxonomic interpretationThefirst approach is to interpret the system as a taxonomy,that is,a hierarchically ordered set of classifications,within which all designs can be allocated to a unique position,depending on the particular set of strategic elements involved (Chrisman,Hofer,and Boulton,1988).In this approach,the bewildering variety of strategic designs is reduced to a parsimonious set of allo-cation‘rules’(Doty and Glick,1994)by which a specific design for competitive strategy is classi-fied within the hierarchy.This interpretation, clearly inspired by biological taxonomy,requires that allocation rules have a hierarchical structure, and that classifications be internally homo-geneous,mutually exclusive,and collectively exhaustive(Chrisman et al.,1988;Rich,1992). 
The order in which allocation rules enter into the hierarchy is shown in Figure1.At the top is the paradigm’s distinction between designs that place distinctive emphasis,relative to competitors, on pursuing some source of advantage,and designs that spread their efforts more evenly and become stuck-in-the-middle.The paradigm’s theory of performance is based on this highest-level distinction.Within the class of distinctive emphasis designs,Porter’s emphasis on the cost/differentiation dichotomy as‘two basic types of competitive advantage,’and as‘fundamentally different route(s)to competitive advantage’(Porter,1985:11),suggests that this allocation rule be placed above market scope in the rule hierarchy.The life-science-inspired,taxonomic inter-pretation also places particular emphasis on the mutual exclusion of class memberships.An essen-tialist rationale for the sharp distinction between cost-and differentiation-emphasis would be that there are elements in the design of each that naturally repel the other.Each design has its own fundamental‘essence,’and attempts to mix them will be quickly terminated by the unnatural nature of the experiment(Miller,1981;Hannan and Freeman,1989).Mixed-emphasis designs are not completely ruled out,but will be rare.These key features of the taxonomic inter-pretation are set out in Table1,and stressed in the following proposition:Proposition1a:All competitive-strategy designs can be precisely allocated to a number of hierarchically ordered classes on the basis of(i)whether or not a design has some dis-tinctive emphasis relative to competitors;(ii) whether that emphasis is towards cost-or differentiation-advantage;and(iii)the market scope adopted.Only a very small number of mixed-emphasis designs will exist.The empiricist interpretationThis second interpretation relaxes the restrictions of taxonomy.The approach is best typified in an extensive series of studies by Danny Miller, including studies of competitive strategy(Miller, 1992b;Miller and Friesen,1986a,1986b).The approach retains the assumption that the very large number offirm-level competitive-strategy designs can be reduced to a smaller number of classes(Miller,1981),but it differs from a taxo-nomic interpretation in four ways(Table1). 
First,it is no longer asserted that all designs can be so classified,just that a‘large proportion’can(Miller,1986:236).Room is left for idiosyn-130 C.Campbell-HuntFigure 1.The taxonomic description of generic competitive strategycratic designs to flourish around the more com-monly observed classes.Secondly,the allocation of each individual design to a class is no longer determined exactly by a precise set of allocation rules,but is in part stochastic.This uncertainty can be reduced with more refined classification,so that a balance must be struck between a larger number of more homogeneous classes,and a more parsimonious,but possibly less meaningful,classification (Doty and Glick,1994;Hambrick,1984;Miller,1981).Thirdly,although all empiri-cally derived clusters are associated together in hierarchies of similarity,an empiricist inter-pretation does not impose an ex ante requirement that cost and differentiation be high-level discrim-inators in that hierarchy.Finally,the empiricist interpretation does not anticipate a near-prohibition on mixed-emphasis designs,ex ante ,but rather allows whatever common designs exist to emerge from the data.This less restrictive interpretation of the domi-nant paradigm can be summarized as follows:Proposition1b:Mostcompetitive-strategyTable 1.Interpretations of the dominant paradigm Interpretation:Taxonomic Empiricist Nominalist DimensionalHierarchically ordered Yes No Yes No descriptions?Homogeneity of class Identical Approximate Variable n/a membersMutually exclusive Yes Approximate Approximate No classification?Mixed designs?Very few YesVery few YesCollectively exhaustive?YesLarge proportionNoLarge proportiondesigns can be meaningfully allocated to a number of classes on criteria that include whether or not a design has some distinctive emphasis relative to competitors;whether that emphasis is towards cost-or differentiation-advantage;and the market scope adopted.The nominalist interpretationIn this view,generic competitive strategies are taken to represent ideal ‘types,’and the 2×2classification system of the dominant paradigm is interpreted as a general typology (Doty and Glick,1994).Correspondence between real designs and ideal types will be both imperfect and variable (Mayr,1969;Rich,1992),so that classifications will be neither fully homogeneous nor mutually exclusive (Table 1).Also,the nominalist interpretation does not require the four ideal types to be collectively exhaustive.To the contrary,and unlike all other interpretations of the dominant paradigm,the approach seeks only to describe a limited numberGeneric Competitive Strategy131of ideal types based on a few aspects of competi-tive-strategy design,selected for their importance to the paradigm’s theory of performance.The nominalist approach is hierarchical in that the limited number of characteristics chosen to describe ideal types are held to be fundamentally important to the design and performance of com-petitive strategies,and to be the basis on which to distinguish more richly described designs(Bakke, 1959;Rich,1992;Porter,1980:40–41).All dif-ferentiation designs share the characteristic of pursuing a price premium;cost designs are ori-ented to economy as the path to profit.This essentialist distinction between ideal types is com-mon to both the nominalist and taxonomic interpretations and means that both expect the number of mixed designs to be small(Doty and Glick,1994).The nominalist interpretation of the dominant paradigm is accordingly formalized as follows: 
Proposition 1c: Competitive-strategy designs can be likened to a greater or lesser extent to one of two fundamentally different archetypes: one emphasizing advantage from costs, the other from differentiation, each with broad and focused market scope variants. Only a very small number of mixed-emphasis designs will exist.

Generics interpreted as dimensions of competitive-strategy design

The fourth approach interprets the characteristics of market scope, cost-, and differentiation-emphasis as independent dimensions of a multivariate space encompassing most of the variation in competitive-strategy designs (Karnani, 1984; Miller and Dess, 1993). Distinctive features of this interpretation are summarized in Table 1. Unlike all other interpretations of the dominant paradigm, the dimensional approach does not define classes of competitive-strategy designs, so that the question of class homogeneity does not arise. Rather, the approach is restricted to describing the space in which classes may be defined. The distinction is essentially that drawn between two of Pepper's 'world hypotheses': formism, which describes the world in categories; and mechanism, which describes the world in elements and the relationships between them (Pepper, 1942). Because all designs are positioned relative to both cost- and differentiation-dimensions, the presence of one emphasis does not exclude the other, and unrestricted scope is allowed to mixed-emphasis designs (Miller and Dess, 1993; Parker and Helms, 1992). Even the extreme archetypal designs of cost- and differentiation-emphasis cannot be adequately described in their own terms alone, but must be positioned relative to both parameters: cost leaders must not lose touch with the competitive standards of differentiation, and vice versa. The descriptive parameters are expected to be independent of each other and without hierarchical rank. This fourth interpretation of the paradigm's descriptive system can be stated as follows:

Proposition 1d: Most competitive-strategy designs can be meaningfully positioned in the three-dimensional space described by (i) relative emphasis on cost advantage; (ii) relative emphasis on differentiation advantage; and (iii) the market scope adopted.

The paradigm's theory of performance

The fundamental theorem of the dominant paradigm is that above-average performance can only be achieved by adopting one of the four generic designs. Performance is defined as above-average rate of return (Porter, 1980: 35), sustained over a period of years (Porter, 1985: 11). This theorem is formalized in different ways, depending on the interpretation of the paradigm's descriptive system. The dimensional interpretation is primarily concerned with defining the space in which competitive-strategy designs may be described.
To support the paradigm's theoretical propositions, some classification of designs within this space is required, using one of the other approaches. Taxonomic and empiricist approaches that attempt a comprehensive classification of all designs specify those classes with high-performance attributes:

Proposition 2a: Classes of competitive-strategy design will show above-average performance that are characterized by a distinctive emphasis, relative to competitors, on one of cost advantage, or differentiation advantage; and are either broad or focused in market scope. Only a small number of mixed-emphasis designs will show above-average performance. The class of designs that fail to achieve distinctive emphasis relative to competitors will record average or below-average performance.

The nominalist approach does not attempt comprehensive classification, but rather posits a small number of ideal types. Performance will improve as actual designs approximate these ideals:

Proposition 2b: The incidence of above-average performance will increase as competitive-strategy designs approach one of two fundamentally different archetypes: one emphasizing advantage from costs, the other from differentiation, each with broad and focused market-scope variants. Only a small number of mixed-emphasis designs will show above-average performance. As designs depart from these ideals and fail to achieve distinctive emphasis relative to competitors, they will record average or below-average performance.

As discussed above, measuring the distance between actual and ideal involves not only identifying distinctive emphasis in terms of one ideal, but also measuring proximity to competitors' standard in the other. Both versions of the theory stem, as we have seen, from interpretations that emphasize the essentialist differences between strategies designed to support cost advantage and differentiation advantage. Failure to choose between them is theorized to violate their distinctive requirements and to lead, in turn, to lower performance. In a similar way, failure to choose either a strategy adapted to a broad market scope, spanning many segments, or one that focuses on one or a few segments, is theorized to produce lower performance. An important aspect of this choice is that it defines the scope of competitors against which the firm seeks to be distinctive. Failure to define competitive scope results in poorly targeted designs and middling performance. The paradigm's theory of performance is thus U-shaped with respect to market scope, positing higher performance when designs are well adapted to either broad or focused target markets, and average or below-average performance for intermediate designs. The authors of the PIMS study pointed out that this U-shaped relationship with respect to market scope was not necessarily inconsistent with their clear result that performance improves with market share, because PIMS defines share relative to the firm's 'served market,' and this can be either broad or focused in Porter's terms (Buzzell and Gale, 1987: 85–86).
For both Porter and PIMS, successful competitive strategies are likely to produce strong market share in the served market.

STUDY ONE: META-DIMENSIONS OF COMPETITIVE STRATEGY

This section describes the meta-analytic method developed for this study, and applies it to summarizing the dimensions of competitive strategy, as described in the empirical record.

Meta-analysis method

Meta-analysis is the term used to describe a structured, quantified analysis of a body of empirical literature on a theorized relationship. Relative to literatures in applied psychology and organization behavior from which meta-analysis emerged, use of these techniques has been slow to spread to management disciplines. Marketing has been an early adopter (Farley, Lehmann, and Sawyer, 1995), and there are a handful of meta-analyses on relationships of interest to strategic management, i.e., the effect of formal planning on performance (Schwenk and Shrader, 1993); the association between industry concentration and performance (Datta and Narayanan, 1989); the effect of mergers and acquisitions on shareholder wealth (Datta, Pinches, and Narayanan, 1992); and the influence of a number of proposed drivers on innovation (Damanpour, 1991).

Methods of meta-analysis

Several meta-analytic methodologies have been developed (see Raju, Pappas, and Williams, 1989, and Hunter and Schmidt, 1990: 468–489, for introductions to the main methods). A distinction can be drawn between those methods that seek to produce a consistent aggregation of the empirical evidence on a relationship, and those which further seek to draw inferences from these aggregations on the size and variance of relationship effects in a population. Among the most widely used methods, the meta-analysis introduced by Glass and colleagues (Glass, McGaw, and Smith, 1981) is of the first, descriptive, type (Hunter and Schmidt, 1990: 479); and that developed by Schmidt and Hunter (Hunter and Schmidt, 1990) is of the inferential type.

Inferential meta-analyses have become a powerful tool for reducing estimated variance in a parameter (Hunter and Schmidt, 1990: 485) and hence uncovering nonzero effect sizes which had formerly been hidden by type II errors in individual studies (Schmidt, 1992). However, the benefits of inferential meta-analysis are gained at the cost of stringent requirements for the consistency of data (see Hunter and Schmidt, 1990: 480–481), several of which are not met in the empirical literature on generic competitive strategy. First, it must be possible to interpret each study as a random sample from a population.
Where one study reports more than one analysis on the same data (as in Hambrick, 1983; Galbraith and Schendel, 1983; and Douglas and Rhee, 1989), use of both analyses violates the independence assumption. Second, the studies must use the same variables in their specification of the relationship. Violation of this requirement is empirically important: failure to use identical model specification across studies has been found to represent the largest source of effect-size variance in meta-analyses in marketing (Farley et al., 1995). Third, where regression coefficients (or factor coefficients) are to be used, cumulation into a meta-analysis requires that these be measured using exactly the same scales (Hunter and Schmidt, 1990: 203–204).

Noncomparability of scales and model specification across studies is an inevitable feature of the comparative novelty of studies into competitive strategy, and its research designs. As shown above, Porter's paradigm of generic competitive strategy has been cast in a number of different interpretations, and researchers have had good reason to expand the list of elements of competitive strategy they wish to include in their analysis, and have often devised their own scales for these constructs. Furthermore, the polythetic nature of the concept of generic competitive strategy suggests research designs involving principal component analysis and cluster analysis. As with regression coefficients, scale noncomparability across studies makes the use of factor coefficients in a Schmidt–Hunter type meta-analysis problematic. More fundamentally, what is of interest in a meta-analysis of this literature is the cumulation of multivariate patterns of association between many elements of competitive strategy, and not one single effect size in a relationship. There are no established meta-analytics of the inferential type to deal with this situation.

A descriptive meta-analytic procedure for factor and cluster analysis

Although the barriers to an inferential meta-analysis appear insuperable at present, the methods developed for this study permit a descriptive meta-analysis of the empirical literature on competitive strategy. By constructing a consistent aggregation of the patterns of competitive-strategy design, the full weight of the empirical record can be applied to assess the validity of the paradigm's descriptive and theoretical propositions. Hence a descriptive meta-analysis is sufficient for the purpose of establishing a dialogue between the dominant paradigm and the empirical record, and the further development of the paradigm. Also, the research questions posed by the paradigm attach greater importance to the existence or otherwise of multivariate patterns than to the degree of closeness in those patterns. The additional precision in estimation of effect sizes, which is an important advantage of inferential meta-analyses, is of secondary importance here.

The first step in building the required meta-analysis is to produce consistent aggregates of the principal component solutions that are used to summarize and describe competitive strategy. Studies use principal component factor analysis to represent many elements of competitive strategy with a smaller number of factors, each of which represents an orthogonally independent dimension of competitive-strategy design (Kim and Mueller, 1978). The estimated factor coefficients also identify those elements which are most closely associated with each dimension.
The primary aim of a meta-analysis over several such studies should be to identify dimensions which best describe the totality of orthogonal factor solutions in the empirical literature. It is natural to refer to these as meta-dimensions of competitive strategy. The procedure assumes the presence of an unknown number of these orthogonal dimensions in the population of all competitive-strategy designs. Each study's factor solution is taken as a sample estimate of these population dimensions, and each study's estimate of the elements most closely associated with each factor is taken as a sample estimate of the elements most closely associated with a meta-dimension. Because of the above-noted variability in constructs and measures, cumulation of factor scores across studies is not meaningful. What amounts to a voting procedure is used instead. Each vector of factor coefficients reported in a study is transformed to a vector of 'votes': elements which show significant nonzero coefficients on the factor are coded 1; others are coded 0. Each vote vector is taken as a sample record that identifies those elements that are significantly associated with a meta-dimension of competitive strategy. Cluster analysis is then used to aggregate these multi-element vote vectors across studies into commonly occurring patterns. Each cluster of similar vote patterns, indicating which elements of competitive strategy are most often associated together with an orthogonal factor, is taken as the best aggregate description of a meta-dimension that can be derived from the empirical literature. Taken together, the set of clusters describes the number of orthogonal meta-dimensions of competitive strategy that have been isolated. Finally, the incidence of 'votes' for each element clustered together in a meta-dimension is compared to its overall frequency, using as a metric the standard test statistic for differences in proportions. As is well known, the use of cluster analysis to create categories violates the assumptions required to use these statistics to draw inferences from the sample of studies to a population. As discussed below, other methods must be used to assess the validity of the cluster solution (Ketchen and Shook, 1996). The statistic is used here for the simpler purpose of focusing the description of each meta-dimension on those elements of competitive strategy that are most distinctive of that dimension in the available empirical record.

The method follows the same logic, in a multivariate context, as the statistically correct bivariate procedure of vote counting, in which the proportion of studies with significant effect sizes is compared to that expected under a null hypothesis of no relationship between the variables (Hunter and Schmidt, 1990: 473). As Hunter and Schmidt note, a majority of nonzero effects is typically not needed to reject the null hypothesis, and it is the use of the majority criterion that is responsible for the errors associated with vote counting as a meta-analytic procedure. The focus of vote counting on the existence, rather than the effect size, of relationships is a recognized limitation of the method in univariate meta-analyses, but is appropriate for the purpose of isolating patterns of relationships, as in this case between elements of competitive strategy. One step in this procedure prohibits its use as an inferential meta-analysis, that is, the use of cluster analysis to aggregate vote vectors, and the consequent violation of the assumptions required to draw inferences to the population of all
competitive strategies. Instead, the procedure produces a descriptive aggregation of the accumulated evidence on the independent dimensions of competitive strategy, as they have emerged in the empirical record to date. More universal claims must await more powerful procedures.

When assembling multiple studies into a meta-analysis, the question arises whether or not to weight each study by sample size. Monte Carlo simulations suggest that large samples are to be preferred for their lower exposure to artifactual variation (Koslowsky and Sagie, 1994). On the other hand, when a meta-analysis includes studies that follow a skewed distribution of sample sizes, with outliers of very large or small samples, Osburn and Callender (1992) recommend the use of unweighted results, and conclude that there is little to be gained from sample-size weighting in most meta-analyses. The empirical literature on competitive strategy is highly skewed towards sample sizes of less than 100, with a long tail reaching out to n = 2578 (see Table 2). Accordingly, this meta-analysis uses vote vectors unweighted for sample size.

Methods of clustering appropriate to this meta-analysis

Use of cluster analysis in strategic management research has been critically reviewed by Ketchen and Shook (1996). They conclude that the design of these analyses must be careful to match the analysis to the type of data involved, and to assess the reliability of results by 'triangulating' the results.
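The vote-and-cluster procedure described above is straightforward to operationalize. The following sketch is an illustrative reconstruction rather than the study's actual code: it assumes a hypothetical FactorSolution type holding one study's factor coefficients over a shared element list, converts each factor into a binary vote vector using an assumed significance threshold, and computes a simple matching distance that a standard hierarchical clustering routine could then use to group vote vectors into meta-dimensions.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical types for illustration only; the study does not publish code.
// One study's factor solution: rows are factors, columns are coefficients on a
// common list of competitive-strategy elements.
public sealed class FactorSolution
{
    public string StudyId;
    public double[][] Coefficients;   // Coefficients[factor][element]
}

public static class VoteVectors
{
    // Transform each factor into a 0/1 vote vector: 1 where the element's
    // coefficient is treated as significantly nonzero, 0 otherwise.
    // The threshold stands in for whatever significance rule a study reports.
    public static List<int[]> ToVotes(IEnumerable<FactorSolution> studies,
                                      double threshold = 0.40)
    {
        var votes = new List<int[]>();
        foreach (var study in studies)
            foreach (var factor in study.Coefficients)
                votes.Add(factor.Select(c => Math.Abs(c) >= threshold ? 1 : 0)
                                .ToArray());
        return votes;   // these vectors are then clustered into meta-dimensions
    }

    // Simple matching distance between two vote vectors, a plausible input
    // metric for the subsequent cluster analysis.
    public static double Distance(int[] a, int[] b) =>
        a.Zip(b, (x, y) => x == y ? 0 : 1).Sum() / (double)a.Length;
}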
English Essay: Industry Standards and Technical Specifications in the IC Design Industry
Integrated Circuit (IC) design is a pivotal sector within the broader semiconductor industry, governed by a set of rigorous standards and technical specifications. These standards ensure the reliability, performance, and interoperability of ICs across various applications, ranging from consumer electronics to industrial automation. This article explores the key industry standards and technical specifications that define the landscape of IC design today.

At the heart of IC design standards lie the principles of functionality and reliability. Each IC must adhere to specifications that dictate its operational parameters, such as voltage levels, operating temperature range, and signal integrity. These specifications ensure that the IC functions correctly under diverse operating conditions, providing consistent performance throughout its lifespan.

Moreover, IC design standards encompass the methodologies and practices that designers follow during the design phase. These include guidelines for layout, routing, and verification processes to guarantee manufacturability and yield optimization. By adhering to standardized design practices, semiconductor companies can streamline their development cycles and reduce time-to-market for new IC products.

Furthermore, standards in IC design extend to the realm of packaging and testing. Packaging standards define the physical and electrical interfaces of ICs, ensuring compatibility with different assembly technologies and environmental conditions. Concurrently, testing standards dictate the procedures for validating IC functionality post-production, encompassing techniques like wafer probing and final testing to weed out defective units before shipment.

In addition to performance and reliability, IC design standards emphasize the importance of power efficiency and scalability. Modern ICs are designed to minimize power consumption while maximizing computational throughput, catering to energy-conscious applications such as mobile devices and IoT (Internet of Things) sensors. Scalability standards enable IC designs to evolve seamlessly, accommodating future technological advancements without requiring substantial redesign efforts.

Moreover, as ICs continue to integrate complex functionalities into smaller form factors, standards for electromagnetic compatibility (EMC) and electrostatic discharge (ESD) become increasingly critical. These standards mitigate interference issues and ensure robustness against transient electrical events, safeguarding ICs from operational disruptions and enhancing overall system reliability.

The evolution of IC design standards is driven by collaborative efforts among industry consortia, regulatory bodies, and academic institutions. Standards organizations such as IEEE (Institute of Electrical and Electronics Engineers) and SEMI (Semiconductor Equipment and Materials International) play pivotal roles in defining and updating these standards to reflect technological advancements and market demands.

In conclusion, the integration of IC design standards and technical specifications forms the cornerstone of a vibrant and innovative semiconductor industry. By adhering to these standards, semiconductor companies can deliver reliable, high-performance IC solutions that meet the stringent requirements of modern electronic applications. As technology continues to advance, adherence to standardized practices ensures that IC designs remain at the forefront of innovation, driving progress across diverse sectors worldwide.
Comparison of VHDL, Verilog, and SystemVerilog
Digital Simulation White Paper

Comparison of VHDL, Verilog and SystemVerilog

Stephen Bailey
Technical Marketing Engineer
Model Technology

Introduction

As the number of enhancements to various Hardware Description Languages (HDLs) has increased over the past year, so too has the complexity of determining which language is best for a particular design. Many designers and organizations are contemplating whether they should switch from one HDL to another. This paper compares the technical characteristics of three general-purpose HDLs:

• VHDL (IEEE-Std 1076): A general-purpose digital design language supported by multiple verification and synthesis (implementation) tools.
• Verilog (IEEE-Std 1364): A general-purpose digital design language supported by multiple verification and synthesis tools.
• SystemVerilog: An enhanced version of Verilog. As SystemVerilog is currently being defined by Accellera, there is not yet an IEEE standard.

General Characteristics of the Languages

Each HDL has its own style and heredity. The following descriptions provide an overall "feel" for each language. A table at the end of the paper provides a more detailed, feature-by-feature comparison.

VHDL

VHDL is a strongly and richly typed language. Derived from the Ada programming language, its language requirements make it more verbose than Verilog. The additional verbosity is intended to make designs self-documenting. Also, the strong typing requires additional coding to explicitly convert from one data type to another (integer to bit-vector, for example).

The creators of VHDL emphasized semantics that were unambiguous and designs that were easily portable from one tool to the next. Hence, race conditions, as an artifact of the language and tool implementation, are not a concern for VHDL users.

Several related standards have been developed to increase the utility of the language. Any VHDL design today depends on at least IEEE-Std 1164 (std_logic type), and many also depend on standard Numeric and Math packages as well. The development of related standards is due to another goal of VHDL's authors: namely, to produce a general language and allow development of reusable packages to cover functionality not built into the language.

VHDL does not define any simulation control or monitoring capabilities within the language. These capabilities are tool dependent. Due to this lack of language-defined simulation control commands, and also because of VHDL's user-defined type capabilities, the VHDL community usually relies on interactive GUI environments for debugging design problems.

Verilog

Verilog is a weakly and limited typed language.
Its heritage can be traced to the C programming language and an older HDL called Hilo. All data types in Verilog are predefined in the language. Verilog recognizes that all data types have a bit-level representation. The supported data representations (excluding strings) can be mixed freely in Verilog.

Simulation semantics in Verilog are more ambiguous than in VHDL. This ambiguity gives designers more flexibility in applying optimizations, but it can also (and often does) result in race conditions if careful coding guidelines are not followed. It is possible to have a design that generates different results on different vendors' tools or even on different releases of the same vendor's tool.

Unlike the creators of VHDL, Verilog's authors thought that they provided designers everything they would need in the language. The more limited scope of the language, combined with the lack of packaging capabilities, makes it difficult, if not impossible, to develop reusable functionality not already included in the language.

Verilog defines a set of basic simulation control capabilities (system tasks) within the language. As a result of these predefined system tasks and a lack of complex data types, Verilog users often run batch or command-line simulations and debug design problems by viewing waveforms from a simulation results database.

SystemVerilog

Though the parent of SystemVerilog is clearly Verilog, the language also benefits from a proprietary Verilog extension known as Superlog and tenets of the C and C++ programming languages. SystemVerilog extends Verilog by adding a rich, user-defined type system. It also adds strong-typing capabilities, specifically in the area of user-defined types. However, the strength of the type checking in VHDL still exceeds that in SystemVerilog. And, to retain backward compatibility, SystemVerilog retains weak typing for the built-in Verilog types.

Since SystemVerilog is a more general-purpose language than Verilog, it provides capabilities for defining and packaging reusable functionality not already included in the language. SystemVerilog also adds capabilities targeted at testbench development, assertion-based verification, and interface abstraction and packaging.

Pros and Cons of Strong Typing

The benefit of strong typing is finding bugs in a design as early in the verification process as possible. Many problems that strong typing uncovers are identified during analysis/compilation of the source code. And with run-time checks enabled, more problems may be found during simulation.

The downside of strong typing is performance. Compilation tends to be slower as tools must perform checks on the source code. Simulation, when run-time checks are enabled, is also slower due to the checking overhead. Furthermore, designer productivity can be lower initially as the designer must write type conversion functions and insert type casts or explicitly declared conversion functions when writing code.

The $1,000,000 question is this: do the benefits of strong typing outweigh the costs? There isn't one right answer to the question. In general, the VHDL language designers wanted a safe language that would catch as many errors as possible early in the process. The Verilog language designers wanted a language that designers could use to write models quickly. The designers of SystemVerilog are attempting to provide the best of both worlds by offering strong typing in areas of enhancement while not significantly impacting code writing and modeling productivity.
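As a language-neutral illustration of this tradeoff (using C# rather than an HDL, purely as an analogy), the short program below shows the strong-typing side of the bargain: an implicit narrowing assignment is rejected at compile time, and the designer must write an explicit, self-documenting conversion, much as VHDL requires an explicit integer-to-bit-vector conversion, whereas a weakly typed language would silently truncate the value.

using System;

class StrongTypingAnalogy
{
    static void Main()
    {
        // Analogy only: C# standing in for a strongly typed HDL such as VHDL.
        int wideCounter = 300;

        // short narrow = wideCounter;              // compile-time error in C#
        short narrow = checked((short)wideCounter); // explicit cast; overflow would throw at run time

        Console.WriteLine(narrow);

        // A weakly typed language would instead truncate silently when the
        // target is too narrow: convenient to write, but it can hide bugs
        // until simulation (or silicon).
    }
}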
Summary

With all of the recent publicity surrounding languages and standards, many people are wondering where to go next. The answer to this question will vary greatly by designer and organization. In addition to the language feature comparison above, here are some final points to consider:

• SystemVerilog is an emerging standard that is still evolving. With a compelling set of features, SystemVerilog is the likely migration path for current Verilog users. However, widespread tool support won't be available until the specification stabilizes.
• For VHDL users, many of the SystemVerilog and Verilog 2001 enhancements are already available in the VHDL language. There is also a new VHDL enhancement effort underway that will add testbench and expanded assertions capabilities to the language (the two areas where SystemVerilog will provide value over VHDL 2002). Considering the cost in changing processes and tools and the investment required in training, moving away from VHDL would have to be very carefully considered.
The Twelve Core Concepts of the Computing Discipline
Below is a list of twelve core concepts that pervade the discipline and are independent of any particular technology. Each of them

▪ Occurs throughout the discipline
▪ Has a variety of instantiations
▪ Has a high degree of technological independence

In addition to the three characteristics given above, most core concepts

▪ Have instantiations at the levels of theory, abstraction and design
▪ Have instantiations in each of the nine subject areas
▪ Occur generally in mathematics, science and engineering

1. Binding

Binding is the process of making an abstraction more concrete by associating additional properties with it. Examples include assigning a process to a processor, associating a type with a variable name, associating a library object program with a symbolic reference to a subprogram, instantiation in logic programming, associating a method with a message in an object-oriented language, and creating concrete instances from abstract descriptions.

2. Complexity of Large Problems

The effects of the nonlinear increase in complexity as the size of a problem grows. This is an important factor in distinguishing and selecting methods that scale to different data sizes, problem spaces, and program sizes. In large programming projects, it is a factor in determining the organization of an implementation team.

3. Conceptual and Formal Models

Various ways of formalizing, characterizing, visualizing and thinking about an idea or problem. Examples include formal models in logic, switching theory and the theory of computation, programming language paradigms based upon formal models, conceptual models such as abstract data types and semantic data models, and visual languages used in specifying and designing systems, such as data flow and entity-relationship diagrams.

4. Consistency and Completeness

Concrete realizations of the concepts of consistency and completeness in computing, including related concepts such as correctness, robustness, and reliability. Consistency includes the consistency of a set of axioms that serve as a formal specification, the consistency of theory to observed fact, and internal consistency of a language or interface design. Correctness can be viewed as the consistency of component or system behavior to stated specifications. Completeness includes the adequacy of a given set of axioms to capture all desired behaviors, functional adequacy of software and hardware systems, and the ability of a system to behave well under error conditions and unanticipated situations.

5. Efficiency

Measures of cost relative to resources such as space, time, money and people. Examples include the theoretical assessment of the space and time complexity of an algorithm, feasibility, the efficiency with which a certain desirable result (such as the completion of a project or the manufacture of a component) can be achieved, and the efficiency of a given implementation relative to alternative implementations.

6. Evolution

The fact of change and its implications. The impact of change at all levels, and the resiliency and adequacy of abstractions, techniques and systems in the face of change. Examples include the ability of formal models to represent aspects of systems that vary with time, the ability of a design to withstand changing environmental demands and changing requirements, and tools and facilities for configuration management.
7. Levels of Abstraction

The nature and use of abstraction in computing; the use of abstraction in managing complexity, structuring systems, hiding details, and capturing recurring patterns; the ability to represent an entity or system by abstractions having different levels of detail and specificity. Examples include levels of hardware description, levels of specificity within an object hierarchy, the notion of generics in programming languages, and the levels of detail provided in a problem solution from specifications through code.

8. Ordering in Space

The concepts of locality and proximity in the discipline of computing. In addition to physical location, as in networks or memory, this includes organizational location (e.g., of processors, processes, type definitions, and associated operations) and conceptual location (e.g., software scoping, coupling, and cohesion).

9. Ordering in Time

The concept of time in the ordering of events. This includes time as a parameter in formal models (e.g., in temporal logic), time as a means of synchronizing processes that are spread out over space, and time as an essential element in the execution of algorithms.

10. Reuse

The ability of a particular technique, concept or system to be reused in a new context or situation. Examples include portability, the reuse of software libraries and hardware components, technologies that promote reuse of software components, and language abstractions that promote the development of reusable software modules.

11. Security

The ability of software and hardware systems to respond appropriately to and defend themselves against inappropriate and unanticipated requests; the ability of a computer installation to withstand catastrophic events (e.g., natural disasters and attempts at sabotage). Examples include type-checking and other concepts in programming languages that provide protection against misuse of data objects and functions, data encryption, granting and revoking of privileges by a database management system, features in user interfaces that minimize user errors, physical security measures at computer facilities, and security mechanisms at various levels in a system.

12. Tradeoff and Consequences

The phenomenon of trade-offs in computing and the consequences of such trade-offs. The technical, economic, cultural and other effects of selecting one design alternative over another. Trade-offs are a fundamental fact of life at all levels and in all subject areas. Examples include space-time trade-offs in the study of algorithms, trade-offs inherent in conflicting design objectives (e.g., ease of use versus completeness, flexibility versus simplicity, low cost versus high reliability and so forth), design trade-offs in hardware, and trade-offs implied in attempts to optimize computing power in the face of a variety of constraints.
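As a concrete illustration of the efficiency and trade-off concepts above (an added example, not part of the original list), the short C# program below contrasts a naive recursive Fibonacci with a memoized version: the cache spends memory to avoid recomputation, a classic space-time trade-off.

using System;
using System.Collections.Generic;

class SpaceTimeTradeoff
{
    // Naive version: exponential time, constant extra space.
    static long FibNaive(int n) =>
        n < 2 ? n : FibNaive(n - 1) + FibNaive(n - 2);

    // Memoized version: linear time, but linear extra space for the cache.
    static readonly Dictionary<int, long> Cache = new Dictionary<int, long>();

    static long FibMemo(int n)
    {
        if (n < 2) return n;
        if (Cache.TryGetValue(n, out long cached)) return cached;
        long result = FibMemo(n - 1) + FibMemo(n - 2);
        Cache[n] = result;   // spend space here to avoid repeated work later
        return result;
    }

    static void Main()
    {
        Console.WriteLine(FibMemo(50));   // fast: roughly 50 cached entries
        Console.WriteLine(FibNaive(35));  // exponential recursion: tens of millions of calls
    }
}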
Glossary of IC Design Terminology (English Version)
Integrated Circuit (IC) Design: The process of creating a blueprint for the manufacturing of integrated circuits, such as microchips, using specialized software and tools. IC design involves several stages, including architectural design, logic design, circuit design, physical design, and verification. Architectural design establishes the high-level functionality and organization of the circuit, determining the overall structure and major components. Logic design involves the translation of the architectural design into a set of logic equations and functional blocks, specifying the logical operation of the circuit. Circuit design focuses on the actual implementation of the logic design, defining the electrical connections and components needed to achieve the desired functionality. Physical design, also known as layout design, involves the placement and routing of the components to ensure proper functioning and optimal performance, considering factors such as power consumption, signal integrity, and manufacturing constraints. Verification is the process of ensuring that the designed circuit meets the specified requirements and functions correctly under various conditions.

Field-Programmable Gate Array (FPGA): An integrated circuit that can be configured by the user after manufacturing. FPGAs contain an array of programmable logic blocks and interconnects, allowing for the implementation of various digital circuits.

Hardware Description Language (HDL): A specialized programming language used to describe the behavior and structure of electronic circuits, facilitating the design and simulation of digital systems. Common HDLs include Verilog and VHDL.

Electronic Design Automation (EDA) Tools: Software tools used in the design of electronic systems, including integrated circuits. EDA tools automate various stages of the design process, from schematic capture and simulation to layout and verification. Some popular EDA tools include Cadence Virtuoso, Synopsys Design Compiler, and Mentor Graphics Calibre.

Very-Large-Scale Integration (VLSI): The process of integrating thousands or millions of transistors into a single chip. VLSI technology enables the creation of complex, high-performance integrated circuits, such as microprocessors and memory chips, by packing a large number of transistors into a small area.

Application-Specific Integrated Circuit (ASIC): An integrated circuit customized for a particular application or purpose. Unlike FPGAs, ASICs are manufactured to perform a specific function, offering advantages in terms of performance, power consumption, and cost for mass production. ASIC design involves the development of custom circuitry optimized for a particular application, often using standard cell libraries and specialized design methodologies.
Generics in Delphi: Basics and Simple Applications
Delphi 2010 has been out for quite a while now and has a good reputation, so I plan to use it for my next project. My understanding of the new features such as generics is still limited, so I searched around and found the article below, which is quite good; let's catch up on this topic together. Templates in C++ and generics in languages such as C# give developers who work with data of many different types a very convenient mechanism, and the 'variability' of the types involved has delighted many developers who have used them. Versions of Delphi before Delphi 2009, however, never had anything like this, which drew loud complaints from developers who did not know how to use Delphi's TList. In my personal opinion, a well-used TList is actually more convenient than the generics of other languages (except, of course, for people who do not understand pointers or memory allocation and deallocation). Since the release of Delphi 2009, developers who like generic programming have had it much easier, but because Delphi 2009 was not very stable, I did not use its generics much. Since Delphi 2010 was released, claims like 'First impressions of Delphi 2010: it's time to abandon Delphi 7' have been everywhere, which made me want to see for myself.

To get to the point: Delphi 2010's generics units are Generics.Defaults and Generics.Collections. The key one is Generics.Collections, which provides the generic TArray class, TList<T> (generic list), TQueue<T> (generic queue), TStack<T> (generic stack), TDictionary<TKey,TValue> (generic hash table), and the corresponding TObject-based variants of each, so the coverage is very broad.
A simple generic class example (reposted):
-----------------------------------------------------------------------------------------------
unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls;

type
  TForm1 = class(TForm)
    Memo1: TMemo;
    Edit1: TEdit;
    Edit2: TEdit;
    Button1: TButton;
    Button2: TButton;
    Button3: TButton;
    Button4: TButton;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure Button1Click(Sender: TObject);
    procedure Button2Click(Sender: TObject);
    procedure Button3Click(Sender: TObject);
    procedure Button4Click(Sender: TObject);
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

uses Generics.Collections; {Delphi generic container unit}

var
  Dictionary: TDictionary<Cardinal, string>; {a generic TDictionary built from Cardinal keys and string values}

{create}
procedure TForm1.FormCreate(Sender: TObject);
begin
  Dictionary := TDictionary<Cardinal, string>.Create;
  Memo1.Clear;
  Button1.Caption := Button1.Caption + ' Add';
  Button2.Caption := Button2.Caption + ' Remove';
  Button3.Caption := Button3.Caption + ' TryGetValue';
  Button4.Caption := Button4.Caption + ' Clear';
  Edit1.Clear;
  Edit2.Clear;
  Edit1.NumbersOnly := True;
end;

{free}
procedure TForm1.FormDestroy(Sender: TObject);
begin
  Dictionary.Free;
end;

{add}
procedure TForm1.Button1Click(Sender: TObject);
var
  key: Cardinal;
  value: string;
  str: string;
  k, v: Boolean;
begin
  key := StrToIntDef(Edit1.Text, 0);
  value := Edit2.Text;
  if value = '' then value := 'Null';
  k := Dictionary.ContainsKey(key);     {does the key exist?}
  v := Dictionary.ContainsValue(value); {does the value exist?}
  if not k then
  begin
    Dictionary.Add(key, value);
    Memo1.Lines.Add(Format('%d=%s', [key, value])); {keep the display in sync}
  end;
  if k and not v then
  begin
    str := Format('Key already exists: %d=%s; modify its value?', [key, Dictionary[key]]);
    if MessageBox(0, PChar(str), 'Confirm', MB_OKCANCEL or MB_ICONQUESTION) = mrOk then
    begin
      //Dictionary[key] := value; {Dictionary[key] = Dictionary.Item[key]}
      Dictionary.AddOrSetValue(key, value); {the commented line above would also work}
      Memo1.Lines.Values[IntToStr(key)] := value; {keep the display in sync}
    end;
  end;
  if k and v then
  begin
    str := Format('%d=%s already exists and cannot be added again', [key, value]);
    MessageBox(0, PChar(str), 'Error', MB_OK + MB_ICONHAND);
  end;
  Text := IntToStr(Dictionary.Count);
end;

{delete: Remove}
procedure TForm1.Button2Click(Sender: TObject);
var
  key: Integer;
  i: Integer;
begin
  key := StrToIntDef(Edit1.Text, 0);
  if not Dictionary.ContainsKey(key) then
  begin
    ShowMessageFmt('key: %d does not exist', [key]);
    Exit;
  end;
  Dictionary.Remove(key);
  Text := IntToStr(Dictionary.Count);
  {keep the display in sync}
  i := Memo1.Lines.IndexOfName(IntToStr(key));
  if i > -1 then Memo1.Lines.Delete(i);
end;

{try to fetch a value: TryGetValue}
procedure TForm1.Button3Click(Sender: TObject);
var
  key: Integer;
  value: string;
begin
  key := StrToIntDef(Edit1.Text, 0);
  if Dictionary.TryGetValue(key, value) then
    ShowMessageFmt('key: %d exists; its value is: %s', [key, value])
  else
    ShowMessageFmt('key: %d does not exist', [key])
end;

{empty: Clear}
procedure TForm1.Button4Click(Sender: TObject);
begin
  Dictionary.Clear;
  Text := IntToStr(Dictionary.Count);
  Memo1.Clear; {keep the display in sync}
end;

end.
--------------------------------------------------------------------------------
A custom generic type example (reposted):
--------------------------------------------------------------------------------
unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, StdCtrls;

type
  TForm1 = class(TForm)
    Memo1: TMemo;
    Button1: TButton;
    Button2: TButton;
    Button3: TButton;
    Button4: TButton;
    Button5: TButton;
    procedure Button1Click(Sender: TObject);
    procedure Button2Click(Sender: TObject);
    procedure Button3Click(Sender: TObject);
    procedure Button4Click(Sender: TObject);
    procedure Button5Click(Sender: TObject);
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

type
  TArr<T> = array[0..9] of T; {declare a generic array}
  {T is the customary name for a type parameter, but any other legal identifier works too}

{used with Integer}
procedure TForm1.Button1Click(Sender: TObject);
var
  Arr: TArr<Integer>;
  i: Integer;
begin
  for i := Low(Arr) to High(Arr) do
    Arr[i] := i * i;
  Memo1.Clear;
  for i := Low(Arr) to High(Arr) do
    Memo1.Lines.Add(Format('Arr[%d] = %d', [i, Arr[i]]));
end;

{used with string}
procedure TForm1.Button2Click(Sender: TObject);
var
  Arr: TArr<string>;
  i: Integer;
begin
  for i := Low(Arr) to High(Arr) do
    Arr[i] := StringOfChar(Char(i + 97), 3);
  Memo1.Clear;
  for i := Low(Arr) to High(Arr) do
    Memo1.Lines.Add(Format('Arr[%d] = %s', [i, Arr[i]]));
end;

{used with Single}
procedure TForm1.Button3Click(Sender: TObject);
var
  Arr: TArr<Single>;
  i: Integer;
begin
  for i := Low(Arr) to High(Arr) do
    Arr[i] := 100 / (i + 1);
  Memo1.Clear;
  for i := Low(Arr) to High(Arr) do
    Memo1.Lines.Add(Format('Arr[%d] = %f', [i, Arr[i]]));
end;

{used with the TPoint record}
procedure TForm1.Button4Click(Sender: TObject);
var
  Arr: TArr<TPoint>;
  i: Integer;
begin
  for i := Low(Arr) to High(Arr) do
    Arr[i] := Point(i, i * 2);
  Memo1.Clear;
  for i := Low(Arr) to High(Arr) do
    Memo1.Lines.Add(Format('Arr[%d] = (%d,%d)', [i, Arr[i].X, Arr[i].Y]));
end;

{used with the TButton class}
procedure TForm1.Button5Click(Sender: TObject);
var
  Arr: TArr<TButton>;
  i: Integer;
begin
  for i := Low(Arr) to High(Arr) do
  begin
    Arr[i] := TButton.Create(Self);
    Arr[i].Name := Concat('Btn', IntToStr(i + 1));
  end;
  Memo1.Clear;
  for i := Low(Arr) to High(Arr) do
    Memo1.Lines.Add(Format('Arr[%d] is %s', [i, Arr[i].Name]));
end;

end.

I. Overview

Having waited for what feels like ages, Delphi finally supports generics natively.
A Retargetable C Compiler
A Research C# CompilerD AVID R.H ANSON AND T ODD A.P ROEBSTINGMicrosoft Research, 1 Microsoft Way, Redmond, WA 98052 USAdrh@ toddpro@SummaryC# is the new flagship language in the Microsoft .NET platform. C# is an attractive vehicle forlanguage design research not only because it shares many characteristics with Java, the currentlanguage of choice for such research, but also because it’s likely to see wide use. Languageresearch needs a large investment in infrastructure, even for relatively small studies. This paperdescribes a new C# compiler designed specifically to provide that infrastructure. The overalldesign is deceptively simple. The parser is generated automatically from a possibly ambiguousgrammar, accepts C# source, perhaps with new features, and produces an abstract syntax tree, orAST. Subsequent phases—dubbed visitors—traverse the AST, perhaps modifying it, annotating itor emitting output, and pass it along to the next visitor. Visitors are specified entirely atcompilation time and are loaded dynamically as needed. There is no fixed set of visitors, andvisitors are completely unconstrained. Some visitors perform traditional compilation phases, butthe more interesting ones do code analysis, emit non-traditional data such as XML, and displaydata structures for debugging. Indeed, most usage to date has been for tools, not for languagedesign experiments. Such experiments use source-to-source transformations or extend existingvisitors to handle new language features. These approaches are illustrated by adding a statementthat switches on a type instead of a value, which can be implemented in a few hundred lines. Thecompiler also exemplifies the value of dynamic loading and of type reflection.Keywords: Compiler architecture, abstract syntax trees, .NET, C# programming language, visitor pattern, ob-ject-oriented programmingIntroductionC# [1, 2] is the preeminent programming language in the Microsoft .NET platform. The .NET platform includes tools, technologies, and methodologies for writing internet applications [3]. It includes pro-gramming languages, tools that support XML web services, and new infrastructure for writing HTML pages and Windows applications. At its core are a new virtual machine and an extensive runtime envi-ronment. Compilers for C# and other .NET languages generate code for this virtual machine, called the .NET Common Intermediate Language or MSIL for short. MSIL provides a low-level, executable, type-safe program representation that can be verified before execution, much in the same way as the JavaVM [4] provides a verifiable representation for Java programs. It is, however, designed specifically to support multiple languages on modern processors.C# is a high-level, type-safe, object-oriented programming language. It has many of the same features as Java, but it also has language-level support for properties, events, attributes, and interoperability with other languages. C# also has operator overloading, enumerations, value types, and language constructs for iterating over collections.Java is often the language of choice for experimental programming language research. Research fo-cuses either on the Java VM or on changes to Java itself. Adding generics to Java is an example of the latter focus [5]. C# is an attractive platform for language research because it is in the same language ‘fam-ily’ as Java and because it is likely to become used widely. 
Microsoft’s C# is available on Windows and on FreeBSD as part of the Rotor [6] distribution, and the Mono Project [7] is developing a C# compiler for Linux, so language researchers seeking wide impact for their results may want to use C#. Also, C# will undoubtedly evolve over time and is thus open to future additions, so language research results mightfind their way into wide use. Again, adding generics to C# and to MSIL [8] is an example of the possibili-ties.Programming language research on new language features and their implementation requires a signifi-cant investment in a suitable compilation infrastructure. At the minimum, such work needs a compiler that accepts the full language, is easily changed, and can compile significant programs quickly. Besides the usual complexity inherent in all non-trivial compilers, there is a natural tension between flexibility and performance, both of the compiler itself and of the generated code. Wonderfully flexible compilers that accept only a subset of the language, are too slow, or produce incorrect or very inefficient code don’t get used. The same fate befalls compilers that generate highly optimized code but that are too complex to un-derstand and modify easily.This note describes a new C# compiler designed specifically for use in language research. The goal of this compiler, named lcsc (for local C sharp compiler), is to facilitate experiments with a wide range of language-level features that require new syntax and semantics. Examples include simple additions like a Modula-3 TYPECASE statement [9] to more exotic features like Icon iterators [10], program histo-ries [11], and dynamically scoped variables [12]. Lcsc is not designed for code generation research, per se, because it has no support specifically for native code generation, but there’s nothing in its design that precludes such work. By default, it emits MSIL.The design of lcsc is particularly simple and it is easy to modify and to extend, and it’s fast enough for experimental purposes, albeit perhaps an order of magnitude slower than the production C# compiler in .NET. However, implementation languages account for some of the speed difference: lcsc is written in C# and makes heavy use of automatic memory management and the .NET compiler is written in C++. Early experience with lcsc confirms that it is easy to extend and to modify, particularly if the new features can be modeled in C# itself.Perhaps surprisingly, most of the use of lcsc to date is for C# language tools and program analysis, and not for language research. Of course, there are more tool developers than language researchers, but this usage was not anticipated and was not factored into the design. The end result is, however, that lcsc’s de-sign, which is based on abstract-syntax trees, is a good infrastructure for language-level tools. Using a similar approach for other languages might harvest similar benefits.DesignFrom 30,000 feet, lcsc’s design is dead simple: The parser reads the C# source input and builds an ab-stract syntax tree [13], or AST, and subsequent phases, such as type checking and code generation, trav-erse the AST, perhaps leaving annotations on its nodes. The details are, of course, more involved, but the basic design dictates little beyond the ASTs.The design is intentionally lean: there is no explicit support for parsing fragments of the grammar, in-cremental compilation, etc., and there is no GUI interactive development or support for using one. 
Some of these features could be provided by adding new start symbols to the grammar and specialized AST tra-versals, and the Eclipse Platform [14] may be an appropriate development environment infrastructure.To date, lcsc has been used only for experiments and tools involving C#. While its components are specific to C#, they could, in principle, be revised for other, similar languages.ParsingThe parser is generated automatically from a C# BNF grammar that is nearly identical to the grammar given in the language specification [1]. This grammar is ambiguous, and the home-grown parser generator accepts ambiguous grammars. The generated parser is a generalized LR parser [15, 16]; for inputs that have ambiguous parses, the parser builds multiple parse trees. The number of parse trees is theoretically unbounded, but for practical programming language grammars, there are few alternatives, which can usu-ally be resolved by inspecting the alternative subtrees, as described below.The parser generator also emits code to build an AST bottom-up from the parse trees. Code fragments that return AST nodes are associated with each production in the grammar, as exemplified by the produc-tions for the if-statement:if-statement if(boolean-expression)embedded-statementstatement new if_statement(a3,a5,null) if-statement if(boolean-expression)embedded-statement else embedded-statementstatement new if_statement(a3,a5,a7)In the productions, nonterminals appear in italics and terminals in a typewriter font. The code frag-ment—the ‘action’ in grammar parlance—appears on the far right in the lines following the productions. The occurrences of statement identify the abstract type of the AST nodes returned by the new expres-sions. The a3, a5, and a7 refer to the AST values returned by the corresponding grammar symbols in the rule in the order in which they occur, e.g., boolean-expression, and the two occurrences of embedded-statement.Ambiguities are resolved during AST construction by examining alternative parse trees. If a parse tree node has more than one alternative, the set of alternatives is passed to the user-defined method resolve, which inspects the alternatives and perhaps the context in which they occur, chooses one, and returns the appropriate AST node. While this ad hoc approach does require some iteration to discover and handle all the ambiguities, its cost is too small to warrant more sophisticated mechanisms [17]. Per-node resolution code is short, usually less than a dozen lines; the C# if-statement takes 17 lines and is one of the most complicated. The entire body of resolve for C# is only 114 lines.A novelty of the parser generator is that it reads the grammar specification from Excel spreadsheets, which eliminates much of the code that parses the grammar specification in other generators, provides some error checking, and serves as a simply GUI to the grammar. Nonterminals, rules, types, and actions each appear in separate columns, one production per row. The type column, which holds statement in the if-statement example above, is computed using Excel formulas. A separate ‘sheet’ in the spreadsheet lists the AST type for each nonterminal, and formulas are used to propagate those types into the third col-umn of the 476 productions in the grammar sheet. Separate sheets are also used to list keywords and op-erator tokens.The types sheet also gives the default AST expression for each nonterminal, which is used when an op-tional occurrence of a nonterminal appears in a rule. 
For example,block{statement-list opt} statement new block_statement(a2) specifies that a block is an optional statement-list enclosed in braces. If statement-list is omitted, its de-fault from the types sheet (a zero-length statementList) is supplied as the value of a2. Incidentally, optional elements are written exactly as this example shows, with a subscript ‘opt’.Using an Excel spreadsheet makes the grammar easy to augment. Additional columns can be added for, say, comments or annotations for other tools, without modifying the parser generator, which inspects only specific columns. Finally, the parser generator accepts one or more spreadsheets, so language exten-sions can be specified in a separate spreadsheet without modifying the core C# grammar.There are, of course, disadvantages to using Excel. Even viewing the grammar requires running Excel. While it’s easy to extract the spreadsheet data as plain text in several formats, some are idiosyncratic. The 330-line parser-generator module that extracts the grammar is thus Excel-specific and cannot be reused easily. Packaging the grammar and its associated data as XML is an attractive alternative, in part because XML is ubiquitous and platform independent, and there are many XML-related tools. A good XML editor could, for example, provide a GUI for grammar and structured editing much as Excel does.SemanticsThe parser returns a single AST for each input source file. For multiple source files, these ASTs are stitched together into a single AST for subsequent processing. Semantic processing is specified entirely by command-line options, which specify AST ‘visitors’ and the order in which they are invoked. Visitors are packaged as separate classes in .NET dynamically linked libraries, DLLs for short. For example,lcsc –visitor:XML foo.csparses foo.cs and passes the resulting AST to the XML class in XML.dll . By convention, the AST ispassed to the static method visit ; there are provisions for passing additional string arguments, when necessary, and for specifying the DLL file name. Visitor classes [18] traverse the AST, perhaps annotating it, transforming it, or emitting output, and re-turn the AST for subsequent passes. For example, traditional compilation is accomplished bylcsc –visitor:bind –visitor:typecheck –visitor:rewrite –visitor:ilgen foo.cswhich parses foo.cs , binds names, type checks, rewrites some ASTs, and generates and assembles MSIL code. The bind visitor hangs symbols on the AST nodes and collects symbols into symbol tables, and typecheck uses these data to compute the types for expressions and drops types on those nodes. Re-write modifies some ASTs to simplify code generation, e.g., it turns post-increment expressions into addi-tions and assignments. Finally, most of ilgen simply emits MSIL code, but even ilgen annotates some nodes: it assigns internal label numbers to loop and iteration statements.Dynamic loading permits visitors to access data structures built by other visitors directly. Without dy-namic loading, the AST and its associated data structures would have to serialized and passed via files or pipes to subsequent visitors. This approach has been used often by us and others (see, for example, Ref. [19]), but it’s slower, can induce restrictions on data structures, and may be prone to serialization errors. From a packaging viewpoint, dynamic loading is much simpler.There are 164 AST node classes, 16 of which are abstract classes. For example, statement and ex-pression are abstract classes. 
There are 164 AST node classes, 16 of which are abstract classes. For example, statement and expression are abstract classes. The ASTs describe the source-level structure of the input program; the class for an if-statement is typical:

    public class if_statement: statement {
        public if_statement(expression expr, statement thenpart, statement elsepart) {
            this.expr = expr;
            this.thenpart = thenpart;
            this.elsepart = elsepart;
        }
        public expression expr;
        public statement thenpart;
        [MayBeNull] public statement elsepart;
        public override void visit(ASTVisitor prefix, ASTVisitor postfix) { … }
    }

An if_statement node is created by a new expression, which calls the constructor to fill in the fields. The visit method is described below. Nodes with multiple children use type-safe lists that can be indexed like arrays, e.g., the -List types in

    public class class_declaration: declaration {
        public class_declaration(IList attrs, IList mods, InputElement id, IList bases, IList body) { … }
        public attribute_sectionList attrs;
        public typeList bases;
        public declarationList body;
        public InputElement id;
        …
    }

InputElements are tokens and identify the token type, the specific token instance, and its location in the source code.

Much of the code in a visitor is boilerplate traversal code. Included with lcsc is mkvisitor, a tool that uses type reflection to generate a complete visitor that can be subsequently edited to suit its specific task. For example,

    mkvisitor -args "SymbolTable bindings" > visitor.cs

generates the following C# code for the if_statement class shown above.

    void if_statement(if_statement ast, SymbolTable bindings) {
        expression(ast.expr, bindings);
        statement(ast.thenpart, bindings);
        if (ast.elsepart != null)
            statement(ast.elsepart, bindings);
    }

The [MayBeNull] annotation on the field elsepart in the declaration of if_statement above is a C# attribute, and it indicates that the field may be null. These kinds of attributes are used in mkvisitor and other program generation tools to emit appropriate guards to avoid traversing valid null ASTs. Typical visitors run around 2000 lines of C#, of which about 830 are generated by mkvisitor.

Mkvisitor uses type reflection to discover the AST vocabulary, but this approach is not, of course, the only one. Mkvisitor could read the AST source code, or it could read some other specification of the ASTs. But using reflection is perhaps the easiest of these approaches and it fits well into the lcsc build process: whenever the AST source code is revised and recompiled, mkvisitor gets the revised definitions automatically.

While the full visitor machinery is used for most compilation passes, there is a simpler mechanism that is useful for one-shot applications. Each AST node includes a visit method that implements both a prefix and postfix walk. For instance, the visit method in if_statement is

    public override void visit(ASTVisitor prefix, ASTVisitor postfix) {
        prefix(this);
        expr.visit(prefix, postfix);
        thenpart.visit(prefix, postfix);
        if (elsepart != null)
            elsepart.visit(prefix, postfix);
        postfix(this);
    }

The arguments prefix and postfix are instances of ASTVisitor delegates, which are essentially type-safe function pointers. These delegates are particularly useful for tools that need to examine only parts of the AST or that look for specific patterns.
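The declaration of the ASTVisitor delegate is not given in the text; a plausible form, shown here as an assumption and building on the AST and if_statement classes above, is a delegate over the abstract AST base class, which is what lets ordinary static methods be passed to visit.

    // Assumed declaration; lcsc's actual delegate may differ in name or signature.
    public delegate void ASTVisitor(AST ast);

    // A prefix-only pass written against it: count the if-statements in a tree.
    public static class IfCounter {
        public static int Count;
        public static void CountIfs(AST ast) {
            if (ast is if_statement)
                Count++;
        }
        public static void Ignore(AST ast) { }
        // Usage: root.visit(new ASTVisitor(CountIfs), new ASTVisitor(Ignore));
    }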
For example, C# statements of the form if (…) throw … mimic assertions. The expression

    ast.visit(new ASTVisitor(doit), new ASTVisitor(donothing));

finds occurrences of this pattern in an AST rooted at ast, where doit and donothing are defined as follows.

    public static void doit(AST ast) {
        if (ast is if_statement
            && ((if_statement)ast).thenpart is throw_statement
            && ((if_statement)ast).elsepart == null)
            Console.WriteLine("{0}: Possible assertion", ast.begin);
    }
    public static void donothing(AST ast) { }

The begin and end fields in an AST give the beginning and ending locations in the source code spanned by the AST.

Applications

Finding code fragments that look like assertions typifies the use of lcsc to find patterns in C#. Elaborations of this usage are used for code-auditing tools, for example. Pattern matching on an AST instead of text makes it easy to consider context and to fine-tune matches to avoid voluminous output.

XML is an increasingly popular way to exchange data, and there are numerous tools available for editing and viewing XML. The XML visitor emits an AST as XML for consumption by XML-based tools or external compilation tools that accept XML. C# includes extensive support for reflection, which makes it possible to discover class information during execution. The 70-line XML visitor uses reflection to discover the details of the AST classes necessary to emit XML and is thus automatically upward compatible with future additions to the AST vocabulary.

The visitor architecture helps write metaprogramming tools, e.g., tools that write programs. The .NET platform includes a tree-based API for creating programs, typically those used in web services applications. This API defines a language-independent code document object model, also known as CodeDOM. It's possible, for example, to build a CodeDOM tree and pass it to C#, Visual Basic, or to any language that offers a 'code provider' interface. A common approach to writing CodeDOM applications is to write, say, C# source code and translate it by hand into the API calls that build the CodeDOM tree for that C# code. The lcsc codedom visitor automates this process: given a C# program P, it emits a C# program that, when executed, builds the CodeDOM tree for P.

The source visitor is similarly useful: it emits C# source code from an AST. When coupled with visitors that modify the AST, source provides a source-to-source transformation facility. As detailed in the next section, source is useful for C# language extensions that can be modeled in C# itself. It's also useful for writing code reorganization tools. A simple example is sortmembers, which alphabetizes the fields and methods in all of the classes in its input C# program. For instance, the command

    lcsc -visitor:sortmembers -visitor:source old.cs > new.cs

sorts the class members in old.cs and writes the C# output to new.cs. The sortmembers visitor simply rearranges the declaration subtrees in each class declaration and lets source emit the now-sorted results.

A limitation of the source visitor is that it cannot reproduce the C# input exactly, because the ASTs do not include comments and white space. So, source can't be used as a pretty printer, for example, or for applications that demand full comment fidelity, e.g., systems that generate code documentation from structured comments. Comments and white space could, in principle, be associated with AST nodes. Incidentally, source turns out to document the ASTs nicely, because, for each node type, it shows how to emit the corresponding C# source.
Writers of new visitors often start with a copy of source and edit it to suit their own purposes. At any point, the output shows exactly how much progress has been made: C# source appears for those node types whose visitor methods remain to be edited.

Visitors are also used for diagnostic purposes, e.g., to help debug other visitors. The display visitor renders its input AST as HTML and launches the web browser to display the result. Using reflection, display lists the fields in each class instance with hyperlinks to those fields that hold references to other classes. Figure 1 shows the class_declaration AST node for the following prototypical 'hello world' C# program.

    class Hello {
        public static void Main() {
            System.Console.WriteLine("Hello, world");
        }
    }

The hyperlinks, shown underlined, make it easy to traverse the data structure by clicking on the links. Display handles lists as well as AST types, and it omits empty lists, which occur frequently.

    class_declaration#22:
        attrs=attribute_sectionList#33 (empty)
        bases=typeList#35 (empty)
        body=declarationList#37
        id=InputElement{coord=hello.cs(3,7),str=Hello,tag=identifier}
        sym=ClassType#43
        mods=InputElementList#45 (empty)
        parent=compilation_unit#12

    Figure 1. Sample display output.

Because display uses reflection, it can display any class type; for example, the sym field in Figure 1 refers to a ClassType, which is a class type used to represent C# types. This capability helps debug visitors that annotate the AST. For instance, the command

    lcsc -visitor:bind -visitor:display -visitor:typecheck -visitor:display foo.cs

builds the AST for foo.cs, runs bind, displays the AST with bind's annotations (which include the sym field shown in Figure 1), runs typecheck, then displays the AST again with typecheck's additions.

For even medium-size C# programs, display generates a large amount of HTML. While navigating the AST is straightforward, it is often difficult to correlate a specific AST node with its associated source text. The browser visitor is a more ambitious variant of display that provides a more natural 'navigation' mechanism. Browser displays the C# source text in one window and AST nodes in another one. The AST nodes are rendered as done by display. Highlighting a fragment of source text causes the root of the smallest enclosing AST subtree to appear in the AST window. Just that subtree can be explored by clicking the field values. Another variant of browser provides a similar mechanism for exploring the generated MSIL code: highlighting a source code fragment displays the corresponding MSIL code in another window.
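The display visitor itself is not listed in the text; the reflection technique it relies on can be sketched roughly as follows. The output format, the object-numbering function idOf, and the decision to treat every non-string class-typed field as a link are illustrative assumptions.

    using System;
    using System.Reflection;
    using System.Text;

    public static class HtmlDumper {
        // Render one object's public instance fields as an HTML fragment,
        // linking fields that reference other objects; idOf supplies the
        // per-object numbers seen in Figure 1 (e.g., #22, #33).
        public static string Dump(object node, Func<object, int> idOf) {
            Type t = node.GetType();
            var sb = new StringBuilder();
            sb.Append($"<p><b>{t.Name}#{idOf(node)}</b>:<br/>");
            foreach (FieldInfo f in t.GetFields(BindingFlags.Public | BindingFlags.Instance)) {
                object value = f.GetValue(node);
                if (value == null)
                    continue;                                   // omit missing fields
                if (f.FieldType.IsClass && !(value is string))  // reference: emit a hyperlink
                    sb.Append($"{f.Name}=<a href=\"#{idOf(value)}\">{value.GetType().Name}#{idOf(value)}</a><br/>");
                else                                            // value or string: print it
                    sb.Append($"{f.Name}={value}<br/>");
            }
            return sb.Append("</p>").ToString();
        }
    }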
Adding Language Features

Implementing new language features has much in common with writing code analysis tools, and lcsc's visitor-based design facilitates such additions. For example, the source visitor often does half the work for new features that can be modeled in vanilla C#. More ambitious features can be implemented by building on the existing visitors, either by calling them explicitly or by subclassing them.

Adding a typeswitch statement provides an example of both approaches. The typeswitch statement is a case statement that branches on the type of an expression instead of on its value. For example,

    typeswitch (o) {
    case Int32 (x): Console.WriteLine(x); break;
    case Symbol (s): Symbols.Add(s); break;
    case Segment: popSegment(); break;
    default: throw new ArgumentException();
    }

switches on the type of o. The typeswitch cases can also introduce locals of the case label type, as illustrated by the Int32 and Symbol cases, which introduce locals x and s.

Figure 2 gives the grammar for the typeswitch statement, which is based on the Modula-3 TYPECASE statement [9].

    typeswitch-statement
        typeswitch ( expression ) typeswitch-block      new typeswitch_statement(a3,a5)
    typeswitch-block
        { typeswitch-sections opt }                     a2
    typeswitch-sections
        typeswitch-section                              typeswitch_sectionList.New(a1)
        typeswitch-sections typeswitch-section          List.Cons(a1,a2)
    typeswitch-section
        case type ( identifier ) : statement-list       new typeswitch_section(a2,a4,a7)
        typeswitch-labels statement-list                new typeswitch_section(a1,a2)
        default : statement-list                        new typeswitch_section(a3)
    typeswitch-labels
        typeswitch-label                                switch_labelList.New(a1)
        typeswitch-labels typeswitch-label              List.Cons(a1,a2)
    typeswitch-label
        case type :                                     new typeswitch_label(a2)

    Figure 2. Typeswitch syntax.

Typeswitch can be implemented in C# using if and goto statements, local variables, and typecasts to convert the typeswitch expression to the types indicated. For example, the typeswitch fragment above could be translated mechanically as follows.

    {
        object yy_1 = o;
        if (yy_1 is Int32) {
            Int32 x = (Int32)yy_1;
            Console.WriteLine(x);
            goto yy_1_end;
        }
        if (yy_1 is Symbol) {
            Symbol s = (Symbol)yy_1;
            Symbols.Add(s);
            goto yy_1_end;
        }
        if (yy_1 is Segment) {
            popSegment();
            goto yy_1_end;
        }
        throw new ArgumentException();
        yy_1_end: ;
    }

A source-to-source transformation is thus one way to implement typeswitch. There are several alternatives. One approach is to write a visitor that transforms the typeswitch subtrees into the corresponding block and if statement trees as suggested by the output above, and use the source visitor to emit the transformed AST. A simpler approach, however, is to extend the source visitor to accept typeswitch subtrees and emit the if statement implementation directly (a sketch of such an emitter appears below).

In either case, the first step is to define the typeswitch grammar, which appears in Figure 2. This grammar resides in its own Excel spreadsheet and is passed to the parser generator along with the original C# grammar. The second step is to define the AST types needed to represent typeswitch statements and to add the appropriate types and actions to the grammar spreadsheet. There are three new types, and the actions are shown to the right of the productions in Figure 2. The type column is omitted.

The final step is to add methods to the source visitor to traverse the three typeswitch AST types, emitting C# code as suggested above. It is not necessary to change the source visitor class itself; it can simply be extended by a derived class, typeswitch_source, that implements three new methods. It also overrides the methods in source for break statements and default labels, because these constructs can now appear in both switch and typeswitch statements. These steps take a total of only 153 lines:

    19 lines    typeswitch grammar
    63          typeswitch AST nodes
    71          extend source
    153         Total
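As a concrete illustration of the mechanical translation shown above, here is a self-contained sketch of the string-level emission a typeswitch_source-style method could perform. The Section type and the Emit entry point are simplified stand-ins invented for this sketch, not lcsc's actual typeswitch AST classes or source-visitor API.

    using System.Collections.Generic;
    using System.Text;

    // One "case T (x): body" arm; Local is null for arms with no local.
    public class Section {
        public string Type, Local, Body;
    }

    public static class TypeswitchEmitter {
        static int counter = 0;

        public static string Emit(string switchExpr, List<Section> sections, string defaultBody) {
            string tmp = "yy_" + (++counter);
            var sb = new StringBuilder();
            sb.AppendLine("{");
            sb.AppendLine($"    object {tmp} = {switchExpr};");
            foreach (Section s in sections) {
                sb.AppendLine($"    if ({tmp} is {s.Type}) {{");
                if (s.Local != null)                                  // case T (x): bind the local
                    sb.AppendLine($"        {s.Type} {s.Local} = ({s.Type}){tmp};");
                sb.AppendLine($"        {s.Body}");
                sb.AppendLine($"        goto {tmp}_end;");
                sb.AppendLine("    }");
            }
            sb.AppendLine($"    {defaultBody}");
            sb.AppendLine($"    {tmp}_end: ;");
            sb.AppendLine("}");
            return sb.ToString();
        }
    }

Calling Emit with "o" as the switch expression, the three arms of the example, and "throw new ArgumentException();" as the default body reproduces, up to whitespace, the translation listed above.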
More exotic language features and those that cannot be modeled in C# require more implementation effort. The effort can sometimes be reduced by transforming a part of the feature into existing C# AST nodes to make use of existing visitor code. But many additions require all of the typical compilation steps, including binding, typechecking, and code generation. Typeswitch again provides an example. The first two steps are the same as in the source-to-source approach: specify the grammar and define the typeswitch-specific AST node types. Each of the four traditional compilation visitors must be extended to traverse the three typeswitch nodes. Again, this extension can be done by subclassing, and defining new methods and overriding existing methods. The complete implementation takes 333 lines:

    19 lines    typeswitch grammar
    63          typeswitch AST nodes
    100         extend bind
    37          extend typecheck
    39          extend rewrite
    65          extend ilgen
    333         Total

Bind requires the most code because it makes three passes over the AST and defines a class for each pass. Code generation is performed by rewrite and ilgen, which could be combined into a single visitor.

Note that in both approaches the additional code required is approximately proportional to the 'size' of the typeswitch addition. For many additions, lcsc provides scalable extensibility [20]; that is, the addition requires effort proportional to its syntactic and semantic size relative to the base C# language.

Of course, lcsc's unconstrained approach does not guarantee scalable extensibility, so other language features could take much more effort. Substantial features typically add thousands of lines instead of hundreds. For example, an implementation of futures for C# took about 1000 lines. But even a complete visitor is much less effort than implementing a complete compiler or modifying a production compiler, and the visitor-based design enforces a modularity that helps avoid errors.

Lcsc's design is not a solution to the extension problem, viz., the code-modification conflicts that can arise when adding both language features and tools. As the typeswitch example shows, inheritance helps when an addition requires new AST classes, but more complicated additions may require more complex subclasses. Most lcsc visitors annotate the AST nodes with analysis values, e.g., types, so new visitors
IPC standards involved in each stage of a PCB, from design and fabrication through to the finished product
Printed Board Dimensions and Tolerances 印制板尺寸和公差
Design Guide for Protection of Printed Board Via Structures 线路板过孔结构设计指南
PCB acceptance
Fabrication
Sectional Design Standard for Organic Multichip Modules (MCM-L) and MCM-L Assemblies 有机多芯片模块(MCM-L)及其组装件设计分标准
Sectional Design Standard for High Density Interconnect (HDI) Printed Boards 高密度互连(HDI)印制板设计分标准
Design Guide for RF/Microwave Circuit Boards 射频/微波电路板设计指南
Doc Number: IPC-L-125, IPC-SM-840, IPC-1730, IPC-4101, IPC-9191
Doc Name: Specification for Plastic Substrates, Clad or Unclad, for High Speed/High Frequency Interconnections; Qualification and Performance Specification of Permanent Solder Mask; Laminator Qualification Profile (LQP)
Design
Assembly Interfaces
Software Engineering (Bilingual Course) Review Outline
Chapter 1 An Introduction to Software Engineering
* What is software? Computer programs and associated documentation and data. Two fundamental types of software product: generic products and customized products.
* What is software engineering? Software engineering is an engineering discipline which is concerned with all aspects of software production.
* What is the difference between software engineering and computer science? Computer science is concerned with theory and fundamentals; software engineering is concerned with the practicalities of developing and delivering useful software.
* What is a software process? A set of activities whose goal is the development or evolution of software. Generic activities in all software processes are: specification, development, validation, and evolution.
Chapter 4 Software Process
* Software process: software processes are the activities involved in producing and evolving a software system.
2013.8 FDA Guidance: ANDAs: Stability Testing of Drug Substances and Products: Questions and Answers (Chinese-English)
201308 FDA Guidance: ANDAs: Stability Testing of Drug Substances and Products: Questions and Answers (Chinese-English)
Guidance for Industry 行业指南
ANDAs: Stability Testing of Drug Substances and Products: Questions and Answers
ANDA:原料药和制剂稳定性试验问答
DRAFT GUIDANCE 指南草案
This guidance document is being distributed for comment purposes only. 本指南文件发布仅供讨论。
Comments and suggestions regarding this draft document should be submitted within 60 days of publication in the Federal Register of the notice announcing the availability of the draft guidance. Submit electronic commentsto . Submit written comments to the Division of Dockets Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. All comments should be identified with the docket number listed in the notice of availability that publishes in the Federal Register. For questions regarding this draft document contact (CDER) Radhika Rajagopalan 240-276-8546.U.S. Department of Health and Human ServicesFood and Drug AdministrationCenter for Drug Evaluation and Research (CDER)August 2013GenericsGuidance for IndustryANDAs: Stability Testing of Drug Substances and ProductsQuestions and AnswersANDA:原料药和制剂稳定性试验问答Additional copies are available from:Office of CommunicationsDivision of Drug Information, WO51, Room 2201Center for Drug Evaluation and ResearchFood and Drug Administration10903 New Hampshire Ave., Silver Spring, MD 20993Phone: 301-796-3400; Fax: 301-847-8714druginfo@/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/default.htm U.S. Department of Health and Human ServicesFood and Drug AdministrationCenter for Drug Evaluation and Research (CDER)August 2013Generics Contains Nonbinding Recommendations Draft — Not for ImplementationTABLE OF CONTENTSI. INTRODUCTION 介绍II. QUESTIONS AND ANSWERS 问与答A. General 一般问题B. Drug Master File 药物主文件.C. Drug Product Manufacturing and Packaging 药品生产和包装D. Amendments to Pending ANDA Application 未批准ANDA申请的增补E. Stability Studies 稳定性试验.Guidance for Industry[1]ANDAs: Stability Testing of Drug Substances andProductsQuestions and AnswersANDA:原料药和制剂稳定性试验问答This draft guidance, when finalized, will represent the Food and Drug Administration’s (FDA’s) current thinking on this topic. It does not create or confer any right s for or on any person and does not operate to bind FDA or the public. You can use an alternative approach if the approach satisfies the requirements of the applicable statutes and regulations. If you want to discuss an alternative approach, contact the FDA staff responsible for implementing this guidance. If you cannot identify the appropriate FDA staff, call the appropriate number listed on the title page of this guidance.本指南草案,如果最终定稿,代表的是FDA目前对这一专题的态度。
logiCLK Programmable Clock Generator Data Sheet
October 17th, 2018 Data Sheet Version: v1.5Fallerovo setaliste 2210000 Zagreb, CroatiaPhone: +385 1 368 00 26Fax: +385 1 365 51 67E-mail: ***********************URL: Features∙Supports Xilinx® 7 Series, UltraScale TM andUltraScale+TM SoCs and FPGAs∙Provides twelve independent clock outputs thatcan be configured by generic parameters:- Six outputs can be dynamically configuredthrough register interface during operation- Six outputs can be configured by genericparameters only∙Supports phase-locked loop (PLL) and mixed-mode clock manager (MMCM). The user canselect clock primitive through IP configurationGUI∙Selectable output buffer type on clock outputports: BUFG, BUFH and no buffer∙Configurable through AXI4-Lite interface∙Software support for Linux and Microsoft® Windows® Embedded Compact operating systems∙Available for Xilinx Vivado® IP IntegratorTable 1: Example Implementation Statistics for Xilinx® FPGAsthree clock outputs from logiCLK are used.2) Assuming register interface, as well as status signals and generated clock outputs are connected internally.3) The same implementation statistics apply to the Xilinx 7 series FPGAs.4) The same implementation statistics apply to the Xilinx UltraScale and UltraScale+ FPGAs.RST_OUTLOCKEDCLK0CLK1CLK2CLK3CLK4CLK5Figure 1: logiCLK ArchitectureFeatures (cont.)∙Input clock frequency range*:- 7 series PLLs: 19 – 1066 MHz- 7 series MMCMs: 10 – 1066 MHz- UltraScale and UltraScale+ PLLs: 70 – 1066 MHz- UltraScale and UltraScale+ MMCMs: 10 – 1066 MHz∙Output clocks frequency range*:- 7 series PLLs: 6.25 – 741 MHz- 7 series MMCMs: 4.69 – 1066 MHz- UltraScale PLLs: 4.69 – 850 MHz- UltraScale MMCMs: 4.69 – 850 MHz- UltraScale+ PLLs: 5.86 – 891 MHz- UltraScale+ MMCMs: 6.25 – 891 MHz* Depending on the sub-family and device’s speed grade. Please consult the corresponding family data sheet. General DescriptionThe logiCLK Programmable Clock Generator IP core from the Xylon logicBRICKS IP core library is optimized for Xilinx 7 Series, UltraScale and UltraScale+ SoC and FPGA devices, and designed to provide frequency synthesis, clock network de-skew and jitter reduction. Input and output frequency ranges are restricted by PLL, MMCM and Clock Buffer switching characteristics of the specific Xilinx All Programmable device.The logiCLK clock generator IP core has twelve independent and fully configurable clock outputs. While six clock outputs can be fixed by generic parameters prior to the implementation, the other six clock outputs can be either fixed by generics or dynamically reconfigured in a working device. The Dynamic Reconfiguration Port(DRP) interface gives system designers the ability to change the clock frequency and other clock parameters while the design is running by mean of a set of memory-mapped PLL/MMCM configuration and status registers (Figure 1).The ability to dynamically change the clock signals during the operation is an important feature for some SoC applications. For example, the logiCLK IP core enables precise clock adjustments necessary for driving display output with different resolutions, which would be otherwise impossible without an external programmable Phase-Locked Loop (PLL)/Mixed-Mode Clock Manager (MMCM) device.Xylon uses the logiCLK IP core in free and pre-verified Graphics Processing Unit (GPU) reference designs prepared for popular Xilinx Zynq-7000 AP SoC based development kits. 
Please visit Xylon web site to see the full list of logicBRICKS reference designs:/logicBRICKS/Reference-logicBRICKS-Design.aspxXylon’s software drivers for graphics logicBRICKS IP cores, such as the logiCVC-ML display controller IP core, include support for the logiCLK Programmable Clock Generator IP core and enable its easy use with the Linux and Microsoft Windows Embedded Compact operating systems.Functional DescriptionThe Figure 1 represents internal logiCLK architecture. The logiCLK functional blocks are Dynamic Reconfiguration Parameters module and Registers.Dynamic Reconfiguration Parameters moduleDynamic Reconfiguration Parameters module is an optional IP core’s module that provides six output clocks (CLK_DRP) defined by a set of re-configurable parameters. It is designed in accordance to the Xilinx Application Note XAPP888: “MMCM and PLL Dynamic Reconfiguration” for 7 Series, UltraScale and UltraScale+ FPGAs. RegistersThe CPU has access to logiCLK’s registers through AXI4-Lite bus interface.Core ModificationsThe core is supplied in encrypted VHDL format compatible with Xilinx Vivado IP Integrator. Many logiCLK configuration parameters are selectable prior to VHDL synthesis, and the following table presents a selection from a list of the available parameters:Table 2: logiCLK VHDL configuration parametersNotes:1. Refer to Xilinx Clocking Resources User guides depending on the used device’s family.2. These parameters are used to configure CLK_DRP outputs if dynamic reconfiguration enable bit in control register is cleared, or ifdynamic reconfiguration enable bit in control register set and data address is “0”.Core I/O SignalsThe core signal I/Os have not been fixed to specific device pins to provide flexibility for interfacing with user logic. Descriptions of all signal I/Os are provided in Table 3.Table 3: Core I/O SignalsVerification MethodsThe logiCLK is fully supported by the Xilinx Vivado Design Suite. This tight integration tremendously shortens IP integration and verification. A full logiCLK implementation does not require any particular skills beyond general Xilinx tools knowledge. This IP core has been successfully validated in different designs.Software driversXylon Linux Framebuffer driver includes the software support for the logiCLK IP core. For more information, please get the Linux Framebuffer User’s Manual:URL: /Documentation/Datasheets/SW/Xylon-Linux-FrameBuffer.pdfXylon logiDISP driver for Microsoft Windows Embedded Compact includes the software support for the logiCLK IP core. For more information, please visit:URL: /Products/Xylon-Windows-Embedded-Display.aspxRecommended Design ExperienceThe user should have experience in the following areas:-Xilinx design tools-ModelSimAvailable Support ProductslogiREF-ZGPU-ZED Reference Design –evaluate 2D and 3D logicBRICKS graphics on the ZedBoard from Avnet Electronics Marketing with connected PC monitor. Deliverables include complete software support for Linux OS, from the basic Framebuffer up to the full 3D graphics. Configurable IP cores enable customization of the evaluation hardware, which can also be used with other popular operating systems. 
This design uses the logiCLK Programmable Clock Generator IP core to support different display resolutions.Email: ***********************URL: /logicBRICKS/Reference-logicBRICKS-Design/Graphics-for-Zynq-AP-SoC-ZedBoard.aspxTo check a full list of Xylon reference designs please visit the web:URL: /logicBRICKS/Reference-logicBRICKS-Design.aspxOrdering InformationThis product is available directly from Xylon under the terms of the Xylon’s IP License. Please visit our web shop or contact Xylon for pricing and additional information:Email: *********************URL: This publication has been carefully checked for accuracy. However, Xylon does not assume any responsibility for the contents or use of any product described herein. Xylon reserves the right to make any changes to product without further notice. Our customers should ensure that they take appropriate action so that their use of our products does not infringe upon any patents. Xylon products are not intended for use in the life support applications. Use of the Xylon products in such appliances is prohibited without written Xylon approval.Related InformationXilinx Programmable LogicFor information on Xilinx programmable logic or development system software, contact your local Xilinx sales office, or:Xilinx, Inc.2100 Logic DriveSan Jose, CA 95124Phone: +1 408-559-7778Fax: +1 408-559-7114URL: Revision History。
Design and Implementation of Generics for the .NET Common Language Runtime
Andrew Kennedy    Don Syme
Microsoft Research, Cambridge, U.K.

Abstract

The Common Language Runtime provides a shared type system, intermediate language and dynamic execution environment for the implementation and inter-operation of multiple source languages. In this paper we extend it with direct support for parametric polymorphism (also known as generics), describing the design through examples written in an extended version of the C# programming language, and explaining aspects of implementation by reference to a prototype extension to the runtime.

Our design is very expressive, supporting parameterized types, polymorphic static, instance and virtual methods, "F-bounded" type parameters, instantiation at pointer and value types, polymorphic recursion, and exact run-time types. The implementation takes advantage of the dynamic nature of the runtime, performing just-in-time type specialization, representation-based code sharing and novel techniques for efficient creation and use of run-time types. Early performance results are encouraging and suggest that programmers will not need to pay an overhead for using generics, achieving performance almost matching hand-specialized code.

1 Introduction

Parametric polymorphism is a well-established programming language feature whose advantages over dynamic approaches to generic programming are well-understood: safety (more bugs caught at compile time), expressivity (more invariants expressed in type signatures), clarity (fewer explicit conversions between data types), and efficiency (no need for run-time type checks).

Recently there has been a shift away from the traditional compile, link and run model of programming towards a more dynamic approach in which the division between compile-time and run-time becomes blurred. The two most significant examples of this trend are the Java Virtual Machine [11] and, more recently, the Common Language Runtime (CLR for short) introduced by Microsoft in its .NET initiative [1].

The CLR has the ambitious aim of providing a common type system and intermediate language for executing programs written in a variety of languages, and for facilitating inter-operability between those languages. It relieves compiler writers of the burden of dealing with low-level machine-specific details, and relieves programmers of the burden of describing the data marshalling (typi-
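The paper's running example (Figure 1, not reproduced in this extract) contrasts an object-based stack, which must box primitive element types, with a parameterized stack. A minimal sketch of that contrast in Generic C#, written here from the surrounding description rather than copied from the paper, looks like this:

    // Object-based version: the element type is object, so ints are boxed on
    // Push and callers must cast (and unbox) on Pop.  Growth logic is elided.
    class ObjectStack {
        private object[] items = new object[8];
        private int top = 0;
        public void Push(object x) { items[top++] = x; }
        public object Pop() { return items[--top]; }
    }

    // Parameterized version: Stack<int> stores ints directly, with no boxing
    // and no casts at the call site.
    class Stack<T> {
        private T[] items = new T[8];
        private int top = 0;
        public void Push(T x) { items[top++] = x; }
        public T Pop() { return items[--top]; }
    }

    class Demo {
        static void Main() {
            Stack<int> s = new Stack<int>();
            s.Push(17);
            int i = s.Pop();     // no cast, no unbox in the source
        }
    }

The design summarized in the following sections is what makes instantiations such as Stack<int> both legal and efficient in the extended runtime.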
follows:1.Polymorphic declarations.Classes,interfaces,structs,andmethods can each be parameterized on types.2.Runtime types.All objects carry“exact”runtime typeinformation,so one can,for example,distinguish afrom a at runtime,by look-ing at the runtime type associated with an object.3.Unrestricted instantiations.Parameterized types and poly-morphic methods may be instantiated at types which have non-uniform representations,e.g.,and.Moreover,our implementation does not introduce expensive box and unbox coercions.4.Bounded polymorphism.Type parameters may be boundedby a class or interface with possible recursive reference to type parameters(“F-bounded”polymorphism[5]).5.Polymorphic inheritance.The superclass and implementedinterfaces of a class or interface can all be instantiated types.6.Polymorphic recursion.Instance methods on parameterizedtypes can be invoked recursively on instantiations different to that of the receiver;likewise,polymorphic methods can be invoked recursively at new instantiations.7.Polymorphic virtual methods.We allow polymorphic meth-ods to be overridden in subclasses and specified in interfaces and abstract classes.The implementation of polymorphic vir-tual methods is not covered here and will be described in de-tail in a later paper.What are the ramifications of our design choices?Certainly, given these extensions to the CLR,and assuming an existing CLR compiler,it is a relatively simple matter to extend a“regular”class-based language such as C#,Oberon,Java or with the ability to define polymorphic code.Given the complexity of compiling polymorphism efficiently,this is already a great win.We wanted our design to support the polymorphic constructs of as wide a variety of source languages as possible.Of course, attempting to support the diverse mechanisms of the ML family, Haskell,Ada,Modula-3,C++,and Eiffel leads to(a)a lot of fea-tures,and(b)tensions between those features.In the end,it was necessary to make certain compromises.We do not currently sup-port the higher-order types and kinds that feature in Haskell andin encodings of the SML and Caml module systems,nor the typeclass mechanism found in Haskell and Mercury.Neither do wesupport Eiffel’s(type-unsafe)covariant subtyping on type construc-tors,though we are considering a type-safe variance design for thefuture.Finally,we do not attempt to support C++templates in full.Despite these limitations,the mechanisms provided are suf-ficient for many languages.The role of the type system in theCLR is not just to provide runtime support–it is also to facili-tate and encourage language integration,i.e.the treatment of cer-tain constructs in compatible ways by different programming lan-guages.Interoperability gives a strong motivation for implement-ing objects,classes,interfaces and calling conventions in compati-ble ways.The same argument applies to polymorphism and otherlanguage features:by sharing implementation infrastructure,un-necessary incompatibilities between languages and compilers canbe eliminated,and future languages are encouraged to adopt de-signs that are compatible with others,at least to a certain degree.We have chosen a design that removes many of the restrictions pre-viously associated with polymorphism,in particular with respectto how various language features interact.For example,allowingarbitrary type instantiations removes a restriction found in manylanguages.Finally,one by-product of adding parameterized types to theCLR is that many language features not currently supported asprimitives become 
easier to encode.For example,-ary producttypes can be supported simply by defining a series of parameter-ized types,,etc.1.3Summary of the ImplementationAlmost all previous implementation techniques for parametricpolymorphism have assumed the traditional compile,link and runmodel of programming.Our implementation,on the other hand,takes advantage of the dynamic loading and code generation capa-bilities of the CLR.Its main features are as follows:1.“Just-in-time”type specialization.Instantiations of parame-terized classes are loaded dynamically and the code for theirmethods is generated on demand.2.Code and representation sharing.Where possible,compiledcode and data representations are shared between different in-stantiations.3.No boxing.Due to type specialization the implementationnever needs to box values of primitive type.4.Efficient support of run-time types.The implementationmakes use of a number of novel techniques to provide op-erations on run-time types that are efficient in the presenceof code sharing and with minimal overhead for programs thatmake no use of them.2Polymorphism in Generic C#In this section we show how our support for parametric poly-morphism in the CLR allows a generics mechanism to be added tothe language C#with relative ease.C#will be new to most readers,but as it is by design a deriva-tive of C++it should be straightforward to grasp.The left side ofFigure1presents an example C#program that is a typical use of thegeneric“design pattern”that programmers employ to code aroundthe absence of parametric polymorphism in the language.The type is the top of the class hierarchy and hence serves as a poly-morphic representation.In order to use such a class with primitiveelement types such as integer,however,it is necessary to box the 2Object-based stackFigure2:A parameterized interface and implementation parameterized class or built-in type constructor such as arrays,as in the following example:Here and are polymorphic static methods defined within the class,which in the CLR is a super-type of all built-in array types and is thus a convenient place to locate opera-tions common to all arrays.The methods can be called as follows: The type parameters(in this case,)can be inferred in most cases arising in practice[4],allowing us here to write the more concise.2.4Defining parameterized typesThe previous examples have shown the use of parameterized types and methods,though not their declaration.We have presented this first because in a multi-language framework not all languages need support polymorphic declarations–for example,Scheme or Visual Basic might simply allow the use of parameterized types and poly-morphic methods defined in other languages.However,Generic C# does allow their definition,as we now illustrate.We begin with parameterized classes.We can now complete a Generic C#definition of,shown on the right of Figure1for easy comparison.The type parameters of a parameterized class can appear in any instance declaration:here,in the type of the private field and in the type signatures and bodies of the instance methods and and constructor.C#supports the notion of an interface which gives a name to a set of methods that can be implemented by many different classes. 
Figure2presents two examples of parameterized interfaces and a parameterized class that implements one of them.A class canFigure3:Polymorphic methodsalso implement an interface at a single instantiation:for example,might use a specialized bit-vector repre-sentation for sets of characters.Also supported are user-defined“struct”types,i.e.values repre-sented as inline sequences of bits rather than allocated as objects on the heap.A parameterized struct simply defines a family of struct types:Finally,C#supports a notion offirst-class methods,called dele-gates,and these too can be parameterized on types in our extension.They introduce no new challenges to the underlying CLR execution mechanisms and will not be discussed further here.2.5Defining polymorphic methodsA polymorphic method declaration defines a method that takes typeparameters in addition to its normal value parameters.Figure3 gives the definition of the method used earlier.It is also possible to define polymorphic instance methods that make use of type parameters from the class as well as from the method,as with the cartesian product method shown here.3Support for Polymorphism in ILThe intermediate language of the CLR,called IL for short,is best introduced through an example.The left of Figure4shows the IL for the non-parametric Stack implementation of Figure1.It should be apparent that there is a direct correspondence between C#and IL:the code has been linearized,with the stack used to pass ar-guments to methods and for expression evaluation.Argument0is reserved for the object,with the remainder numbered from1 upwards.Field and method access instructions are annotated with explicit,fully qualifiedfield references and method references.A call to the constructor for the class has been inserted at the start of the constructor for the class,and types are de-scribed slightly more explicitly,e.g.in-stead of the C#.Finally,and instructions have been inserted to convert back and forth between the primitivetype and.4Object-based stackThe right of Figure4shows the IL for the parametric Stack im-plementation on the right of Figure1.For comparison with the non-generic IL the differences are underlined.In brief,our changes to IL involved(a)adding some new types to the IL type system,(b)in-troducing polymorphic forms of the IL declarations for classes,in-terfaces,structs and methods,along with ways of referencing them, and(c)specifying some new instructions and generalizations of ex-isting instructions.We begin with the instruction set.3.1Polymorphism in instructionsObserve from the left side of Figure4that:Some IL instructions are implicitly generic in the sense that they work over many types.For example,(in) loads thefirst argument to a method onto the stack.The JIT compiler determines types automatically and generates code appropriate to the type.Contrast this with the JVM,which has instruction variants for different types(e.g.for32-bit integers and for pointers).Other IL instructions are generic(there’s only one variant)but are followed by further information.This is required by the verifier,for overloading resolution,and sometimes for code generation.Examples include forfield access,and for array creation.A small number of IL instructions do come in different vari-ants for different types.Here we see the use ofand for assignment to arrays of object types.Separate instructions must be used for primitive types,for ex-ample,and for32-bit signed integer arrays.Now compare the polymorphic IL on the right of Figure4.The generic,type-less 
instructions remain the same.The annotated generic instructions have types that involveand instead of and.No-tice how type parameters are referenced by number.Two new generic instructions have been used for array access and update:and.Two instructions deserve special attention:and a new instruc-tion.The instruction is followed by a value type and,given a value of this type on the stack,boxes it to produce a heap-allocated value of type.We generalize this instruc-tion to accept reference types in which case the instruction acts as a no-op.We introduce a new instruction which performs the converse operation including a runtime type-check.These re-finements to boxing are particularly useful when interfacing to code that uses the idiom for generic programming,as a value of type can safely be converted to and from.Finally,we also generalize some existing instructions that are currently limited to work over only non-reference types.For ex-ample,the instructions that manipulate pointers to values(, ,and)are generalized to accept pointers to references and pointers to values of variable type.3.2Polymorphic forms of declarationsWe extend IL class declarations to include named formal type pa-rameters.The names are optional and are only for use by compilers and other tools.The and clauses of class definitions are extended so that they can specify instantiated types.Interface,structure and method definitions are extended in a similar way.At the level of IL,the signature of a polymorphic method declaration looks much the same as in Generic C#.Here isa simple example:We distinguish between class and method type variables,the latter being written in IL assembly language as.3.3New typesWe add three new ways of forming types to those supported by the CLR:1.Instantiated types,formed by specifying a parameterized typename(class,interface or struct)and a sequence of type spec-ifications for the type parameters.2.Class type variables,numbered from left-to-right in the rele-vant parameterized class declaration.3.Method type variables,numbered from left-to-right in the rel-evant polymorphic method declaration.Class type variables can be used as types within any instance decla-ration of a class.This includes the type signatures of instancefields, and in the argument types,local variable types and instructions of instance methods within the parameterized class.They may also be used in the specification of the superclass and implemented inter-faces of the parameterized class.Method type parameters can ap-pear anywhere in the signature and body of a polymorphic method.3.4Field and method referencesMany IL instructions must refer to classes,interfaces,structs,fields and methods.When instructions such as and re-fer tofields and methods in parameterized classes,we insist that the type instantiation of the class is specified.The signature(field type or argument and result types for methods)must be exactly that of the definition and hence include formal type parameters.The actual types can then be obtained by substituting through with the instantiation.This use of formal signatures may appear surprising, but it allows the execution engine to resolvefield and method refer-ences more quickly and to discriminate between certain signatures that would become equal after instantiation.References to polymorphic methods follow a similar pattern.An invocation of a polymorphic method is shown below:Again,the full type instantiation is given,this time after the name of the method,so both a class and method type instantiation can be 
specified.The types of the arguments and result again must match the definition and typically contain formal method type parameters.The actual types can then be obtained by substituting through by the method and class type instantiations.3.5RestrictionsThere are some restrictions:is not allowed,i.e.naked type variables may not be used to specify the superclass or imple-mented interfaces of a class.It is not possible to determinethe methods of such a class at the point of definition of such 6a class,a property that is both undesirable for programming(whether a method was overridden or inherited could depend on the instantiation of the class)and difficult to implement(a conventional vtable cannot be created when the class isloaded).Constraints on type parameters(“where clauses”) could provide a more principled solution and this is under consideration for a future extension.An instruction such as is out-lawed,as is.Again,in the absence of any other information about the type parameter,it is not possible to check at the point of definition of the enclos-ing class that the class represented by has the appropriate constructor or static method.Class type parameters may not be used in static declara-tions.For static methods,there is a workaround:simply re-parameterize the method on all the class type parameters.For fields,we are considering“per-instantiation”staticfields as a future extension.A class is not permitted to implement a parameterized inter-face at more than one instantiation.Aside from some tricky design choices over resolving ambiguity,currently it is diffi-cult to implement this feature without impacting the perfor-mance of all invocations of interface methods.Again,this feature is under consideration as a possible extension.4ImplementationThe implementation of parametric polymorphism in programming languages has traditionally followed one of two routes:Representation and code specialization.Each distinct instan-tiation of a polymorphic declaration gives rise to data repre-sentation and code specific to that instantiation.For example, C++templates are typically specialized at link-time.Alterna-tively,polymorphic declarations can be specialized with re-spect to representation rather than source language type[3].The advantage of specialization is performance,and the rel-ative ease of implementing of a richer feature set;the draw-backs are code explosion,lack of true separate compilation and the lack of dynamic linking.Representation and code sharing.A single representation is used for all instantiations of a parameterized type,and polymorphic code is compiled just once.Typically it is a pointer that is the single representation.This is achieved ei-ther by restricting instantiations to the pointer types of the source language(GJ,NextGen,Eiffel,Modula-3),by box-ing all non-pointer values regardless of whether they are used polymorphically or not(Haskell)or by using a tagged rep-resentation scheme that allows some unboxed values to be manipulated polymorphically(most implementations of ML).Clearly there are benefits in code size(although extra box and unbox operations are required)but performance suffers. 
Recent research has attempted to reduce the cost of using uniform representations through more sophisticated boxing strategies[10] and run-time analysis of types[9].In our CLR implementation,we have the great advantage over conventional native-code compilers that loading and compilation is performed on demand.This means we can choose to mix-and-match specialization and sharing.In fact,we could throw in a bit of boxing too(to share more code)but have so far chosen not to do this on grounds of simplicity and performance.4.1Specialization and sharingOur scheme runs roughly as follows:When the runtime requires a particular instantiation of a pa-rameterized class,the loader checks to see if the instantia-tion is compatible with any that it has seen before;if not,then afield layout is determined and new vtable is created,tobe shared between all compatible instantiations.The itemsin this vtable are entry stubs for the methods of the class.When these stubs are later invoked,they will generate(“just-in-time”)code to be shared for all compatible instantiations.When compiling the invocation of a(non-virtual)polymor-phic method at a particular instantiation,wefirst check to seeif we have compiled such a call before for some compatibleinstantiation;if not,then an entry stub is generated,whichwill in turn generate code to be shared for all compatible in-stantiations.Two instantiations are compatible if for any parameterized class its compilation at these instantiations gives rise to identical code and other execution structures(e.g.field layout and GC tables),apart from the dictionaries described below in Section4.4.In particu-lar,all reference types are compatible with each other,because the loader and JIT compiler make no distinction for the purposes of field layout or code generation.On the implementation for the In-tel x86,at least,primitive types are mutually incompatible,even if they have the same size(floats and ints have different parameter passing conventions).That leaves user-defined struct types,which are compatible if their layout is the same with respect to garbage collection i.e.they share the same pattern of traced pointers.This dynamic approach to specialization has advantages overa static approach:some polymorphism simply cannot be special-ized statically(polymorphic recursion,first-class polymorphism), and lazy specialization avoids wasting time and space in generating specialized code that never gets executed.However,not seeing the whole program has one drawback:we do not know ahead of time the full set of instantiations of a polymorphic definition.It turns out that if we know that the code for a particular instantiation will not be shared with any other instantiation then we can sometimes generate slightly better code(see 4.4).At present,we use a global scheme,generating unshared code for primitive instantiations and possibly-shared code for the rest.The greatest challenge has been to support exact run-time types and at the same time share representations and code as much as possible.There’s a fundamental conflict between these features: on the one hand,sharing appears to imply no distinction between instantiations but on the other hand run-time types require it.4.2Object representationObjects in the CLR’s garbage-collected heap are represented by a vtable pointer followed by the object’s contents(e.g.fields or array elements).The vtable’s main role is virtual method dispatch:it contains a code pointer for each method that is defined or inherited by the object’s class.But for simple 
class types,at least,where there is a one-to-one correspondence between vtables and classes, it can also be used to represent the object’s type.When the vtable is used in this way we call it the type’s type handle.In an implementation of polymorphism based on full special-ization,the notion of exact run-time type comes for free as dif-ferent instantiations of the same parameterized type have different vtables.But now suppose that code is shared between different in-stantiations such as and.The vta-bles for the two instantiations will be identical,so we need some7........................Figure 5:Alternative implementations of run-time types for objectsof typeand way of representing the instantiation at run-time.There are a num-ber of possible techniques:1.Store the instantiation in the object itself.Either(a)inline;or(b)via an indirection to a hash-consed instantiation.2.Replace the vtable pointer by a pointer to a combined vtable-and-instantiation structure.Either(a)share the original vtable via an indirection;or (b)duplicate it per instantiation.Figure 5visualizes the alternatives,and the space and time impli-cations of each design choice are presented in Figure 6.The times presented are the number of indirections from an object required to access the vtable and instantiation.In our implementation we chose technique 2(b)becausePolymorphic objects are often small (think of list cells)and even one extra word per object is too great a cost.We expect the number of instantiations to be small compared to the number of objects,so duplicating the vtable should not be a significant hit.TechniqueTime (indirections)per-objectper-inst1(a)101(b)112(a)222(b)12Figure 6:Time/space trade-offs for run-time types (=number of type parameters and =size of vtable)Figure 7:Run-time types exampleRun-time types are typically not accessed very frequently,sopaying the cost of an indirection or two is not a problem.Virtual method calls are extremely frequent in object-oriented code and in typical translations of higher-order code and we don’t want to pay the cost of an extra indirection each time.Also,technique 2(a)is fundamentally incompatible with the current virtual method dispatch mechanism.It should be emphasised that other implementations might makeanother choice –there’s nothing in our design that forces it.4.3Accessing class type parameters at run-timeWe now consider how runtime type information is accessed within polymorphic code in the circumstances when IL instructions de-mand it.For example,this occurs at any instruction that involves constructing an object with a type that includes a type pa-rameter ,or a instruction that attempts to check if anobject has typefor an in-scope type parameter .Con-sider themethod from the stack class in Figure 1.It makes use of its type parameter in an operation (array creation)that must construct an exact run-time type.In a fully-specialized implemen-tation,the type would have been instantiated at JIT-compile time,but when code is shared the instantiation for is not known until run-time.One natural solution is to pass type instantiations at run-time to all methods within polymorphic code.But apart from the possible costs of passing extra parameters,there is a problem:subclassing allows type parameters to be “revealed”when virtual methods are called.Consider the class hierarchy shown in Figure 7.The call-ing conventions for and must agree,as an object that is statically known to be of type might dynamically be some instan-tiation of .But now observe that inside methodwe 
have access to an instance of class–namely –which has exact type infor-mation available at runtime and hence the precise instantiation of .Class type parameters can then always be accessed via the pointer for the current method.Method might be invoked on an object whose type is a sub-class of an instantiation of –such as which is a sub-class of –in which it is inherited.Therefore8。
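To make the preceding discussion concrete, the following Generic C# sketch (written from the description above, not taken from the paper) shows the kinds of operations that force the exact instantiation to be available at run time even when code is shared across compatible instantiations:

    using System;

    class Stack<T> {
        private T[] items;
        public Stack(int capacity) {
            // new T[] must create an int[] for Stack<int> and a string[] for
            // Stack<string>, so shared code needs access to T's runtime type.
            items = new T[capacity];
        }
        public Type ElementType() {
            return typeof(T);          // reports the exact instantiation
        }
    }

    class Demo {
        static void Main() {
            Console.WriteLine(new Stack<int>(4).ElementType());     // System.Int32
            Console.WriteLine(new Stack<string>(4).ElementType());  // System.String
            // Exact runtime types also distinguish the instantiated classes themselves:
            Console.WriteLine(new Stack<int>(4).GetType()
                              == new Stack<string>(4).GetType());   // False
        }
    }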