

A Full-blind Sub-Nyquist Sampling Method for Wideband Spectrum Sensing


Journal of Electronics & Information Technology, Vol. 34, No. 2, Feb. 2012

A Full-blind Sub-Nyquist Sampling Method for Wideband Spectrum Sensing

Gai Jian-xin, Fu Ping, Qiao Jia-qing, Meng Sheng-wei
(Department of Automatic Test and Control, Harbin Institute of Technology, Harbin 150080, China)
(The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentations of Heilongjiang Province, Harbin University of Science and Technology, Harbin 150080, China)
(Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China)

Abstract: Sub-Nyquist sampling is an effective approach to relieving the high sampling-rate pressure of wideband spectrum sensing. Existing sub-Nyquist sampling methods require an excessively large measurement matrix and an exact sparsity level in the recovery phase. To address these problems, a method that applies the Modulated Wideband Converter (MWC), which uses a small measurement matrix, to wideband spectrum sensing is proposed. On the basis of a redefined spectrum-sparse signal model, an improved sufficient condition for spectrum-blind recovery is presented, which removes the dependence of MWC construction on the maximum band width. In the recovery phase, the Sparsity Adaptive Matching Pursuit (SAMP) algorithm is introduced to solve the Multiple Measurement Vector (MMV) problem. As a result, a fully blind low-rate sampling method requiring neither the maximum band width nor the exact number of bands is obtained. Experimental results verify the effectiveness of the proposed method.

Keywords: Wideband spectrum sensing; Sub-Nyquist sampling; Multiple Measurement Vectors (MMV); Sparsity Adaptive Matching Pursuit (SAMP)

CLC number: TN911.72; Document code: A; Article ID: 1009-5896(2012)02-0361-07; DOI: 10.3724/SP.J.1146.2011.00314

1. Introduction

Cognitive radio autonomously discovers "spectrum holes" by sensing the surrounding spectral environment and exploits them effectively, showing great promise for easing the spectrum scarcity and low spectrum utilization of wireless communication.
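The MMV recovery step summarized in the abstract can be illustrated with a small sketch. The paper introduces SAMP, which adapts the sparsity level during recovery; the simplified variant below is simultaneous OMP, which assumes the row-sparsity k is known, but it shows the same joint-support idea behind MMV recovery. All names and dimensions here are illustrative, not taken from the paper:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover a row-sparse X with Y ~ A @ X.

    A : (m, n) measurement matrix; Y : (m, L) measurement vectors.
    Greedily picks the column of A most correlated with the residual
    aggregated across all L measurement vectors (the MMV joint support).
    """
    m, n = A.shape
    support = []
    R = Y.copy()
    for _ in range(k):
        # aggregate correlation of each column with all residual columns
        scores = np.linalg.norm(A.T @ R, axis=1)
        for j in support:          # never re-pick a selected atom
            scores[j] = -np.inf
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        # joint least-squares fit on the current support
        Xs, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ Xs
    X = np.zeros((n, Y.shape[1]))
    X[support] = Xs
    return X, sorted(support)

# small demo: 3 active rows out of 64, 8 measurement vectors
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
X_true = np.zeros((64, 8))
X_true[[5, 17, 33]] = rng.standard_normal((3, 8))
X_hat, supp = somp(A, A @ X_true, k=3)
print(supp)
```

In the full-blind setting of the paper, SAMP removes the need to pass k explicitly by growing the candidate support stage by stage until the residual stops shrinking.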


QbR Frequently Asked Questions

Disclaimer: These are general answers and may not be applicable to every product. Each ANDA is reviewed individually. This document represents the Office of Generic Drugs' (OGD's) current thinking on these topics.

Format and Submission

How should QbR ANDAs be submitted?
OGD's QbR was designed with the expectation that ANDA applications would be organized according to the Common Technical Document (CTD) format, a submission format adopted by multiple regulatory bodies including FDA. Generic firms are strongly recommended to submit their ANDAs in the CTD format (either eCTD or paper) to facilitate implementation of the QbR. The ANDA Checklist for completeness and acceptability of an application for filing can be found on the OGD web page: /cder/ogd/anda_checklist.pdf.

What is a QOS?
The Quality Overall Summary (QOS) is the part of the CTD format that provides a summary of the CMC aspects of the application. It is an important tool to make the QbR review process more efficient.

How long should a QOS be?
OGD believes the CTD guidance(1) recommendation of 40 pages to be an appropriate compromise between level of detail and concision. The CTD guidance recommendation does not include tables and figures.
The same information should not be included in multiple locations in the QOS. Instead of repeating information, refer to the first location of the original information in the QOS by CTD section number.

(1) Guidance for Industry M4Q: The CTD - Quality (August 2001) /cder/guidance/4539Q.htm

Should the QOS be submitted electronically?
All applications should include an electronic QOS. For paper submissions, it is recommended that both an electronic QOS and a paper QOS be included.

What file format should be used for the QOS?
All applications, both eCTD and paper submissions, should have an electronic QOS. The electronic QOS should be contained in one document. Do not provide separate files for each section or question.
The electronic QOS should be provided as both a pdf and a Microsoft Word file. Microsoft Word files should be readable by Word 2003.

What fonts should be used in the QOS?
Because of FDA's internal data management systems, please use only these TrueType fonts: Times New Roman, Arial, Courier New. Times New Roman is recommended as the main text font.

Should the applicable QbR question be presented within the body of Module 2 of the relevant section, followed by the sponsor's answer?
Yes, include all the QbR questions without deletion in the QOS.

Can the granularity of Module 3 be used in Module 2?
Yes, the granularity can be used for section and subsection headings. However, the QOS should always be submitted as a single file.

Can color be used in the QOS?
Yes, but sponsors should ensure that the QOS is legible when printed in black and white. Colored text should not be used.

Is the QOS format available on the OGD webpage, and are the questions therein mandatory to be followed?
For an efficient review process, OGD desires all applications to be in a consistent format. See the OGD QbR questions and example QOS:
/cder/ogd/QbR-Quality_Overall_Summary_Outline.doc
/cder/ogd/OGD_Model_Quality_Overall_Summary.pdf
/cder/ogd/OGD_Model_QOS_IR_Product.pdf

For amendments to applications, should the documentation consist of a revision of the QOS? Would new PD reports be required?
The QOS should not be updated after submission of the original ANDA. Any additional data (including any new PD reports) should be provided as a stand-alone amendment. Responses to deficiencies should be provided in electronic format as both a pdf and a Microsoft Word file.

After January 2007, what will happen to an application that does not have a QOS or contains an incomplete QOS?
OGD will contact the sponsor and ask them to provide a QOS. If the sponsor provides the QOS before the application comes up for review, OGD will use the sponsor's QOS. OGD's QbR questions represent the current thinking about what information is essential to evaluate an ANDA. Reviewers will use deficiency letters to ask ANDA sponsors the questions that are not answered in the sponsor's QOS.
In February 2007, 75% of ANDAs submitted contained a QOS.

If a question is not applicable to a specific formulation or dosage form, should the question be deleted or left unanswered?
Sponsors should never delete a QbR question, but instead answer it as not applicable, with a brief justification. Please answer all parts of multi-part questions.

For sterile injectables, to what extent should sterility assurance be covered in the QOS?
The current QbR was not intended to cover data recommendations for Sterility Assurance information. In the future, other summaries will cover other disciplines. MAPP 5040.1, effective date 5/24/04, specifies the location of the microbiology information in the CTD format.

Where in the CTD should an applicant provide comparative dissolution data between the generic and RLD?
The comparison between the final ANDA formulation and the RLD should be provided in 5.3.1, and this comparison should be summarized in the QOS. Comparisons with other formulations conducted during development should be included in 3.P.2.

Is it possible to submit an amendment in CTD format for a product that was already submitted in the old ANDA format?
No, all amendments to an application under review should use the same format as the original submission.

How is a paper CTD to be paginated?
"Page numbering in the CTD format should be at the document level and not at the volume or module level. (The entire submission should never be numbered consecutively by page.) In general, all documents should have page numbers. Since the page numbering is at the document level, there should only be one set of page numbers for each document."(2) For paper submissions, tabs locating sections and subsections are useful.

(2) Submitting Marketing Applications According to the ICH/CTD Format: General Considerations /cder/guidance/4707dft.pdf

For an ANDA submitted in paper CTD format, can we submit the bioequivalence study report electronically? Or does the Agency require a paper copy only?
The bioequivalence summary tables should always be provided in electronic format.

Will QbR lead to longer review times?
Many of the current long review times result from applications that do not completely address all of the review issues, so OGD must request additional information through the deficiency process. This iterative process will be reduced with the use of the QbR template. Sponsors that provide a QOS that clearly and completely addresses all the questions in the QbR should find a reduction in the overall review time.

Will DMFs for the drug substance be required to be in CTD format if the ANDA is in CTD format?
No. CTD format DMFs are recommended.

What should be included in 3.2.R.1.P.2, Information on Components?
COAs for the drug substance, excipients, and packaging components used to produce the exhibit batch.

How should an ANDA sponsor respond to deficiencies?
OGD requests that sponsors provide a copy of the response to deficiencies in electronic format as both a pdf file and a Microsoft Word file.

QUALITY OVERALL SUMMARY CONTENT QUESTIONS

2.3 Introduction to the Quality Overall Summary

What information should be provided in the introduction?
Proprietary Name of Drug Product:
Non-Proprietary Name of Drug Product:
Non-Proprietary Name of Drug Substance:
Company Name:
Dosage Form:
Strength(s):
Route of Administration:
Proposed Indication(s):
Maximum Daily Dose:

2.3.S DRUG SUBSTANCE

What if an ANDA contains two or more active ingredients?
Prepare separate 2.3.S sections of the QOS for each API. Label them 2.3.S [API 1] and 2.3.S [API 2].

What if an ANDA contains two or more suppliers of the same active ingredient?
Provide one 2.3.S section. Information that is common between suppliers should not be repeated. Information that is not common between suppliers (e.g. different manufacturing processes) should have separate sections, labeled accordingly: (drug substance, manufacturer 1) and (drug substance, manufacturer 2).

Can information in this section be provided by reference to a DMF?
See individual questions for details. As a general overview:
• Information to be referenced to the DMF:
  o Drug substance structure elucidation;
  o Drug substance manufacturing process and controls;
  o Container/closure system used for packaging and storage of the drug substance;
  o Drug substance stability.
• Information requested from the ANDA sponsor:
  o Physicochemical properties;
  o Adequate drug substance specification and test methods, including structure confirmation;
  o Impurity profile in drug substance (process impurity or degradant);
  o Limits for impurities/residual solvents;
  o Method validation/verification;
  o Reference standard.

2.3.S.1 General Information

What are the nomenclature, molecular structure, molecular formula, and molecular weight? What format should be used for this information?
Chemical Name:
CAS #:
USAN:
Molecular Structure:
Molecular Formula:
Molecular Weight:

What are the physicochemical properties, including physical description, pKa, polymorphism, aqueous solubility (as a function of pH), hygroscopicity, melting point, and partition coefficient? What format should be used for this information?
Physical Description:
pKa:
Polymorphism:
Solubility Characteristics:
Hygroscopicity:
Melting Point:
Partition Coefficient:

Should all of these properties be reported, even if they are not critical?
Report ALL physicochemical properties listed in the question even if they are not critical. If a property is not quantified, explain why, for example: "No pKa because there are no ionizable groups in the chemical structure" or "No melting point because compound degrades on heating".

What solubility data should be provided?
The BCS solubility classification(3) of the drug substance should be determined for oral dosage forms. Report aqueous solubility as a function of pH at 37 °C in tabular form. Provide actual values for the solubility and not descriptive phrases such as "slightly soluble".

(3) See BCS guidance /cder/guidance/3618fnl.pdf for definition

Solvent Media and pH | Solubility Form I (mg/ml) | Solubility Form II (mg/ml)

Should pH-solubility profiles be provided for all known polymorphic forms?
No; it is essential that the pH-solubility profile be provided for the form present in the drug product. The relative solubility (at one pH) should be provided for any other more stable forms.

Physicochemical information such as polymorphic form, pKa, and solubility is usually in the confidential section of a DMF. Is reference to a DMF acceptable for this type of information?
No. Knowledge of API physicochemical properties is crucial to the successful development of a robust formulation and manufacturing process. In view of the critical nature of this information, OGD does not consider simple reference to the DMF to be acceptable.

The Guidance for Industry M4Q: The CTD - Quality Questions and Answers/Location Issues says only the polymorphic form used in the drug product should be described in S.1 and other known polymorphic forms should be described in S.3. OGD's examples placed information about all known polymorphic forms in S.1. Where does OGD want this information?
This information may be included in either S.1 or in S.3. Wherever presented, list all polymorphic forms reported in the literature, provide a brief discussion (i.e., which one is the most stable form), and indicate which form is used for this product.
Other polymorph information should be presented by the ANDA applicant as follows:
• 2.3.S.3 Characterization: Studies performed (if any) and methods used to identify the potential polymorphic forms of the drug substance (x-ray, DSC, and literature).
• 2.3.S.4 Specification: Justification of whether a polymorph specification is needed and the proposed analytical method.
• 2.3.P.2.1.1 Pharmaceutical Development - Drug Substance: Studies conducted to evaluate whether polymorphic form affects drug product properties.

Why does OGD need to know the partition coefficient and other physicochemical properties?
Physical and chemical properties may affect drug product development, manufacture, or performance.

2.3.S.2 Manufacture

Who manufactures the drug substance? How should this be answered?
Provide the name, address, and responsibility of each manufacturer, including contractors, and each proposed production site or facility involved in manufacturing and testing. Include the DMF number, refer to the Letter of Authorization in the body of data, and identify the US Agent (if applicable).

How do the manufacturing processes and controls ensure consistent production of the drug substance? Can this question be answered by reference to a DMF?
Yes. It is preferable to mention the source of the material (synthetic or natural) when both sources are available. The DMF holder's COA for the batch used to manufacture the exhibit batches should be provided in the body of data at 3.2.S.4.4.

If there is no DMF, what information should be provided?
A complete description of the manufacturing process and controls used to produce the drug substance.

2.3.S.3 Characterization

How was the drug substance structure elucidated and characterized? Can structure elucidation be answered by reference to a DMF?
Yes.

What information should be provided for chiral drug substances?
When the drug substance contains one or more chiral centers, the applicant should indicate whether it is a racemate or a specific enantiomer. When the drug substance is a specific enantiomer, tests to identify and/or quantify that enantiomer should be included. Discussion of chirality should include the potential for interconversion between enantiomers (e.g. racemization/epimerization).

How were potential impurities identified and characterized?
List related compounds potentially present in the drug substance. Identify impurities by names, structures, or RRT/HPLC. Under origin, classify impurities as process impurities and/or degradants.

Chemical Name | Structure | Origin | ID
[Specified Impurity] | | |

Is identification of potential impurities needed if there is a USP related substances method?
Yes.

Can this question be answered by reference to a DMF?
The ANDA should include a list of potential impurities and their origins. The methods used to identify and characterize these impurities can be incorporated by reference to the DMF.

According to the CTD guidance, section S.3 should contain a list of potential impurities and the basis for the acceptance criteria for impurities; however, in the OGD examples this information was in section S.4. Where should it go?
This information may be included in either S.3 or in S.4.

2.3.S.4 Control of Drug Substance

What is the drug substance specification? Does it include all the critical drug substance attributes that affect the manufacturing and quality of the drug product? What format should be used for presenting the specification?
Include a table of specifications. Include the results for the batch(es) of drug substance used to produce the exhibit batch(es). Identify impurities in a footnote. Test results and acceptance criteria should be provided as numerical values with proper units when applicable.

Tests | Acceptance criteria | Analytical procedure | Test results for Lot #
Appearance | | |
Identification (A:, B:) | | |
Assay | | |
Residual Solvents | | |
Specified Impurities (RC 1, RC 2, RC 3)* | | |
Any Unspecified Impurity | | |
Total Impurities | | |
[Additional Specification] | | |

* RC 1: [impurity identity]; RC 2: [impurity identity]; RC 3: [impurity identity]

What tests should be included in the drug substance specification?
USP drugs must meet the USP monograph requirements, but other tests should be included when appropriate. For USP and non-USP drugs, other references (EP, BP, JP, the DMF holder's specifications, and ICH guidances) can be used to help identify appropriate tests. Only relevant tests should be included in the specification. Justify whether specific tests such as optical rotation, water content, impurities, residual solvents, and solid-state properties (e.g. polymorphic form, particle size distribution, etc.) should be included in the specification of the drug substance or not.

Does OGD accept foreign pharmacopeia tests and criteria for drug substances?
There are several examples where a drug substance is covered by a monograph in EP or JP, but not in the USP. ANDA and DMF holders can obtain information regarding physicochemical properties, structure of related impurities, storage conditions, analytical test methods, and reference standards from EP or JP to support their submission to OGD. Although the USP remains our official compendium, we usually accept EP when the drug substance is not in USP (however, a complete validation report for EP methods should be provided in the ANDA).

For each test in the specification, is the analytical method(s) suitable for its intended use and, if necessary, validated? What is the justification for the acceptance criterion? What level of detail does OGD expect for the analytical method justifications and validations?
Provide a summary of each non-USP method. This can be in tabular or descriptive form. It should include the critical parameters for the method and system suitability criteria if applicable. See an example in section 2.3.P.5 of this document. For each analytical procedure, provide a page number/link to the location of validation information in Module 3. For quantitative non-compendial analytical methods, provide a summary table for the method validation. See an example in section 2.3.P.5 of this document.

Is validation needed for a USP method?
No, but USP methods should be verified, and an ANDA sponsor should ensure that the USP assay method is specific (e.g. the main peak can be separated from all process impurities arising from their manufacturing process and from degradation products) and that the USP related substance method is specific (e.g. all the process impurities and degradants can be separated from each other and also separated from the main peak).

Is validation needed if the USP method is modified or replaced by an in-house method?
Yes. Data supporting the equivalence or superiority of the in-house method should be provided. In case of a dispute, the USP method will be considered the official method.

Is reference to the DMF for drug substance analytical method validations acceptable?
No.
ANDA sponsors need to either provide full validation reports from the ANDA holder or reference full validation reports from the DMF holder (provided there is a copy of the method validation report in the ANDA and method verification from the ANDA holder).

Appearance
Identity
Assay
Impurities (Organic impurities)

What format should be used for related substances?
List related compounds potentially present in the drug substance (either here or in S.3).

Name | Structure | Origin
[Specified Impurity] | |

Provide batch results and justifications for the proposed acceptance criteria. See the guidance on ANDA DS impurities(4) for acceptable justifications. If the DS is compendial, include the USP limits in the table. If the RLD product is used for justification/qualification, then its results should also be included. If an ICH justification is used, then the calculation of the ICH limits should be explained.

To use the ICH limits, determine the Maximum Daily Dose (MDD)(a) indicated in the label and use it to calculate the ICH thresholds: Reporting Threshold (RT)(b), Identification Threshold (IT), and Qualification Threshold (QT)(c).

(a) The amount of drug substance administered per day.
(b) Higher reporting thresholds should be scientifically justified.
(c) Lower thresholds can be appropriate if the impurity is unusually toxic.

Sponsors can use the ICH limits to ensure the LOQ for the analytical method is equal to or below the RT, establish the limit for "Any Unspecified Impurity" at or below the IT, and establish limits for each "Specified Identified Impurity" and each "Specified Unidentified Impurity"(5) at or below the QT. An impurity must be qualified if a limit is established above the QT.
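As an illustration only (not part of the FAQ), the threshold arithmetic above can be sketched as follows. The function name is hypothetical; the limits encoded are the published ICH Q3A(R2) thresholds for drug-substance impurities, with the 1.0 mg/day absolute caps on IT and QT expressed as percentages of the MDD:

```python
def ich_q3a_thresholds(mdd_mg):
    """ICH Q3A(R2) impurity thresholds for a drug substance, given the
    Maximum Daily Dose (MDD) in mg/day.  Returns (RT, IT, QT) as
    percentages of the drug substance.

    MDD <= 2 g/day: RT 0.05%; IT 0.10% or 1.0 mg/day intake, whichever
                    is lower; QT 0.15% or 1.0 mg/day, whichever is lower.
    MDD >  2 g/day: RT 0.03%; IT 0.05%; QT 0.05%.
    """
    if mdd_mg <= 2000:
        rt = 0.05
        # express the 1.0 mg/day absolute cap as a percentage of the MDD
        cap_pct = 100.0 * 1.0 / mdd_mg
        it = min(0.10, cap_pct)
        qt = min(0.15, cap_pct)
    else:
        rt, it, qt = 0.03, 0.05, 0.05
    return rt, it, qt

# e.g. an MDD of 500 mg/day: the percentage thresholds govern
print(ich_q3a_thresholds(500))    # → (0.05, 0.1, 0.15)
# e.g. an MDD of 4 g/day: the tighter high-dose thresholds apply
print(ich_q3a_thresholds(4000))   # → (0.03, 0.05, 0.05)
```

For a low MDD the 1.0 mg/day cap dominates: at 100 mg/day, IT and QT both become 1.0 mg/100 mg = 1.0%, capped below by the 0.10% and 0.15% percentage limits, so the percentage limits still apply there; the cap only bites for MDDs above roughly 667 mg/day.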
Options for qualification include reference to the specific impurity listed in a USP monograph, comparison to the RLD product, identifying the impurity as a significant metabolite of the drug substance, literature references including other compendial monographs (EP, BP, JP), or conducting a toxicity study.

(4) /cder/guidance/6422dft.pdf
(5) The ANDA DS guidance states: "For unidentified impurities to be listed in the drug substance specification, we recommend that you clearly state the procedure used and assumptions made in establishing the level of the impurity. It is important that unidentified specified impurities be referred to by an appropriate qualitative analytical descriptive label (e.g., unidentified A, unidentified with relative retention of 0.9)." Q3A(R) states: "When identification of an impurity is not feasible, a summary of the laboratory studies demonstrating the unsuccessful effort should be included in the application."

Name | Drug Substance (Lot #) | USP Limit for Drug Substance | RLD Drug Product (Lot #) | Proposed Acceptance Criteria | Justification
[Specified Impurity, Identified] | [Batch Results] | | [Batch Results] | |
[Specified Impurity, Unidentified] | | | | |
Any Unspecified Impurity | | | | |
Total Impurities | | | | |

Include the column for RLD drug product only if that data is used to justify the drug substance limit (for example, a process impurity that is also found in the RLD).

What is OGD's policy on genotoxic impurities?
FDA is developing a guidance for genotoxic impurities. According to ICH Q3A, lower thresholds are appropriate for impurities that are unusually toxic.

If impurity levels for an approved generic drug are higher than the RLD, can the approved generic drug data be used as justification for a higher impurity specification?
According to the ANDA DP and DS Impurity guidances, any approved drug product can be used to qualify an impurity level. However, the guidances qualify this by later stating: "This approved human drug product is generally the reference listed drug (RLD). However, you may also compare the profile to a different drug product with the same route of administration and similar characteristics (e.g. tablet versus capsule) if samples of the reference listed drug are unavailable or in the case of an ANDA submitted pursuant to a suitability petition."

What if there are no impurities tests in the USP monograph for a USP drug substance? What should the ANDA sponsor do?
Please work with your supplier (DMF holder) to ensure that potential synthetic process impurities (e.g. isomers (if any), side reaction products), degradation impurities, metal catalysts, and residual solvents are adequately captured by your impurities test method. There may be information available in the published literature as well regarding potential impurities.

Can levels of an impurity found in the RLD and identified by RRT be used for qualification?
Qualification of a specified unidentified impurity by means of comparative RRT, UV spectra, and mass spectrometry with the RLD may be acceptable. However, the ANDA sponsor should make every attempt to identify the impurity. If levels are higher than in an approved drug product, then the sponsor should provide data for qualification of the safety of this impurity at this level.

Can a limit from a USP monograph for "any unspecified impurity" be used to justify a limit for "any unspecified impurity" greater than the ICH Q3A identification threshold?
No. The "any unspecified impurity" (any unknown) limit should not exceed the ICH Q3A IT based on MDD. Non-specific compendial acceptance criteria (e.g. Any Individual Impurity is NMT 0.5%) should not be used for justification of proposed impurity acceptance criteria. However, if the USP limit is less than the ICH threshold, then the USP limit should be used.

Can a limit for an identified impurity in the drug substance be qualified with data obtained from RLD drug product samples treated under stressed conditions?
No.
Test various samples of the marketed drug product over the span of its shelf life (ideally, near the end of shelf life). Data generated from accelerated or stressed studies of the RLD is considered inappropriate.

Impurities (Residual Solvents)

Will OGD base residual solvent acceptance limits on ICH limits or on process capability?
The ICH guidance on residual solvents(6) provides safety limits for residual solvents but also indicates that "residual solvents should be removed to the extent possible". ANDA residual solvent limits should be within the ICH safety limits, but the review of the ANDA includes both of these considerations. OGD generally accepts the ICH limits when they are applied to the drug product.

What about solvents that are not listed in Q3C?
Levels should be qualified for safety.

Impurities (Inorganic impurities)

Polymorphic Form

When is a specification on polymorphic form necessary?
See the ANDA polymorphism guidance(7) for a detailed discussion.

Particle Size

When is a drug substance particle size specification necessary?
A specification should be included when the particle size is critical to either drug product performance or manufacturing. For example, in a dry blending process, the particle size distribution of the drug substance and excipients may affect the mixing process. For a low-solubility drug, the drug substance particle size may have a critical impact on the dissolution of the drug product. For a high-solubility drug, particle size is often not critical to product performance.

(6) /cder/guidance/Q3Cfnl.pdf
(7) /cder/guidance/6154dft.pdf

What justification is necessary for drug substance particle size specifications?
As for other API properties, the specificity and range of acceptance criteria for particle size, and the justification thereof, could vary from none to very tight limits, depending upon the criticality of this property for that drug product. Particle size specifications should be justified based on whether a change in particle size will affect the ability to manufacture the product or the final product performance. In general, a sponsor should demonstrate through mechanistic understanding or empirical experiments how changes in material characteristics such as particle size affect their product. In the absence of pharmaceutical development studies, the particle size specification should represent the material used to produce the exhibit batch.

When should the particle size be specified as a distribution [d90, d50, d10], and when is a single-point limit appropriate?
When critical, particle size should be specified by the distribution. There may be other situations when a single-point limit can be justified by pharmaceutical development studies.

2.3.S.5 Reference Standards

How were the primary reference standards certified? For non-compendial, in-house reference standards, what type of qualification data is recommended? Will a COA be sufficient?
A COA should be included in Module 3, along with details of the standard's preparation, qualification, and characterization. This should be summarized in the QOS. In terms of the qualification data that may be requested, it is expected that these reference standards be of the highest possible purity (e.g. this may necessitate an additional recrystallization beyond those used in the normal manufacturing process of the active ingredient) and be fully characterized (e.g. the qualification report may need to include additional characterization information, such as proof of structure via NMR, beyond the identification tests that are typically reported in a drug substance COA). Standard laboratory practice for the preparation of reference standards entails recrystallization to constant physical measurements or to literature values for the pure material.

2.3.S.6 Container Closure System

What container closure is used for packaging and storage of the drug substance? Can this question be answered by reference to a DMF?
Yes.

Iso 0.0-18.1: a package for performing isotonic regression (manual)


Package 'Iso'                                                October 12, 2022

Version: 0.0-18.1
Date: 2019-06-05
Title: Functions to Perform Isotonic Regression
Author: Rolf Turner <********************.nz>
Maintainer: Rolf Turner <********************.nz>
Depends: R (>= 1.7.0)
Description: Linear order and unimodal order (univariate) isotonic regression; bivariate isotonic regression with linear order on both variables.
LazyData: true
License: GPL (>= 2)
URL: /~rolf/
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2020-05-26 05:13:34 UTC

R topics documented: biviso, pava, ufit, vigour

biviso -- Bivariate isotonic regression

Description

Bivariate isotonic regression with respect to simple (increasing) linear ordering on both variables.

Usage

biviso(y, w=NULL, eps=NULL, eps2=1e-9, ncycle=50000, fatal=TRUE, warn=TRUE)

Arguments

y: The matrix of observations to be isotonized. It must of course have at least two rows and at least two columns.

w: A matrix of weights, greater than or equal to zero, of the same dimension as y. If left NULL then w is created as a matrix all of whose entries are equal to 1.

eps: Convergence criterion. The algorithm is deemed to have converged if each entry of the output matrix, after the completion of the current iteration, does not differ by more than eps from the corresponding entry of the matrix after the completion of the previous iteration. If this argument is not supplied it defaults to sqrt(.Machine$double.eps).

eps2: Criterion used to determine whether isotonicity is "violated", whence whether (further) application of the "pool adjacent violators" procedure is required.

ncycle: The maximum number of cycles of the iteration procedure. Must be at least 2 (otherwise an error is given). If the procedure has not converged after ncycle iterations then an error is given. (See below.)

fatal: Logical scalar. Should the function stop if the subroutine returns an error code other than 0 or 4? If fatal is FALSE then output is returned by the function even if there was a "serious" fault. One can set fatal=FALSE to inspect the values of the objective matrix at various interim stages prior to convergence. See Examples.

warn: Logical scalar. Should a warning be produced if the subroutine returns a value of ifault equal to 4 (or to any other non-zero value when fatal has been set to FALSE)?

Details

See the paper by Bril et al. (References) and the references cited therein for details.

Value

A matrix of the same dimensions as y containing the corresponding isotonic values. It has an attribute icycle equal to the number of cycles required to achieve convergence of the algorithm.

Error Messages

The subroutine comprising Algorithm AS 206 produces an error code ifault with values from 0 to 6. The meaning of these codes is as follows:

• 0: No error.
• 1: Convergence was not attained in ncycle cycles.
• 2: At least one entry of w was negative.
• 3: Either nrow(y) or ncol(y) was less than 2.
• 4: A near-zero weight less than delta=0.00001 was replaced by delta.
• 5: Convergence was not attained and a non-zero weight was replaced by delta.
• 6: All entries of w were less than delta.

If ifault==4 a warning is given. All of the other non-zero values of ifault result in an error being given.

WARNING

This function appears not to achieve exact isotonicity, at least not quite. For instance one can do:

set.seed(42)
u <- matrix(runif(400),20,20)
iu <- biviso(u)
any(apply(iu,2,is.unsorted))

and get TRUE. It turns out that columns 13, 14, and 16 of iu have exceptions to isotonicity.
E.g.six of the values of diff(iu[,13])are less than zero.However only one of these is less than sqrt(.Machine$double.eps),and then only“marginally”smaller.So some of these negative values are“numerically different”from zero,but not by much.The largest in magnitude in this example,from column16,is-2.217624e-08—which is probably not of“practical importance”.Note also that this example occurs in a very artificial context in which there is no actual isotonic structure underlying the data.Author(s)Rolf Turner<********************.nz>/~rolfReferencesBril,Gordon;Dykstra,Richard;Pillers Carolyn,and Robertson,Tim;Isotonic regression in two independent variables;Algorithm AS206;JRSSC(Applied Statistics),vol.33,no.3,pp.352-357, 1984.See Alsopava()pava.sa()ufit()4pavaExamplesx<-1:20y<-1:10xy<-outer(x,y,function(a,b){a+b+0.5*a*b})+rnorm(200)ixy<-biviso(xy)set.seed(42)u<-matrix(runif(400),20,20)v<-biviso(u)progress<-list()for(n in1:9)progress[[n]]<-biviso(u,ncycle=50*n,fatal=FALSE,warn=FALSE)pava Linear order isotonic regression.DescriptionThe“pool adjacent violators algorithm”(PA V A)is applied to calculate the isotonic regression of a set of data,with respect to the usual increasing(or decreasing)linear ordering on the indices.Usagepava(y,w,decreasing=FALSE,long.out=FALSE,stepfun=FALSE)pava.sa(y,w,decreasing=FALSE,long.out=FALSE,stepfun=FALSE)Argumentsy Vector of data whose isotonic regression is to be calculated.w Optional vector of weights to be used for calculating a weighted isotonic regres-sion;if w is not given,all weights are taken to equal1.decreasing Logical scalar;should the isotonic regression be calculated with respect to de-creasing(rather than increasing)order?long.out Logical argument controlling the nature of the value returned.stepfun Logical scalar;if TRUE a step function representation of the isotonic regression is returned.DetailsThe function pava()uses dynamically loading of a fortran subroutine"pava"to effect the computa-tions.The function 
pava.sa()("sa"for"stand-alone")does all of the computations in raw R.Thus pava.sa()could be considerably slower for large data sets.The x values for the step function returned by these functions(if stepfun is TRUE)are thought of as being1,2,...,n=length(y).The knots of the step function are the x values(indices)following changes in the y values(i.e.the starting indices of the level sets,except for thefirst level set).The y value corresponding to thefirst level set is the“left hand”value of y or yleft.The step function is formed using the default arguments of stepfun().In particular it is right continuous.pava5 ValueIf long.out is TRUE then the result returned consists of a list whose components are:y thefitted valuesw thefinal weightstr a set of indices made up of the smallest index in each level set,which thus"keeps track"of the level sets.h a step function which represents the results of the isotonic regression.Thiscomponent is present only if stepfun is TRUE.If long.out is FALSE and stepfun is TRUE then only the step function is returned.If long.out and stepfun are both FALSE then only the vector offitted values is returned.Author(s)Rolf Turner<********************.nz>/~rolfReferencesRobertson,T.,Wright,F.T.and Dykstra,R.L.(1988).Order Restricted Statistical Inference.Wiley, New York.See Alsoufit()stepfun()biviso()Examples#Increasing order:y<-(1:20)+rnorm(20)ystar<-pava(y)plot(y)lines(ystar,type= s )#Decreasing order:z<-NULLfor(i in4:8){z<-c(z,rep(8-i+1,i)+0.05*(0:(i-1)))}zstar<-pava(z,decreasing=TRUE)plot(z)lines(zstar,type= s )#Using the stepfunction:zstar<-pava(z,decreasing=TRUE,stepfun=TRUE)plot(z)plot(zstar,add=TRUE,verticals=FALSE,pch=20,col.points="red")ufit Unimodal isotonic regression.DescriptionA"divide and conquer"algorithm is applied to calculate the isotonic regression of a set of data,fora unimodal order.If the mode of the unimodal order is not specified,then the optimal(in terms ofminimizing the error sum of squares)unimodalfit is 
calculated.Usageufit(y,lmode=NULL,x=NULL,w=NULL,lc=TRUE,rc=TRUE,type=c("raw","stepfun","both"))Argumentsy Vector of data whose isotonic regression is to be calculated.lmode Gives the location of the mode if this is specified;if the location is not specified, then all possible modes are tried and that one giving the smallest error sum ofsquares is used.x A largely notional vector of x values corresponding to the data vector y;the value of the mode must be given,or will be calculated in terms of these x values.Conceptually the model is y=m(x)+E,where m()is a unimodal function withmode at lmode,and where E is random"error".If x is not specified,it defaultsto an equi-spaced sequence on[0,1].w Optional vector of weights to be used for calculating a weighted isotonic regres-sion;if w is not given,all weights are taken to equal1.lc Logical argument;should the isotonization be left continuous?If lc==FALSE then the value of the isotonization just before the mode is set to NA,which causesline plots to have a jump discontinuity at(just to the left of)the mode.Thedefault is lc=TRUE.rc Logical argument;should the isotonization be right continuous?If rc==FALSE then the value of the isotonization just after the mode is set to NA,which causesline plots to have a jump discontinuity at(just to the right of)the mode.Thedefault is rc=TRUE.type String specifying the type of the output;see“Value”.May be abbreviated. 
DetailsDynamically loads fortran subroutines"pava","ufit"and"unimode"to do the actual work.ValueIf type=="raw"then the value is a list with components:x The argument x if this is specified,otherwise the default value.y Thefitted values.lmode The argument lmode if this is specified,otherwise the value of lmode which is found to minimize the error sum of squares.mse The mean squared error.If type=="both"then a component h which is the step function representation of the isotonic regression is added to the foregoing list.If type=="stepfun"then only the step function representation h is returned.Author(s)Rolf Turner<********************.nz>/~rolfReferencesMureika,R.A.,Turner,T.R.and Wollan,P.C.(1992).An algorithm for unimodal isotonic re-gression,with application to locating a maximum.University of New Brunswick Department of Mathematics and Statistics Technical Report Number92–4.Robertson,T.,Wright,F.T.and Dykstra,R.L.(1988).Order Restricted Statistical Inference.Wiley, New York.Shi,Ning-Zhong.(1988)A test of homogeneity for umbrella alternatives and tables of the level mun.Statist.—Theory Meth.vol.17,pp.657–670.Turner,T.R.,and Wollan,P.C.(1997)Locating a maximum using isotonic puta-tional Statistics and Data Analysis vol.25,pp.305–320.See Alsopava()biviso()Examplesx<-c(0.00,0.34,0.67,1.00,1.34,1.67,2.00,2.50,3.00,3.50,4.00,4.50,5.00,5.50,6.00,8.00,12.00,16.00,24.00)y<-c(0.0,61.9,183.3,173.7,250.6,238.1,292.6,293.8,268.0,285.9,258.8,297.4,217.3,226.4,170.1,74.2,59.8,4.1,6.1)z<-ufit(y,x=x,type="b")plot(x,y)lines(z,col="red")plot(z$h,do.points=FALSE,col.hor="blue",col.vert="blue",add=TRUE)8vigour vigour vigourDescriptionGrowth vigour of stands of spruce trees in New Brunswick,Canada.Usagedata("vigour")FormatA data frame with23observations(rows).Thefirst column is the year of observation(1965to1987inclusive).The otherfive columns are observations on the vigour of growth of the given stand in each of the years.DetailsThe stands each had different initial tree densities.It 
was expected that vigour would initially increase(as the trees increased in size)and then level off and start to decrease as the growing trees encroached upon each others’space and competed more strongly for resources such as moisture, nutrients,and light.It was further expected that the position of the mode of the vigour observations would depend upon the initial densities.SourceThese data were collected and generously made available by Kirk Schmidt who was at the time of collecting the data a graduate student in the Department of Forest Engineering at the University of New Brunswick,Fredericton,New Brunswick,Canada.The data were collected as part of his research for his Master’s degree(supervised by Professor Ted Needham)at the University of New Brunswick.See Schmidt(1993).ReferencesK.D.Schmidt(1993).Development of a precommercial thinning guide for black spruce.Thesis (M.Sc.F.),University of New Brunswick,Faculty of Forestry.Examplesmatplot(vigour[,1],vigour[,2:6],main="Growth vigour of stands of New Brunswick spruce",xlab="year",ylab="vigour",type="b")Index∗datasetsvigour,8∗nonlinearbiviso,2pava,4ufit,6∗regressionbiviso,2pava,4ufit,6biviso,2,5,7pava,3,4,7pava.sa,3stepfun,5ufit,3,5,6vigour,89。
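The pool-adjacent-violators algorithm that pava() and pava.sa() implement can be sketched in a few lines of Python (a minimal illustration of the algorithm only, not the package's Fortran or R code; all names here are my own):

```python
def pava(y, w=None):
    """Weighted isotonic regression (increasing order) by pool adjacent violators.

    Returns the fitted values. A minimal sketch of the algorithm the Iso
    package implements in compiled Fortran.
    """
    n = len(y)
    if w is None:
        w = [1.0] * n
    # Each block holds [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool the last two blocks while they violate isotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand each pooled block back to one fitted value per data point.
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # the violating pair (3, 2) pools to 2.5
```

Each new point starts its own block; whenever the last two blocks violate monotonicity their weighted means are pooled, and the pooled means are finally expanded back to one fitted value per observation.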

Propensity Score Matching (PSM), SSWR 2004


If the "comparison" group is compared to only the best cases from the treatment group, the result may be regression toward the mean, which:
• makes the comparison group look better, and
• makes the treatment group look worse.
This work focused on the problem of selection bias and on traditional approaches to program evaluation, including randomized experiments, classical matching, and statistical controls. Heckman later developed the "difference-in-differences" method.
NSCAW data used to illustrate PSM were collected under funding by the Administration on Children, Youth, and Families of the U.S. Department of Health and Human Services. Findings do not represent the official position or policies of the U.S. DHHS. PSM analyses were partially funded by the Robert Wood Johnson Foundation and the Children's Bureau's Child Welfare Research Fellowship. Results are preliminary and not quotable. Contact information: sguo@

Interchangeability and Measurement Technology (Chinese-English bilingual)

• Tolerances and fits.
• This book includes tables and calculations for easy selection of fits of machine parts and determination of their dimensional tolerances and deviations. Using this tool the following tasks can be solved:
• Selection of suitable fits of machine parts according to the international standard ISO 286.
• Determination of dimensional tolerances and deviations of machine parts according to the international standard ISO 286.
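For sizes up to 500 mm, ISO 286 derives the standard tolerance grades from a tolerance factor i = 0.45·D^(1/3) + 0.001·D (in micrometres, where D in mm is the geometric mean of the size-range limits), with IT6 = 10i, IT7 = 16i, IT8 = 25i, and roughly a factor of 1.6 per further grade. A small sketch of that calculation (illustrative only; always confirm against the published ISO 286 tables):

```python
def tolerance_factor(d_min, d_max):
    """ISO 286 tolerance factor i in micrometres for a size range in mm
    (valid for sizes up to 500 mm). D is the geometric mean of the limits."""
    D = (d_min * d_max) ** 0.5
    return 0.45 * D ** (1 / 3) + 0.001 * D

# Multipliers of i for grades IT5..IT12 (per ISO 286-1).
IT_MULT = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64, 11: 100, 12: 160}

def it_tolerance(d_min, d_max, grade):
    """Standard tolerance in micrometres, rounded to whole micrometres."""
    return round(IT_MULT[grade] * tolerance_factor(d_min, d_max))

# Example: the 18-30 mm range; the tables give IT6 = 13 um and IT7 = 21 um.
print(it_tolerance(18, 30, 6), it_tolerance(18, 30, 7))
```

Note that the published tables apply their own rounding rules, so computed values can differ by a micrometre from the tabulated ones in some ranges.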
• K, M, N: the upper deviation ES is the basic deviation. For grades above IT8, ES = -ei; for grades at or below IT8, ES = -ei + Δ, where Δ = ITn - ITn-1.
• J: available only in grades 6, 7 and 8.
• j, k: the lower deviation is the basic deviation. j is available only in grades 5, 6, 7 and 8. k is specific, divided into two parts: one for grades below IT3 or above IT7, the other for grades IT3 to IT7.
Characteristics and Application
A-H (holes): the lower deviation is the basic deviation, and it is positive. a-h (shafts): the upper deviation is the basic deviation, and it is negative.

Propensity Matching Analysis (in depth)


Propensity matching analysis: many phenomena and associations seem self-evident, yet demonstrating these "simple" phenomena and associations can consume enormous human and material resources.

Before the 1930s, the matching method (also called the control method) held an overwhelmingly dominant position in causal research. Scientists believed that only by making the treatment and control groups as similar as possible in every respect could the difference between the two groups be attributed to the treatment factor.

However, matching the baseline characteristics (confounders) of the treatment and control groups is not only difficult to carry out but also consumes substantial resources, and in many situations there are factors that the experimenter simply cannot control.

The concept of randomization originated at the Morrow Plots of the University of Illinois. Through the split-plot experiment, Fisher succeeded in demonstrating a conclusion that may look trivially simple today but had cost generations of scientists a hundred years of effort: soil quality is a vital component of agricultural productivity.

He also pioneered the now widely known analysis of variance (ANOVA), bringing the randomized experiment into the hall of causal analysis, where it became the gold standard.

Why do propensity score analysis? In the health field, the randomized clinical trial (RCT) is the most typical application of the randomized experiment.

To demonstrate the effect of some treatment (or factor), subjects are randomized into groups and studied prospectively; this ensures, to the greatest extent possible, that known and unknown confounders are balanced across the groups, revealing the true effect of the treatment factor.

But the strict inclusion and exclusion criteria of an RCT inevitably limit how far its results can be generalized, and the cost and organizational difficulties are often more than investigators can bear.

Moreover, many research questions cannot be randomized at all, and in some situations randomization would even violate research ethics.

Non-randomized studies (such as observational studies and non-randomized intervention studies), by contrast, tolerate these problems of RCTs far better and are much more widely used in practice.

How to use data from non-randomized studies to investigate causality has long been a question of great interest in epidemiology and statistics.
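Propensity score methods are one answer: each treated unit is matched to the control unit with the closest estimated propensity score. A bare-bones greedy 1:1 nearest-neighbour matcher (illustrative only; the scores are assumed to come from a previously fitted logistic model):

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.

    treated, controls: dicts mapping unit id -> estimated propensity score.
    Returns a dict {treated_id: control_id}; treated units with no control
    within the caliper stay unmatched. Each control is used at most once.
    """
    available = dict(controls)
    matches = {}
    # Match treated units in descending score order (hardest cases first).
    for t_id, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            matches[t_id] = c_id
            del available[c_id]
    return matches

treated = {"t1": 0.80, "t2": 0.55}
controls = {"c1": 0.78, "c2": 0.52, "c3": 0.20}
print(greedy_match(treated, controls))  # -> {'t1': 'c1', 't2': 'c2'}
```

Real analyses add many refinements (matching with replacement, optimal rather than greedy matching, balance diagnostics after matching), but the core idea is this nearest-score pairing.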

Solutions: International Financial Management (Cheol Eun), End-of-Chapter Answers, Chapter 14


CHAPTER 14  INTEREST RATE AND CURRENCY SWAPS

SUGGESTED ANSWERS AND SOLUTIONS TO END-OF-CHAPTER QUESTIONS AND PROBLEMS

QUESTIONS

1. Describe the difference between a swap broker and a swap dealer.

Answer: A swap broker arranges a swap between two counterparties for a fee without taking a risk position in the swap. A swap dealer is a market maker of swaps and assumes a risk position in matching opposite sides of a swap and in assuring that each counterparty fulfills its contractual obligation to the other.

2. What is the necessary condition for a fixed-for-floating interest rate swap to be possible?

Answer: For a fixed-for-floating interest rate swap to be possible it is necessary for a quality spread differential to exist. In general, the default-risk premium of the fixed-rate debt will be larger than the default-risk premium of the floating-rate debt.

3. Discuss the basic motivations for a counterparty to enter into a currency swap.

Answer: One basic reason for a counterparty to enter into a currency swap is to exploit the comparative advantage of the other in obtaining debt financing at a lower interest rate than could be obtained on its own. A second basic reason is to lock in long-term exchange rates in the repayment of debt service obligations denominated in a foreign currency.

4. How does the theory of comparative advantage relate to the currency swap market?

Answer: Name recognition is extremely important in the international bond market. Without it, even a creditworthy corporation will find itself paying a higher interest rate for foreign denominated funds than a local borrower of equivalent creditworthiness. Consequently, two firms of equivalent creditworthiness can each exploit their respective name recognition by borrowing in their local capital market at a favorable rate and then re-lending at the same rate to the other.

5. Discuss the risks confronting an interest rate and currency swap dealer.

Answer: An interest rate and currency swap dealer confronts many different types of risk. Interest rate risk refers to the risk of interest rates changing unfavorably before the swap dealer can lay off on an opposing counterparty the unplaced side of a swap with another counterparty. Basis risk refers to the floating rates of two counterparties being pegged to two different indices. In this situation, since the indexes are not perfectly positively correlated, the swap bank may not always receive enough floating rate funds from one counterparty to pass through to satisfy the other side, while still covering its desired spread, or avoiding a loss. Exchange-rate risk refers to the risk the swap bank faces from fluctuating exchange rates during the time it takes the bank to lay off a swap it undertakes on an opposing counterparty before exchange rates change. Additionally, the dealer confronts credit risk from one counterparty defaulting and its having to fulfill the defaulting party's obligation to the other counterparty. Mismatch risk refers to the difficulty of the dealer finding an exact opposite match for a swap it has agreed to take. Sovereign risk refers to a country imposing exchange restrictions on a currency involved in a swap, making it costly, or impossible, for a counterparty to honor its swap obligations to the dealer. In this event, provisions exist for the early termination of a swap, which means a loss of revenue to the swap bank.

6. Briefly discuss some variants of the basic interest rate and currency swaps diagramed in the chapter.

Answer: Instead of the basic fixed-for-floating interest rate swap, there are also zero-coupon-for-floating rate swaps where the fixed rate payer makes only one zero-coupon payment at maturity on the notional value. There are also floating-for-floating rate swaps where each side is tied to a different floating rate index or a different frequency of the same index. Currency swaps need not be fixed-for-fixed; fixed-for-floating and floating-for-floating rate currency swaps are frequently arranged. Moreover, both currency and interest rate swaps can be amortizing as well as non-amortizing.

7. If the cost advantage of interest rate swaps would likely be arbitraged away in competitive markets, what other explanations exist to explain the rapid development of the interest rate swap market?

Answer: All types of debt instruments are not always available to all borrowers. Interest rate swaps can assist in market completeness. That is, a borrower may use a swap to get out of one type of financing and to obtain a more desirable type of credit that is more suitable for its asset maturity structure.

8. Suppose Morgan Guaranty, Ltd. is quoting swap rates as follows: 7.75 - 8.10 percent annually against six-month dollar LIBOR for dollars and 11.25 - 11.65 percent annually against six-month dollar LIBOR for British pound sterling. At what rates will Morgan Guaranty enter into a $/£ currency swap?

Answer: Morgan Guaranty will pay annual fixed-rate dollar payments of 7.75 percent against receiving six-month dollar LIBOR flat, or it will receive fixed-rate annual dollar payments at 8.10 percent against paying six-month dollar LIBOR flat. Morgan Guaranty will make annual fixed-rate £ payments at 11.25 percent against receiving six-month dollar LIBOR flat, or it will receive annual fixed-rate £ payments at 11.65 percent against paying six-month dollar LIBOR flat. Thus, Morgan Guaranty will enter into a currency swap in which it would pay annual fixed-rate dollar payments of 7.75 percent in return for receiving annual fixed-rate £ payments at 11.65 percent, or it will receive annual fixed-rate dollar payments at 8.10 percent against paying annual fixed-rate £ payments at 11.25 percent.

9. A U.S. company needs to raise €50,000,000. It plans to raise this money by issuing dollar-denominated bonds and using a currency swap to convert the dollars to euros. The company expects interest rates in both the United States and the euro zone to fall.
a. Should the swap be structured with interest paid at a fixed or a floating rate?
b. Should the swap be structured with interest received at a fixed or a floating rate?

CFA Guideline Answer:
a. The U.S. company would pay the interest rate in euros. Because it expects that the interest rate in the euro zone will fall in the future, it should choose a swap with a floating rate on the interest paid in euros to let the interest rate on its debt float down.
b. The U.S. company would receive the interest rate in dollars. Because it expects that the interest rate in the United States will fall in the future, it should choose a swap with a fixed rate on the interest received in dollars to prevent the interest rate it receives from going down.

*10. Assume a currency swap in which two counterparties of comparable credit risk each borrow at the best rate available, yet the nominal rate of one counterparty is higher than the other. After the initial principal exchange, is the counterparty that is required to make interest payments at the higher nominal rate at a financial disadvantage to the other in the swap agreement? Explain your thinking.

Answer: Superficially, it may appear that the counterparty paying the higher nominal rate is at a disadvantage since it has borrowed at a lower rate. However, if the forward rate is an unbiased predictor of the expected spot rate and if IRP holds, then the currency with the higher nominal rate is expected to depreciate versus the other. In this case, the counterparty making the interest payments at the higher nominal rate is in effect making interest payments at the lower interest rate because the payment currency is depreciating in value versus the borrowing currency.

PROBLEMS

1.
Alpha and Beta Companies can borrow for a five-year term at the following rates:

                              Alpha    Beta
Moody's credit rating         Aa       Baa
Fixed-rate borrowing cost     10.5%    12.0%
Floating-rate borrowing cost  LIBOR    LIBOR + 1%

a. Calculate the quality spread differential (QSD).
b. Develop an interest rate swap in which both Alpha and Beta have an equal cost savings in their borrowing costs. Assume Alpha desires floating-rate debt and Beta desires fixed-rate debt. No swap bank is involved in this transaction.

Solution:
a. The QSD = (12.0% - 10.5%) minus (LIBOR + 1% - LIBOR) = .5%.
b. Alpha needs to issue fixed-rate debt at 10.5% and Beta needs to issue floating-rate debt at LIBOR + 1%. Alpha needs to pay LIBOR to Beta. Beta needs to pay 10.75% to Alpha. If this is done, Alpha's floating-rate all-in-cost is: 10.5% + LIBOR - 10.75% = LIBOR - .25%, a .25% savings over issuing floating-rate debt on its own. Beta's fixed-rate all-in-cost is: LIBOR + 1% + 10.75% - LIBOR = 11.75%, a .25% savings over issuing fixed-rate debt.

2. Do problem 1 over again, this time assuming more realistically that a swap bank is involved as an intermediary. Assume the swap bank is quoting five-year dollar interest rate swaps at 10.7% - 10.8% against LIBOR flat.

Solution: Alpha will issue fixed-rate debt at 10.5% and Beta will issue floating-rate debt at LIBOR + 1%. Alpha will receive 10.7% from the swap bank and pay it LIBOR. Beta will pay 10.8% to the swap bank and receive from it LIBOR. If this is done, Alpha's floating-rate all-in-cost is: 10.5% + LIBOR - 10.7% = LIBOR - .20%, a .20% savings over issuing floating-rate debt on its own. Beta's fixed-rate all-in-cost is: LIBOR + 1% + 10.8% - LIBOR = 11.8%, a .20% savings over issuing fixed-rate debt.

3. Company A is a AAA-rated firm desiring to issue five-year FRNs. It finds that it can issue FRNs at six-month LIBOR + .125 percent or at three-month LIBOR + .125 percent. Given its asset structure, three-month LIBOR is the preferred index.
Company B is an A-rated firm that also desires to issue five-year FRNs. It finds it can issue at six-month LIBOR + 1.0 percent or at three-month LIBOR + .625 percent. Given its asset structure, six-month LIBOR is the preferred index. Assume a notional principal of $15,000,000. Determine the QSD and set up a floating-for-floating rate swap where the swap bank receives .125 percent and the two counterparties share the remaining savings equally.

Solution: The quality spread differential is [(six-month LIBOR + 1.0 percent) minus (six-month LIBOR + .125 percent) =] .875 percent minus [(three-month LIBOR + .625 percent) minus (three-month LIBOR + .125 percent) =] .50 percent, which equals .375 percent. If the swap bank receives .125 percent, each counterparty is to save .125 percent. To effect the swap, Company A would issue FRNs indexed to six-month LIBOR and Company B would issue FRNs indexed to three-month LIBOR. Company B might make semi-annual payments of six-month LIBOR + .125 percent to the swap bank, which would pass all of it through to Company A. Company A, in turn, might make quarterly payments of three-month LIBOR to the swap bank, which would pass through three-month LIBOR - .125 percent to Company B. On an annualized basis, Company B will remit to the swap bank six-month LIBOR + .125 percent and pay three-month LIBOR + .625 percent on its FRNs. It will receive three-month LIBOR - .125 percent from the swap bank. This arrangement results in an all-in cost of six-month LIBOR + .875 percent, which is a rate .125 percent below the FRNs indexed to six-month LIBOR + 1.0 percent Company B could issue on its own. Company A will remit three-month LIBOR to the swap bank and pay six-month LIBOR + .125 percent on its FRNs. It will receive six-month LIBOR + .125 percent from the swap bank.
This arrangement results in an all-in cost of three-month LIBOR for Company A, which is .125 percent less than the FRNs indexed to three-month LIBOR + .125 percent it could issue on its own. The arrangements with the two counterparties net the swap bank .125 percent per annum, received quarterly.

*4. A corporation enters into a five-year interest rate swap with a swap bank in which it agrees to pay the swap bank a fixed rate of 9.75 percent annually on a notional amount of €15,000,000 and receive LIBOR. As of the second reset date, determine the price of the swap from the corporation's viewpoint assuming that the fixed-rate side of the swap has increased to 10.25 percent.

Solution: On the reset date, the present value of the future floating-rate payments the corporation will receive from the swap bank based on the notional value will be €15,000,000. The present value of a hypothetical bond issue of €15,000,000 with three remaining 9.75 percent coupon payments at the new fixed rate of 10.25 percent is €14,814,304. This sum represents the present value of the remaining payments the swap bank will receive from the corporation. Thus, the swap bank should be willing to buy and the corporation should be willing to sell the swap for €15,000,000 - €14,814,304 = €185,696.

5. Karla Ferris, a fixed income manager at Mangus Capital Management, expects the current positively sloped U.S. Treasury yield curve to shift parallel upward.

Ferris owns two $1,000,000 corporate bonds maturing on June 15, 1999, one with a variable rate based on 6-month U.S. dollar LIBOR and one with a fixed rate. Both yield 50 basis points over comparable U.S. Treasury market rates, have very similar credit quality, and pay interest semi-annually.

Ferris wished to execute a swap to take advantage of her expectation of a yield curve shift and believes that any difference in credit spread between LIBOR and U.S. Treasury market rates will remain constant.

a. Describe a six-month U.S. dollar LIBOR-based swap that would allow Ferris to take advantage of her expectation. Discuss, assuming Ferris' expectation is correct, the change in the swap's value and how that change would affect the value of her portfolio. [No calculations required to answer part a.]

Instead of the swap described in part a, Ferris would use the following alternative derivative strategy to achieve the same result.

b. Explain, assuming Ferris' expectation is correct, how the following strategy achieves the same result in response to the yield curve shift. [No calculations required to answer part b.]

Settlement Date    Nominal Eurodollar Futures Contract Value
12-15-97           $1,000,000
03-15-98           1,000,000
06-15-98           1,000,000
09-15-98           1,000,000
12-15-98           1,000,000
03-15-99           1,000,000

c. Discuss one reason why these two derivative strategies provide the same result.

CFA Guideline Answer

a. The Swap Value and its Effect on Ferris' Portfolio

Because Karla Ferris believes interest rates will rise, she will want to swap her $1,000,000 fixed-rate corporate bond interest to receive six-month U.S. dollar LIBOR. She will continue to hold her variable-rate six-month U.S. dollar LIBOR rate bond because its payments will increase as interest rates rise. Because the credit risk between the U.S. dollar LIBOR and the U.S. Treasury market is expected to remain constant, Ferris can use the U.S. dollar LIBOR market to take advantage of her interest rate expectation without affecting her credit risk exposure.

To execute this swap, she would enter into a two-year term, semi-annual settle, $1,000,000 nominal principal, pay fixed-receive floating U.S. dollar LIBOR swap. If rates rise, the swap's mark-to-market value will increase because the U.S. dollar LIBOR Ferris receives will be higher than the LIBOR rates from which the swap was priced. If Ferris were to enter into the same swap after interest rates rise, she would pay a higher fixed rate to receive LIBOR rates. This higher fixed rate would be calculated as the present value of now higher forward LIBOR rates. Because Ferris would be paying a stated fixed rate that is lower than this new higher-present-value fixed rate, she could sell her swap at a premium. This premium is called the "replacement cost" value of the swap.

b. Eurodollar Futures Strategy

The appropriate futures hedge is to short a combination of Eurodollar futures contracts with different settlement dates to match the coupon payments and principal. This futures hedge accomplishes the same objective as the pay fixed-receive floating swap described in Part a. By discussing how the yield-curve shift affects the value of the futures hedge, the candidate can show an understanding of how Eurodollar futures contracts can be used instead of a pay fixed-receive floating swap.

If rates rise, the mark-to-market values of the Eurodollar contracts decrease; their yields must increase to equal the new higher forward and spot LIBOR rates. Because Ferris must short or sell the Eurodollar contracts to duplicate the pay fixed-receive variable swap in Part a, she gains as the Eurodollar futures contracts decline in value and the futures hedge increases in value. As the contracts expire, or if Ferris sells the remaining contracts prior to maturity, she will recognize a gain that increases her return. With higher interest rates, the value of the fixed-rate bond will decrease. If the hedge ratios are appropriate, the value of the portfolio, however, will remain unchanged because of the increased value of the hedge, which offsets the fixed-rate bond's decrease.

c. Why the Derivative Strategies Achieve the Same Result

Arbitrage market forces make these two strategies provide the same result to Ferris. The two strategies are different mechanisms for different market participants to hedge against increasing rates. Some money managers prefer swaps; others, Eurodollar futures contracts. Each institutional market participant has different preferences and choices in hedging interest rate risk. The key is that market makers moving into and out of these two markets ensure that the markets are similarly priced and provide similar returns. As an example of such an arbitrage, consider what would happen if forward market LIBOR rates were lower than swap market LIBOR rates. An arbitrageur would, under such circumstances, sell the futures/forwards contracts and enter into a receive fixed-pay variable swap. This arbitrageur could now receive the higher fixed rate of the swap market and pay the lower fixed rate of the futures market. He or she would pocket the difference between the two rates (without risk and without having to make any [net] investment). This arbitrage could not last.

As more and more market makers sold Eurodollar futures contracts, the selling pressure would cause their prices to fall and yields to rise, which would cause the present value cost of selling the Eurodollar contracts also to increase. Similarly, as more and more market makers offer to receive fixed rates in the swap market, market makers would have to lower their fixed rates to attract customers so they could lock in the lower hedge cost in the Eurodollar futures market. Thus, Eurodollar forward contract yields would rise and/or swap market receive-fixed rates would fall until the two rates converge. At this point, the arbitrage opportunity would no longer exist and the swap and forwards/futures markets would be in equilibrium.

6. Rone Company asks Paula Scott, a treasury analyst, to recommend a flexible way to manage the company's financial risks.

Two years ago, Rone issued a $25 million (U.S.$), five-year floating rate note (FRN). The FRN pays an annual coupon equal to one-year LIBOR plus 75 basis points.
The FRN is non-callable and will be repaid at par at maturity.

Scott expects interest rates to increase and she recognizes that Rone could protect itself against the increase by using a pay-fixed swap. However, Rone's Board of Directors prohibits both short sales of securities and swap transactions. Scott decides to replicate a pay-fixed swap using a combination of capital market instruments.

a. Identify the instruments needed by Scott to replicate a pay-fixed swap and describe the required transactions.
b. Explain how the transactions in Part a are equivalent to using a pay-fixed swap.

CFA Guideline Answer

a. The instruments needed by Scott are a fixed-coupon bond and a floating rate note (FRN). The transactions required are to:
· issue a fixed-coupon bond with a maturity of three years and a notional amount of $25 million, and
· buy a $25 million FRN of the same maturity that pays one-year LIBOR plus 75 bps.

b. At the outset, Rone will issue the bond and buy the FRN, resulting in a zero net cash flow at initiation. At the end of the third year, Rone will repay the fixed-coupon bond and will be repaid the FRN, resulting in a zero net cash flow at maturity. The net cash flow associated with each of the three annual coupon payments will be the difference between the inflow (to Rone) on the FRN and the outflow (to Rone) on the bond. Movements in interest rates during the three-year period will determine whether the net cash flow associated with the coupons is positive or negative to Rone. Thus, the bond transactions are financially equivalent to a plain vanilla pay-fixed interest rate swap.

7. A company based in the United Kingdom has an Italian subsidiary. The subsidiary generates €25,000,000 a year, received in equivalent semiannual installments of €12,500,000. The British company wishes to convert the euro cash flows to pounds twice a year. It plans to engage in a currency swap in order to lock in the exchange rate at which it can convert the euros to pounds.
The current exchange rate is €1.5/£. The fixed rate on a plain vanilla currency swap in pounds is 7.5 percent per year, and the fixed rate on a plain vanilla currency swap in euros is 6.5 percent per year.

a. Determine the notional principals in euros and pounds for a swap with semiannual payments that will help achieve the objective.

b. Determine the semiannual cash flows from this swap.

CFA Guideline Answer

a. The semiannual cash flow to be converted into pounds is €25,000,000/2 = €12,500,000. In order to create a swap to convert €12,500,000, the equivalent notional principals are

· Euro notional principal = €12,500,000/(0.065/2) = €384,615,385
· Pound notional principal = €384,615,385/(€1.5/£) = £256,410,257

b. The cash flows from the swap will now be

· Company makes swap payment = €384,615,385 × (0.065/2) = €12,500,000
· Company receives swap payment = £256,410,257 × (0.075/2) = £9,615,385

The company has effectively converted its euro cash receipts to pounds.

8. Ashton Bishop is the debt manager for World Telephone, which needs €3.33 billion in financing for its operations. Bishop is considering the choice between issuance of debt denominated in:

· Euros (€), or
· U.S. dollars, accompanied by a combined interest rate and currency swap.

a. Explain one risk World would assume by entering into the combined interest rate and currency swap.

Bishop believes that issuing the U.S.-dollar debt and entering into the swap can lower World's cost of debt by 45 basis points. Immediately after selling the debt issue, World would swap the U.S. dollar payments for euro payments throughout the maturity of the debt. She assumes a constant currency exchange rate throughout the tenor of the swap.

Exhibit 1 gives details for the two alternative debt issues. Exhibit 2 provides current information about spot currency exchange rates and the 3-year tenor euro/U.S. dollar currency and interest rate swap.

Exhibit 1. World Telephone Debt Details
Characteristic         Euro Currency Debt     U.S. Dollar Currency Debt
Par value              €3.33 billion          $3 billion
Term to maturity       3 years                3 years
Fixed interest rate    6.25%                  7.75%
Interest payment       Annual                 Annual

Exhibit 2. Currency Exchange Rate and Swap Information
Spot currency exchange rate: $0.90 per euro ($0.90/€1.00)
3-year tenor euro/U.S. dollar fixed interest rates: 5.80% euro / 7.30% U.S. dollar

b. Show the notional principal and interest payment cash flows of the combined interest rate and currency swap.

Note: Your response should show both the correct currency ($ or €) and amount for each cash flow. Answer problem b in the template provided.

Template for problem b

c. State whether or not World would reduce its borrowing cost by issuing the debt denominated in U.S. dollars, accompanied by the combined interest rate and currency swap. Justify your response with one reason.

CFA Guideline Answer

a. World would assume both counterparty risk and currency risk. Counterparty risk is the risk that Bishop's counterparty will default on payment of principal or interest cash flows in the swap. Currency risk is the currency exposure risk associated with all cash flows. If the US$ appreciates (euro depreciates), there would be a loss on funding of the coupon payments; however, if the US$ depreciates, then the dollars will be worth less at the swap's maturity.

b.

                                  Year 0           Year 1             Year 2             Year 3
World pays
  Notional principal              $3 billion                                             €3.33 billion
  Interest payment                                 €193.14 million¹   €193.14 million    €193.14 million
World receives
  Notional principal              €3.33 billion                                          $3 billion
  Interest payment                                 $219 million²      $219 million       $219 million

¹ €193.14 million = €3.33 billion × 5.8%
² $219 million = $3 billion × 7.3%

c. World would not reduce its borrowing cost, because what Bishop saves in the euro market, she loses in the dollar market. The interest rate on the euro pay side of her swap is 5.80 percent, lower than the 6.25 percent she would pay on her euro debt issue, an interest savings of 45 bps. But Bishop is only receiving 7.30 percent in U.S. dollars to pay on her 7.75 percent U.S. debt interest payment, an interest shortfall of 45 bps. Given a constant currency exchange rate, this 45 bps shortfall exactly offsets the savings from paying 5.80 percent versus 6.25 percent. Thus there is no interest cost saving from selling the U.S. dollar debt issue and entering into the swap arrangement.

MINI CASE: THE CENTRALIA CORPORATION'S CURRENCY SWAP

The Centralia Corporation is a U.S. manufacturer of small kitchen electrical appliances. It has decided to construct a wholly owned manufacturing facility in Zaragoza, Spain, to manufacture microwave ovens for sale in the European Union. The plant is expected to cost €5,500,000 and to take about one year to complete. The plant is to be financed over its economic life of eight years. The borrowing capacity created by this capital expenditure is $2,900,000; the remainder of the plant will be equity financed. Centralia is not well known in the Spanish or international bond market; consequently, it would have to pay 7 percent per annum to borrow euros, whereas the normal borrowing rate in the euro zone for well-known firms of equivalent risk is 6 percent. Alternatively, Centralia can borrow dollars in the U.S. at a rate of 8 percent.

Study Questions

1. Suppose a Spanish MNC has a mirror-image situation and needs $2,900,000 to finance a capital expenditure of one of its U.S. subsidiaries. It finds that it must pay a 9 percent fixed rate in the United States for dollars, whereas it can borrow euros at 6 percent. The exchange rate has been forecast to be $1.33/€1.00 in one year. Set up a currency swap that will benefit each counterparty.

*2. Suppose that one year after the inception of the currency swap between Centralia and the Spanish MNC, the U.S. dollar fixed-rate has fallen from 8 to 6 percent and the euro zone fixed-rate for euros has fallen from 6 to 5.50 percent. In both dollars and euros, determine the market value of the swap if the exchange rate is $1.3343/€1.00.

Suggested Solution to The Centralia Corporation's Currency Swap

1. The Spanish MNC should issue €2,180,500 of 6 percent fixed-rate debt and Centralia should issue $2,900,000 of 8 percent fixed-rate debt, since each counterparty has a comparative advantage in its home market. They will exchange principal sums in one year. The contractual exchange rate for the initial exchange is $2,900,000/€2,180,500, or $1.33/€1.00. Annually the counterparties will swap debt service: the Spanish MNC will pay Centralia $232,000 (= $2,900,000 × .08) and Centralia will pay the Spanish MNC €130,830 (= €2,180,500 × .06). The contractual exchange rate of the first seven annual debt service exchanges is $232,000/€130,830, or $1.7733/€1.00. At maturity, Centralia and the Spanish MNC will re-exchange the principal sums and the final debt service payments. The contractual exchange rate of the final currency exchange is $3,132,000/€2,311,330 = ($2,900,000 + $232,000)/(€2,180,500 + €130,830), or $1.3551/€1.00.

*2. The market value of the dollar debt is the present value of a seven-year annuity of $232,000 and a lump sum of $2,900,000 discounted at 6 percent. This present value is $3,223,778. Similarly, the market value of the euro debt is the present value of a seven-year annuity of €130,830 and a lump sum of €2,180,500 discounted at 5.50 percent. This present value is €2,242,459. The dollar value of the swap is $3,223,778 − €2,242,459 × 1.3343 = $231,665. The euro value of the swap is €2,242,459 − $3,223,778/1.3343 = −€173,623.
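The swap arithmetic in questions 7 and *2 above can be cross-checked with a short script (a Python sketch; the variable names are ours, and the figures reproduce the text's numbers up to rounding):

```python
# Question 7: size the currency swap so its euro fixed leg pays €12,500,000 semiannually
euro_notional = 12_500_000 / (0.065 / 2)      # ≈ €384,615,385
pound_notional = euro_notional / 1.5          # ≈ £256,410,256 (text rounds to £256,410,257)
pound_receipt = pound_notional * 0.075 / 2    # ≈ £9,615,385 received each half-year

# Question *2: market value of the Centralia swap one year after inception
def pv_bond(coupon, principal, rate, years):
    """PV of an annual coupon annuity plus the terminal principal repayment."""
    disc = (1 + rate) ** -years
    return coupon * (1 - disc) / rate + principal * disc

usd_leg = pv_bond(232_000, 2_900_000, 0.06, 7)     # ≈ $3,223,778
eur_leg = pv_bond(130_830, 2_180_500, 0.055, 7)    # ≈ €2,242,459
fx = 1.3343                                        # dollars per euro
swap_value_usd = usd_leg - eur_leg * fx            # ≈ $231,665 (to Centralia)
swap_value_eur = eur_leg - usd_leg / fx            # ≈ -€173,623 (to the Spanish MNC)
```

The two swap values are the same quantity expressed in different currencies, so they differ only by the $1.3343/€ exchange rate and a sign flip of perspective.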

clogitL1 version 1.5 user guide


Package‘clogitL1’October12,2022Type PackageTitle Fitting Exact Conditional Logistic Regression with Lasso andElastic Net PenaltiesVersion1.5Date2019-02-01Author Stephen Reid and Robert TibshiraniMaintainer Stephen Reid<*******************>Description Tools for thefitting and cross validation of exact conditional logistic regression mod-els with lasso and elastic net es cyclic coordinate descent and warm starts to com-pute the entire path efficiently.License GPL-2Depends Rcpp(>=0.10.2)LinkingTo RcppNeedsCompilation yesRepository CRANDate/Publication2019-02-0222:33:36UTCR topics documented:clogitL1-package (2)clogitL1 (3)cv.clogitL1 (5)plot.clogitL1 (6)plot.cv.clogitL1 (8)print.clogitL1 (9)summary.clogitL1 (10)summary.cv.clogitL1 (11)Index1312clogitL1-package clogitL1-package Penalised conditional logistic regression.DescriptionTools for thefitting and cross validation of exact conditional logistic regression models with lasso and elastic net es cyclic coordinate descent and warm starts to compute the entire path efficiently.DetailsPackage:clogitL1Type:PackageVersion: 1.4Date:2013-05-06License:GPL-2Very simple to use.The mainfitting function clogitL1accepts x,y data and a strata vector in-dicating stratum membership.Itfits the exact conditional logistic regression model at a grid of regularisation parameters.Only7functions:•clogitL1•cv.clogitL1•plot.clogitL1•plot.cv.clogitL1•print.clogitL1•summary.clogitL1•summary.cv.clogitL1Author(s)Stephen Reid and Rob TibshiraniMaintainer:Stephen Reid<******************>References/v58/i12/clogitL1Conditional logistic regression with elastic net penaltiesDescriptionFit a sequence of conditional logistic regression models with lasso or elastic net penaltiesUsageclogitL1(x,y,strata,numLambda=100,minLambdaRatio=0.000001,switch=0,alpha=1)Argumentsx matrix with rows equalling the number of observations.Contains the p-vector regressor values as rowsy vector of binary responses with1for cases and0for controls.strata vector with stratum 
membership of each observation.numLambda number of different values of the regularisation parameterλat which to compute parameter estimates.Firstfit is made at value just below smallest regularisationparameter value at which all parameter estimates are0;lastfit made at this valuemultipled by minLambdaRatiominLambdaRatio ratio of smallest to larget value of regularisation parameterλat which wefind parameter estimates.switch index(between0and numLambda)at which we transition from linear to loga-rithmic jumps.alpha parameter controling trade off between lasso and ridge penalties.At value1, we have a pure lasso penalty;at0,pure ridge.Intermediate values provide amixture of the two.DetailsThe sequence of models implied by numLambda and minLambdaRatio isfit by coordinate descent with warm starts and sequential strong rules.If alpha=1,wefit using a lasso penalty.Otherwise wefit with an elastic net penalty.Note that a pure ridge penalty is never obatined,because the function sets afloor for alpha at0.000001.This improves the stability of the algorithm.A similar lower bound is set for minLambdaRatio.The sequence of models can be truncated at fewer than numLambda models if it is found that a very large proportion of training set deviance is explained by the model in question.ValueAn object of type clogitL1with the followingfields:beta(numLambda+1)-by-p matrix of estimated coefficients.First row has all0slambda vector of length numLambda+1containing the value of the regularisation pa-rameter at which we obtained thefits.nz_beta vector of length numLambda+1containing the number of nonzero parameter estimates for thefit at the corresponding regularisation parameter.ss_beta vector of length numLambda+1containing the number of predictors considered by the sequential strong rule at that iteration.dev_perc vector of length numLambda+1containing the percentage of null deviance ex-plained by the model represented by that row in the matrix.y_c reordered vector of responses.Grouped by 
stratum with cases comingfirst.X_c reordered matrix of predictors.See above.strata_c reordered stratum vector.See above.nVec vector of length the number of unique strata in strata containing the number of observations encountered in each stratum.mVec vector containing the number of cases in each stratum.alpha penalty trade off parameter.References/v58/i12/See Alsoplot.clogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noisestrata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)plot(clObj,logX=TRUE)#cross validationclcvObj=cv.clogitL1(clObj)plot(clcvObj)cv.clogitL15cv.clogitL1Cross validation of conditional logistic regression with elastic netpenaltiesDescriptionFind the best of a sequence of conditional logistic regression models with lasso or elastic net penal-ties using cross validationUsagecv.clogitL1(clObj,numFolds=10)ArgumentsclObj an object of type clogitL1on which to do cross validation.numFolds the number of folds used in cross validation.Defaults to the minimum of10or the number of observationsDetailsPerforms numFolds-fold cross validation on an object of type ing the sequence of regularisation parameters generated by clObj,the function chooses strata to leave out randomly.The penalised conditional logistic regression model isfit to the non-left-out strata in turn and its deviance compared to an out-of-sample deviance computed on the left-out strata.Fitting models to individual non-left-out strata proceeds using the cyclic coordinate descent-warm start-strong rule type algorithm used in clogitL1,only with a prespecified sequence ofλ.ValueAn object of type cv.clogitL1with the followingfields:cv_dev matrix of size numLambda-by-numFolds containing the CV deviance in each fold for each value of the regularisation parameter.lambda vector of regularisation 
parameters.folds vector showing the folds membership of each observation.mean_cv vector containing mean CV deviances for each value of the regularisation pa-rameter.se_cv vector containing an estimate of the standard error of the CV deviance at each value of the regularisation parameter.minCV_lambda value of the regularisation parameter at which we have minimum mean_cv minCV1se_lambdavalue of the regularisation parameter corresponding to the1-SE rule.Selects thesimplest model with estimate CV within1standard deviation of the minimumcv.nz_beta number of nonzero parameter estimates at each value of the regularisation pa-rameter.References/v58/i12/See AlsoclogitL1,plot.cv.clogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noisestrata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)plot(clObj,logX=TRUE)#cross validationclcvObj=cv.clogitL1(clObj)plot(clcvObj)plot.clogitL1Plotting afterfitting conditional logistic regression with elastic netpenaltiesDescriptionTakes a clogitL1object and plots the parameter profile associated with it.Usage##S3method for class clogitL1plot(x,logX=T,add.legend=F,bels=T,lty=1:ncol(x$beta),col=1:ncol(x$beta),...)Argumentsx an object of type clogitL1.logX should the horizontal axis be on log scale?add.legend set to TRUE if legend should be printed in top right hand corner.Legend will contain names of variables in data.frame,if specified,otherwise will be num-bered from1to p in order encountered in original input matrix x bels set to TRUE if labels are to be added to curves at leftmost side.If variable names are available,these are plotted,otherwise,curves are numbered from1to p inorder encountered in original input matrix xlty usual’lty’plotting parameter.col usual’col’plotting parameter....additional arguments to plot 
functionReferences/v58/i12/See AlsoclogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noisestrata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)plot(clObj,logX=TRUE)#cross validationclcvObj=cv.clogitL1(clObj)plot(clcvObj)plot.cv.clogitL1Plotting after cross validating conditional logistic regression withelastic net penaltiesDescriptionTakes a cv.clogitL1object and plots the CV deviance curve with standard error bands and minima. Usage##S3method for class cv.clogitL1plot(x,...)Argumentsx an object of type cv.clogitL1....additional arguments to plot functionReferences/v58/i12/See Alsocv.clogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noisestrata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)plot(clObj,logX=TRUE)#cross validationclcvObj=cv.clogitL1(clObj)plot(clcvObj)print.clogitL19print.clogitL1Printing afterfitting conditional logistic regression with elastic netpenaltiesDescriptionTakes a clogitL1object and prints a summary of the sequence of modelsfitted.Usage##S3method for class clogitL1print(x,digits=6,...)Argumentsx an object of type clogitL1.digits the number of significant digits after the decimal to be printed...additional arguments to print functionDetailsprints a3column data frame with columns:•Df:number of non-zero parameters in model•DevPerc:percentage of null deviance explained by current model•Lambda:associatedλvalueReferences/v58/i12/See AlsoclogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate 
datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noise10summary.clogitL1 strata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)clObjsummary.clogitL1Summary afterfitting conditional logistic regression with elastic netpenaltiesDescriptionTakes a clogitL1object and produces a summary of the sequence of modelsfitted.Usage##S3method for class clogitL1summary(object,...)Argumentsobject an object of type clogitL1....any additional arguments passed to summary methodDetailsReturns a list with a elements Coefficients,which holds the matrix of coefficients estimated(each row holding the estimates for a given value of the smoothing parameter)and Lambda,which holds the vector of smoothing parameters at whichfits were produced.References/v58/i12/See AlsoclogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noisestrata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)summary(clObj)summary.cv.clogitL1Summary after cross validation of conditional logistic regression withelastic net penaltiesDescriptionProvides summary of conditional logistic regression models after cross validationUsage##S3method for class cv.clogitL1summary(object,...)Argumentsobject an object of type cv.clogitL1for which the summary is to be produced....additional arguments to summary method.DetailsExtracts pertinent information from the supplied cv.clogitL1objects.See below for details on output value.ValueA list with the followingfields:lambda_minCV value of regularisation parameter minimising CV deviancebeta_minCV coefficient profile at the minimising value of the regularisation parameter.Whole dataset used to compute estimates.nz_beta_minCV number of non-zero coefficients in the CV deviance minimising coefficient 
pro-file.lambda_minCV1sevalue of regularisaion parameter minimising CV deviance(using1standard er-ror rule)beta_minCV1se coefficient profile at the1-standard-error-rule value of the regularisation param-eter.Whole dataset used to compute estimates.nz_beta_minCV1senumber of non-zero coefficients in the1-standard-error-rule coefficient profile.References/v58/i12/See AlsoclogitL1,plot.cv.clogitL1Examplesset.seed(145)#data parametersK=10#number of stratan=5#number in stratam=2#cases per stratump=20#predictors#generate datay=rep(c(rep(1,m),rep(0,n-m)),K)X=matrix(rnorm(K*n*p,0,1),ncol=p)#pure noise strata=sort(rep(1:K,n))par(mfrow=c(1,2))#fit the conditional logistic modelclObj=clogitL1(y=y,x=X,strata)plot(clObj,logX=TRUE)#cross validationclcvObj=cv.clogitL1(clObj)summary(clcvObj)IndexclogitL1,3,6,7,9,10,12clogitL1-package,2cv.clogitL1,5,8plot.clogitL1,4,6plot.cv.clogitL1,6,8,12print.clogitL1,9summary.clogitL1,10summary.cv.clogitL1,1113。
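The 1-standard-error rule that cv.clogitL1 and summary.cv.clogitL1 report via minCV1se_lambda ("the simplest model with estimated CV within 1 standard deviation of the minimum") can be sketched generically as follows. This is a Python illustration of the selection rule only, not the package's R code; all names are ours:

```python
def one_se_lambda(lambdas, mean_cv, se_cv):
    """1-SE rule: return the largest (most regularised) lambda whose mean CV
    deviance lies within one standard error of the minimum mean CV deviance."""
    best = min(range(len(lambdas)), key=lambda i: mean_cv[i])
    threshold = mean_cv[best] + se_cv[best]
    # among all lambdas meeting the threshold, prefer the heaviest penalty
    return max(lam for lam, m in zip(lambdas, mean_cv) if m <= threshold)
```

Because larger lambda means fewer nonzero coefficients, picking the largest qualifying lambda trades a statistically negligible loss in CV deviance for a sparser model.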

CompareTests package: correcting verification bias in diagnostic accuracy and agreement (user guide)


Package‘CompareTests’October12,2022Type PackageTitle Correct for Verification Bias in Diagnostic Accuracy&AgreementVersion1.2Date2016-2-6Author Hormuzd A.Katki and David W.EdelsteinMaintainer Hormuzd Katki<***************.gov>Description A standard test is observed on all specimens.We treat the second test(or sam-pled test)as being conducted on only a stratified sample of specimens.Verifica-tion Bias is this situation when the specimens for doing the second(sampled)test is not under in-vestigator control.We treat the total sample as stratified two-phase sampling and use in-verse probability weighting.We estimate diagnostic accuracy(category-specific classifica-tion probabilities;for binary tests reduces to specificity and sensitivity,and also predictive val-ues)and agreement statistics(percent agreement,percent agreement by category,Kappa(un-weighted),Kappa(quadratic weighted)and symmetry tests(reduces to McNemar's test for bi-nary tests)).See:Katki HA,Li Y,Edelstein DW,Castle PE.Estimating the agreement and diag-nostic accuracy of two diagnostic tests when one test is conducted on only a subsample of speci-mens.Stat Med.2012Feb28;31(5)<doi:10.1002/sim.4422>.License GPL-3LazyLoad yesURL /about/staff-directory/biographies/A-J/katki-hormuzd NeedsCompilation noRepository CRANDate/Publication2017-02-0616:37:03R topics documented:CompareTests-package (2)CompareTests (3)fulltable (6)specimens (7)Index912CompareTests-packageCompareTests-package Correct for Verification Bias in Diagnostic Accuracy&AgreementDescriptionA standard test is observed on all specimens.We treat the second test(or sampled test)as beingconducted on only a stratified sample of specimens.Verification Bias is this situation when thespecimens for doing the second(sampled)test is not under investigator control.We treat the totalsample as stratified two-phase sampling and use inverse probability weighting.We estimate diag-nostic accuracy(category-specific classification probabilities;for binary tests 
reduces to specificityand sensitivity)and agreement statistics(percent agreement,percent agreement by category,Kappa(unweighted),Kappa(quadratic weighted)and symmetry test(reduces to McNemar’s test for binarytests)).DetailsPackage:CompareTestsType:PackageVersion: 1.1Date:2015-06-19License:GPL-3LazyLoad:yesYou have a dataframe with columns"stdtest"(no NAs allowed;all specimens with NA stdtest re-sults are dropped),"sampledtest"(a gold standard which is NA for some specimens),sampling strata"strata1""strata2"(values cannot be missing for any specimens).Correct for Verification Bias in thediagnostic and agreement statistics with CompareTests(stdtest,sampledtest,interaction(strata1,strata2),goldstd="sampledtest")Author(s)Hormuzd A.Katki and David W.EdelsteinMaintainer:Hormuzd Katki<***************.gov>ReferencesKatki HA,Li Y,Edelstein DW,Castle PE.Estimating the agreement and diagnostic accuracy of twodiagnostic tests when one test is conducted on only a subsample of specimens.Stat Med.2012Feb28;31(5):10.1002/sim.4422.Examples#Get specimens datasetdata(specimens)#Get diagnostic and agreement statistics if sampledtest is the gold standardCompareTests(specimens$stdtest,specimens$sampledtest,specimens$stratum)#Get diagnostic and agreement statistics if stdtest is the gold standardCompareTests(specimens$stdtest,specimens$sampledtest,specimens$stratum,goldstd="stdtest")#Get agreement statistics if neither test is a gold standardCompareTests(specimens$stdtest,specimens$sampledtest,specimens$stratum,goldstd=FALSE) CompareTests Correct for Verification Bias in Diagnostic Accuracy&AgreementDescriptionA standard test is observed on all specimens.We treat the second test(or sampled test)as beingconducted on only a stratified sample of specimens.We treat the total sample as stratified two-phase sampling and use inverse probability weighting.We estimate diagnostic accuracy(category-specific classification probabilities;for binary tests reduces to specificity and sensitivity)and 
agree-ment statistics(percent agreement,percent agreement by category,Kappa(unweighted),Kappa (quadratic weighted)and symmetry tests(reduces to McNemar’s test for binary tests)).UsageCompareTests(stdtest,sampledtest,strata=NA,goldstd="sampledtest") Argumentsstdtest A vector of standard test results.Any NA test results are dropped from the analysis entirely.sampledtest A vector of test results observed only on a sample of specimens.Test results with NA are assumed to no be observed for that specimenstrata The sampling stratum each specimen belongs to.Set to NA if no sampling or simple random sampling.goldstd For outputing diagnostic accuracy statistics,denote if"stdtest"or"sampledtest"is the gold standard.If no gold standard,set to FALSE.ValueOutputs to screen the estimated contingency table of paired test results,agreement statistics,and diagnostic accuracy statistics.Returns a list with the following componentsCells Observed contingency tables of pair test results for each stratumEstCohort Weighted contingency table of each pair of test resultsCellvars Variance of each weighted cell countCellcovars Variance-covariance matrix for each column of weighted cell countsp0Percent agreementVarp0Variance of percent agreementAgrCat Percent agreement by each test categoryVarAgrCat Variance of Percent agreement by each test categoryuncondsymm Symmetry test test statisticMargincovars covariance of each pair of marginsKappa Kappa(unweighted)Kappavar Variance of KappaiPV Each predictive value(for binary tests,NPV and PPV)VarsiPV Variance of each predictive value(for binary tests,NPV and PPV)iCSCP Each category-specific classification probability(for binary tests,specificity and sensitivityVarsiCSCP Variance of each category-specific classification probability(for binary tests, specificity and sensitivityWeightedKappa Kappa(quadratic weights)varWeightedKappaVariance of quadratic-weighted KappaNoteOrder the categories from least to most severe,for binary(-,+)or(0,1)to make sure 
that what is output as sensitivity is not the specificity,or that PPV is not reported as NPV.If you have multiple variables to be crossed to represent the sampling strata,use interaction(),e.g.strata=interaction(strata1,strata2)Author(s)Hormuzd A.Katki and David W.EdelsteinReferencesKatki HA,Li Y,Edelstein DW,Castle PE.Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens.Stat Med.2012Feb 28;31(5):10.1002/sim.4422.Examples##----Should be DIRECTLY executable!!----##--==>Define data,use random,##--or do help(data=index)for the standard data sets.###Stat Med Paper2x2Chlamydia testing verification bias example#Note that p for symmetry test is0.12not0.02as reported in the Stat Med paper###Convert2x2Chlamydia testing table to a dataframe for analysis#Include NAs for the samples where CTDT test was not conducted(HC2was conducted on all) HC2stdtest<-c(rep(1,827),rep(0,4998))stratum<-HC2stdtestCTDTsampledtest<-c(rep(1,800),#1,1cellrep(0,27),#1,0cell HC2+,CTDT-rep(NA,827-800-27),#HC2+,and no CTDT test donerep(1,6),#0,1cell:HC2-,CTDT+rep(0,396),#0,0cell:HC2-and CTDT-rep(NA,4998-6-396)#HC2-,no CTDT test done)chlamydia<-data.frame(stratum,HC2stdtest,CTDTsampledtest)#Analysistemp<-CompareTests(chlamydia$HC2stdtest,chlamydia$CTDTsampledtest,chlamydia$stratum,goldstd="sampledtest")###Example analysis of fictitious data example##data(specimens)temp<-CompareTests(specimens$stdtest,specimens$sampledtest,specimens$stratum,goldstd="sampledtest")##The output is#The weighted contingency table:#as.factor.stdtest.#as.factor.sampledtest.1234#147.887.158 3.3220.000#220.12104.00621.861 2.682#30.0010.83697.4948.823#40.000.000 3.32274.495###Agreement Statistics##pct agree and95%CI:0.8057(0.74380.8555)#pct agree by categories and95%CI#est left right#10.61010.45010.7494#20.62410.53150.7083#30.66930.55620.7658#40.83400.63400.9358#Kappa and95%CI:0.734(0.65090.8032)#Weighted Kappa(quadratic 
weights)and95%CI:0.8767(0.71070.9536)#symmetry chi-square:9.119p=0.1676fulltable ####Diagnostic Accuracy statistics##est left right#1PV0.70410.54220.8271#2PV0.85250.73620.9229#3PV0.77380.65470.8605#4PV0.86620.69280.9490#est left right#1CSCP0.82040.60110.9327#2CSCP0.69960.61690.7710#3CSCP0.83220.72190.9046#4CSCP0.95730.56050.9975fulltable fulltable attaches margins and NA/NaN category to the output of ta-ble()Descriptionfulltable attaches margins and NA/NaN category to the output of table()Argumentssame as table()Valuesame as returned from table()Author(s)Hormuzd A.KatkiSee AlsotableExamples##The function is currently defined asfunction(...){##Purpose:Add the margins automatically and don t exclude NA/NaN as its own row/column ##and also add row/column titles.Works for mixed numeric/factor variables.##For factors,the exclude option won t include the NAs as columns,that s why ##I need to do more work.##----------------------------------------------------------------------##Arguments:Same as for table()##----------------------------------------------------------------------##Author:Hormuzd Katki,Date:5May2006,19:45#This works for purely numeric input,but not for any factors b/c exclude=NULL won t #include NAs for them.#return(addmargins(table(...,exclude=NULL)))###Factors are harder.I have to reconstruct each factor to include NA as a level###Put everything into a data framex<-data.frame(...)#For each factor(in columns),get the raw levels out,reconstruct to include NAs#That is,if there are any NAs--if none,add it as a level anywayfor(i in1:dim(x)[2]){if(is.factor(x[,i]))if(any(is.na(x[,i])))x[,i]<-factor(unclass(x[,i]),labels=c(levels(x[,i]),"NA"),exclude=NULL)elselevels(x[,i])<-c(levels(x[,i]),"NA")}#Make table with margins.Since NA is a level in each factor,they ll be included return(addmargins(table(x,exclude=NULL)))}specimens Fictitious data on specimens tested by two methodsDescriptionstdtest has been done on everyone,and sampledtest has been done on a stratifed 
subsample of275 out of402specimens(is NA on the other127specimens)Usagedata(specimens)FormatA data frame with402observations on the following3variables.stratum6strata used for samplingstdtest standard test result available on all specimenssampledtest new test result available only on stratified subsampleExamplesdata(specimens)Index∗datasetsspecimens,7∗packageCompareTests-package,2 CompareTests,3CompareTests-package,2fulltable,6specimens,79。
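The inverse-probability-weighting idea behind CompareTests' weighted contingency table (the EstCohort output) can be sketched as follows. This is a Python illustration of the weighting step under stratified two-phase sampling, not the package's R implementation; all names are ours:

```python
from collections import Counter, defaultdict

def ipw_cell_counts(pairs, strata_sizes):
    """Inverse-probability-weighted cell counts for paired test results.

    pairs: iterable of (stratum, stdtest, sampledtest); sampledtest is None
           when the second test was not done on that specimen.
    strata_sizes: dict mapping stratum -> total specimens N_s in that stratum.
    Each verified specimen in stratum s gets weight N_s / n_s, where n_s is
    the number of verified specimens in s, so weighted counts estimate the
    full-cohort contingency table."""
    n_verified = Counter(s for s, std, samp in pairs if samp is not None)
    cells = defaultdict(float)
    for s, std, samp in pairs:
        if samp is None:
            continue  # unverified specimens contribute only through the weights
        cells[(std, samp)] += strata_sizes[s] / n_verified[s]
    return dict(cells)
```

The weighted counts sum to the total cohort size, which is the sanity check that the weighting has been applied stratum by stratum.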

毕业设计——代码清单

毕业设计——代码清单

CENTRAL SOUTH UNIVERSITY
Undergraduate Graduation Project: Code Listings
Title: Design of a General-Purpose Text Algorithm Library
Student: 赵扶摇
Advisor: 余腊生
School: School of Information Science and Engineering
Major and class: Computer Science and Technology, Class 0610
Completed: June 2010

Contents
Chapter 1: Project Structure
Chapter 2: Source Header Files
  Base.h
  Queue.h
  ExactMatchingS.h
  Automata.h
  ExactMatchingM.h
  RegularExpression.h
  SuffixArray.h
  ExtendKMP.h
  DES.H
  DoubleHash.h
Chapter 3: Source Code Files
  ExactMatchingS.cpp
  ExactMatchingM.cpp
  Automata.cpp
  RegularExpression.cpp
  SuffixArray.cpp
  DES.cpp
  DoubleHash.cpp
  ExtendKMP.cpp

Chapter 1: Project Structure

The project consists of 10 header files and 8 source files, and can be compiled with Visual C++ or GCC.

The header files contain all of the function declarations, while the source files contain the concrete implementations of all the functions.
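Several of the single-pattern matchers declared below (ShiftAnd, ShiftOr, BNDM) are bit-parallel algorithms. As an illustration of the underlying technique, here is a minimal, self-contained Shift-And sketch in Python, independent of the thesis's C++ interfaces (all names are ours; assumes a non-empty pattern):

```python
def shift_and(text: str, pattern: str):
    """Bit-parallel Shift-And: return the start positions of all occurrences."""
    m = len(pattern)
    # B[c]: bitmask with bit i set iff pattern[i] == c
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    D = 0          # state word: bit i set iff pattern[0..i] matches a suffix of the text read so far
    out = []
    for pos, c in enumerate(text):
        D = ((D << 1) | 1) & B.get(c, 0)
        if D & (1 << (m - 1)):          # bit m-1 on: full pattern matched
            out.append(pos - m + 1)
    return out
```

Python integers are unbounded, so any m works here; the thesis's C++ version presumably packs the state into a machine word of MACHINE_WORD_LENGTH bits, which limits the pattern length to 32.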

The structure of the project is shown in the following figure (figure not reproduced in this extraction).

Chapter 2: Source Header Files

Base.h

//Base.h
//Header includes and basic data structures, header file
//by 赵扶摇
#include <iostream>
#include <algorithm>
#include <functional>
#include <queue>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cmath>
#include <list>
#include <vector>
#include <iostream>
#include <fstream>
using namespace std;
#define MAX_RANGE 256
#define MAX_PATTERN_LENGTH 1024
#define MAX_TEXT_LENGTH 100000
#define MAX_LOGN 20
#define FAIL -1
#define EPSILON 1
#define MAX_REP 50
#define MACHINE_WORD_LENGTH 32
#define LOG_2 0.693147 // value of ln(2)
#define log_2(a) (int)(log((double)(a)) / LOG_2)
#define newArray(type, p, size, val) {(p) = new type[(size)]; memset(p,(val),sizeof(type)*(size));}

Queue.h

//Queue.h
//Queue, header file
//by 赵扶摇
#pragma once
typedef struct {
    int curr;
    int parent;
    char label;
} QueueNode; // node type of the specialised queue
#define EnQueue(Q) (&(Q[rear++])) // obtain the node slot to record into
#define DeQueue(Q) (&(Q[front++]))
#define EmptyQueue() (rear == front)
#define InitQueue(Q, size) {Q = new QueueNode[(size)]; rear = front = 0;}

ExactMatchingS.h

//ExactMatchingS.h
//Single-pattern exact matching, header file
//by 赵扶摇
#pragma once
#include "Base.h"
// Shift-And algorithm
void ShiftAnd(char *t, char *p, int n, int m, int res[], int cnt);
// Shift-Or algorithm
void ShiftOr(char *T, char *P, int n, int m, int res[], int cnt);
// naive (brute-force) algorithm
void Naive(char *t, char *p, int n, int m, int res[], int cnt);
// Rabin-Karp algorithm
void RabinKarp(char *t, char *p, int n, int m, int res[], int cnt);
// faster version of Rabin-Karp matching
void RabinKarpFaster(char *t, char *p, int n, int m, int res[], int cnt);
// automaton construction
void Build_DFA(char *P, int m, int trans[][MAX_PATTERN_LENGTH]);
// matching with a prefix (forward) finite automaton
void PrefixFiniteAutomata(char *T, char *P, int n, int m, int res[], int cnt);
// KMP preprocessing function
void preKmp(char *P, int m, int kmpNext[]);
// KMP algorithm
void KMP(char *P, int m, char *T, int n, int res[], int cnt);
// Horspool algorithm
void Horspool(char *t, char *p, int n, int m, int res[], int cnt);
// BOM algorithm
void BOM(char *T, char *P, int n, int m, int res[], int cnt);
// BNDM algorithm
void BNDM(char *t, char *p, int n, int m, int res[], int cnt);
// Algo3 algorithm
void Algo3(char *T, char *P, int n, int m, int
res[], int cnt); Automata.h//Automata.h//用于多串匹配的自动机,头文件//by 赵扶摇#pragma once#include "Base.h"#define TERMINAL 1#define NON_TERMINAL 0#define INITIAL_STATE 0#include "base.h"//自动机的结构体typedef struct Automata {int *matrix[MAX_RANGE];int vexnum;int *F[50]; //F[curr][1...m]表示终止状态所代表那些模式串,点存储着数量int *S;bool *terminal;char *next[MAX_RANGE]; //快速获得一个节点的邻接节点,而不是做range次的扫描char *first;} Automata;//取得trans[curr, c] 的转移#define getTrans(au, curr, c) a u.matrix[c][curr]//设置trans[curr, c] 的转移为state#define setTrans(au, curr, c, state) au.matrix[c][curr] = state//判断curr 的特征是否为结束#define isTerminal(au, curr) au.terminal[curr] != NON_TERMINAL//设置curr 的特征为结束#define setTerminal(au, curr) a u.terminal[curr] = TERMINAL//创建一个新的状态,返回其节点编号#define newState(au, s) (au.terminal[au.vexnum] = NON_TERMINAL,au.vexnum++)//F集合中的数量#define sizeFSet(au, curr) (au.F[0][(curr)])//增加集合集中的字符串#define addFSet(au, curr, i) (au.F[++sizeFSet(au, curr)][(curr)] = (i)) #define getFSet(au, curr, i) au.F[i][(curr)]//查找指定转移的标号void InitAutomata(Automata &au, int lmax, int r, int init = FAIL); //初始化自动机void DestroyAutomata(Automata &au);int Trie(char *P[], int m[], int r, Automata &au, bool rev = false); //构造Trie, rev表示是否倒排//P为模式串集合,m是模式串大小集合,r为模式串数量//F(q)[]中包含q所对应的P中的字符串//返回lminvoid ClearAutomata(Automata &au, int lmax, int r); ExactMatchingM.h//ExactMatchingM.h//多串精确模式匹配,头文件//by 赵扶摇#pragma once//AhoCorasick匹配算法(完全性自动机)void Aho_Corasick_Ad(char *T, char *P[], int n, int m[], int r, int res[][2], int cnt);//SBOM匹配算法void SBOM(char *T, char *P[], int n, int m[], int r, int res[][2], int cnt); RegularExpression.h//RegularExpression.h//正则表达式匹配,头文件//by 赵扶摇#pragma once#include "Base.h"//解析树的节点结构struct PaserTreeNode {char val;PaserTreeNode *lchild, *rchild;PaserTreeNode();PaserTreeNode(char letter, PaserTreeNode* left, PaserTreeNode *right); };//是否是正则表达式字符bool isRegLetter(char letter);//解析正则表达式PaserTreeNode *Parse(char *p, int &last);//状态struct State{struct Pair {State *next;char letter;Pair (char letter, State *next);};int 
id;bool isAcc;bool mark;list<Pair> transList;State();State(bool isAcc);void SetNFATrans(char letter, State *next);void SetDFATrans(char letter, State *next);State * GetDFATrans(char letter);list<State*> GetNFATrans(char letter);};//闭包集合struct Closure {list<int> stateSet;int hashValue;bool containTerminal;Closure();void Add(State *s);void Merge(const Closure &b);bool isEqual(Closure &b);};class NFA {private:void Identify(State *s, int &last);void setMark(State *s);//Tompson自动机的构造的递归过程void TompsonRecur(PaserTreeNode *v, State * &start, State * &terminal); public:State *start;int stateNumber;NFA();NFA (char *reg);//Tompson自动机的构造void ConstructTompson(char *reg);};class DFA {private:void Identify(State *s, int &last);vector<State *> stateIndex;//tempelyvector<State *> nStatesIndex;public:State *start;int stateNumber;//构造函数,从NFA转成DFADFA(State *NFA);//自动机的最小化void Minimize();//DFA匹配过程void DFAMatcher(char *text, int res[], int cnt);};SuffixArray.h//SuffixArray.h//头文件(包括Lempel-ziv压缩)//by 赵扶摇#include "Base.h"//s为原串,SA为后缀数组void SuffixArray(char * s, int n, int * SA, int * Rank);//O(n)时间内求height数组void GetLcp(char * s, int n, int * SA, int * Rank, int * &Hgt);//RMQ问题预处理void BuildSparseTable(int * A, int ST[][MAX_LOGN], int n);//返回待查数组中最小值的编号, 如果不能保证i<j, 需要加判断int RMQ(int ST[][MAX_LOGN], int i, int j);//suffix(i), suffix(j)的最长公共前缀长度int LCP(int ST[][MAX_LOGN], int * Rank, int n, int i, int j);//两个字符串的公共最长前缀int LCP(char *a, char *b);//一个串最长重复子串(至少出现k>1次)int LongestRepeatSubstringAtLeastK(char *s, int n, int k);//在后缀数组上进行二分搜索int BinarySearchSA(int *SA, int *lcp, int *rank, int ST[][MAX_LOGN], char *t, int n, char *p, int m);//最近最小元素计算void calNearestElement(int a[], int n, int leftElements[], int rightElements[]);//LEMPEL-ZIV分解算法void LempelZiv(char *s, int n, int LZ[], int k);ExtendKMP.h//ExtendKMP.h//扩展KMP及其应用,头文件//by 赵扶摇#include "Base.h"//下标位从开始, 保证最后为\0void PreExtKMP(char *p, int m, int *lcp);//res为t[i]和p的lcpvoid ExtKMP(char *t, int n, char *p, int m, int *lcp, int 
*res);//分治法,借助扩展KMP求平方子串数量void MainLorentz(char s[], int n, int &cnt);DES.H//DES.h//DES加密,头文件//by 赵扶摇#include "Base.h"class DES {public:int (*PermutedKey)[8];int *shiftList;int (*keyCompress)[8];int (*txttRightExpand)[8];int (*initialPermutation)[8];char (*sBox)[4][16];int (*permutationFunc)[8];int (*inversePermutation)[8];//保存个key,每个key有*8=48位,但是为了扩展与压缩的方便,我们这里用位字符进行存储,数据靠前存储char keys[16][8];//存放明文,左半部分前四个字节,右半部分后四个字节char txtt[8];char key[9];//标记是加密还是解密,1是加密,0是解密int decodeEncode;//定义文件流ifstream fileIn;ofstream fileOut;DES();virtual ~DES();//产生个key,void generateKey();//对于一个分块数据的处理进行加密,并存储到文件中去void txttCeil(ofstream *, bool);void solve();private://所有进行表的替代的操作的集中函数void substitution(int, int, char *, int (*)[8], char *, int, int, int);//移位函数,用于实现对于某个多字节整体的移位,是函数generateKey的一个附属函数void shiftKeyPar(int, int, char *);//函数用于s盒压缩void sBoxCompress(char *, char *);};DoubleHash.h//DoubleHash.h//双哈希应用//by 赵扶摇#include "Base.h"struct HashString { //倍增算法计算LCPstatic const int d1 = 3, d2 = 4;static const int q1 = 990001, q2 = 999883;long long hvalue1[MAX_TEXT_LENGTH][MAX_LOGN];long long hvalue2[MAX_TEXT_LENGTH][MAX_LOGN];char s[MAX_TEXT_LENGTH];int n;int logn;HashString(char str[]);//建立哈希表void BuildHashTable();//两个子串的公共最长前缀int LCP(int i, int j) ;//不同串的两个子串的公共最长前缀int LCP(int i, HashString &t, int j);//两个子串的字母序int Compare(int i, int leni, int j, int lenj);//不同串的两个子串的字母序int Compare(int i, int leni, HashString &t, int j, int lenj); };class LongestCommonSubstring{public:int n; //字符串的总数量char str[6][1000011]; //不同的字符串int len[6]; //字符串的长度#define q1 990001#define q2 999883#define CON 2#define MAX 1000000int hash[MAX][CON]; //这里理论上应该用链表的,不然就有水的嫌疑char cnt[MAX][CON]; //这里存的是一组哈希值是在第几个字符串被发现的bool ok;void Add(int v1, int v2, int l); //把两个hash值分别为v1,v2的字符串加入哈希表bool Check(int m); //寻找m长的公共最长子串};第三章源程序代码文件ExactMatchingS.cpp//ExactMatchingS.cpp//单串精确模式匹配//by 赵扶摇#include "Base.h"//Shift-And算法void ShiftAnd(char *t, char *p, int n, int m, int res[], int cnt) { //在字符集小及m较小的情况为优register unsigned int 
B[MAX_RANGE];int i;register unsigned int mask, D;cnt = 0;if (m > 32) return;//Preprocessingmemset(B, 0, sizeof(B));for (i = mask = 1; i <= m; i++, mask <<= 1)B[p[i]] |= mask;mask >>= 1;//SearchingD = 0;for (i = 1; i <= n; i++) {D = ((D << 1) | 1) & B[t[i]];if ((D & mask) != 0) res[cnt++] = i - m + 1;}}//ShiftOr算法void ShiftOr(char *T, char *P, int n, int m, int res[], int cnt) { //在字符集小及m较小的情况为优register unsigned int B[MAX_RANGE], mask;int i;register unsigned int lim, D;cnt = 0;//Preprocessingmemset(B, ~0, sizeof(B));mask = 1; lim = 0;for (i = 1; i <= m; i++, mask <<= 1) {B[P[i]] &= ~mask;lim |= mask;}lim = ~(lim >> 1);//SearchingD = ~0;i = 1;while (i <= n) {D = (D << 1) | B[T[i++]];if (D < lim) res[cnt++] = (i - m);}}//暴力算法void Naive(char *t, char *p, int n, int m, int res[], int cnt) { int i, j, k = 0;cnt = 0;for (i = 0; i <= n - m; i++) {for (j = 1; j <= m; j++)if (p[j] != t[i+j])break;if (j == m + 1)res[cnt++] = i + 1;}}//RabinKarp算法void RabinKarp(char *t, char *p, int n, int m, int res[], int cnt) { #define HASH(t, i, T) t = (d * t + T[i]) % q#define REHASH(t, i, T) t = ((d * (t - T[i]*h) + T[i+m]) % q);int d = 26;int q = 65537;int offset = q * d;int _t = 0;int _p = 0;int h;int i, j;cnt = 0;//Preprocessingfor (i = h = 1; i < m; i++) h = (h * d) % q;for (i = 1; i <= m; i++) {HASH(_p, i, p);HASH(_t, i, t);}//Searchingi = 0;while (i <= n - m) {if (_p == _t) {for (j = 1; j <= m && p[j] == t[i+j]; j++);if (j == m + 1) res[cnt++] = i + 1;}i++;//REHASH(_t, i, t);_t = (d * (_t - t[i] * h) + t[i + m]) % q;}}//更快的RabinKarp匹配版本void RabinKarpFaster(char *t, char *p, int n, int m, int res[], int cnt) { int _t = 0;int _p = 0;int h;int i, j;cnt = 0;//Preprocessingh = 1 << (m - 1);for (i = 1; i <= m; i++) {_p = (_p << 1) + p[i];_t = (_t << 1) + t[i];}//Searchingi = 0;while (i <= n - m) {if (_p == _t) {for (j = 1; j <= m && p[j] == t[i+j]; j++);if (j == m + 1) res[cnt++] = i + 1;}i++;_t = ((_t - t[i] * h) << 1) + t[i+m];}}void Build_DFA(char *P, int m, int 
trans[][MAX_PATTERN_LENGTH]) { int S[MAX_PATTERN_LENGTH + 1];int curr, down, i;int c;for (i = 0; i < MAX_RANGE; i++) trans[i][0] = 0;S[0] = FAIL;for (curr = 1; curr <= m; curr++) {c = P[curr];trans[c][curr - 1] = curr;down = S[curr - 1];while (down != FAIL && trans[c][down] == FAIL) //回溯down = S[down];if (down != FAIL)S[curr] = trans[c][down];else S[curr] = 0;for (int i = 0; i < MAX_RANGE; i++)if (trans[i][curr] == FAIL)trans[i][curr] = trans[i][S[curr]]; //完全化}}//使用前向自动机的匹配算法void PrefixFiniteAutomata(char *T, char *P, int n, int m, int res[], int cnt) {int i, curr;cnt = 0;int trans[MAX_RANGE][MAX_PATTERN_LENGTH];memset(trans, FAIL, sizeof(trans));//PreprocessingBuild_DFA(P, m, trans);//Searchingfor (i = 1, curr = 0; i <= n; i++) {curr = trans[T[i]][curr];if (curr == m)res[cnt++] = i - m + 1;}}void preKmp(char *P, int m, int kmpNext[]) {int i, j;i = 0;j = kmpNext[0] = -1;while (i < m) {while (j > -1 && P[i] != P[j])j = kmpNext[j];i++;j++;if (P[i] == P[j])kmpNext[i] = kmpNext[j];elsekmpNext[i] = j;}}//KMP算法void KMP(char *P, int m, char *T, int n, int res[], int cnt) { int i, j, kmpNext[MAX_PATTERN_LENGTH + 1];cnt = 0;/* Preprocessing */preKmp(P, m, kmpNext);/* Searching */i = j = 0;while (j < n) {while (i > -1 && P[i] != T[j])i = kmpNext[i];i++;j++;if (i >= m) {res[cnt++] = j - i;i = kmpNext[i];}}}//Horspool算法void Horspool(char *t, char *p, int n, int m, int res[], int cnt) { //在字符集小及m较小的情况为优int d[MAX_RANGE+1];int i, pos;cnt = 0;//Preprocessingfor (i = 0; i <= MAX_RANGE; i++)d[i] = m;for (i = 1; i <= m - 1; i++)d[p[i]] = m - i;//Searchingpos = 0;while (pos <= n - m) {i = m;while (i > 0 && t[pos+i] == p[i])i--;if (i == 0)res[cnt++] = pos + 1;pos += d[t[pos+m]];}}void Oracle_on_line(char *p, int m, int trans[][MAX_PATTERN_LENGTH]) { int S[MAX_PATTERN_LENGTH + 1];int k, s, j;int c;S[0] = FAIL;for (j = 0; j < m; j++) {c = p[m - j];trans[c][j] = j + 1;k = S[j];while (k != FAIL && trans[c][k] == FAIL) {trans[c][k] = j + 1;k = S[k];}if (k == FAIL) s = 0;else s = 
trans[c][k];S[j + 1] = s;}}//BOM算法void BOM(char *T, char *P, int n, int m, int res[], int cnt) { int i, pos, current;cnt = 0;int trans[MAX_RANGE][MAX_PATTERN_LENGTH];memset(trans, FAIL, sizeof(trans));//PreprocessingOracle_on_line(P, m, trans);//Searchingpos = 0;while (pos <= n - m) {current = 0;i = m;while (i > 0 && current != FAIL) {current = trans[T[pos + i]][current];i--;}if (current != FAIL)res[cnt++] = pos + 1;pos += i + 1;}}//BNDM算法void BNDM(char *t, char *p, int n, int m, int res[], int cnt) { //在字符集小及m较小的情况为优register unsigned int B[MAX_RANGE+1];int last, pos, i, temp;register unsigned int D;register int mask = 1 << (m - 1);int count = 0;if (m > 32) return;cnt = 0;//Preprocessingmemset(B, 0, sizeof(B));temp = 1;for (i = m; i > 0; i--) {B[p[i]] |= temp;temp <<= 1;}//Searchingpos = 0;while (pos <= n - m) {i = last = m;D = ~0;while (D) {D &= B[t[pos+i]];i--;if (D & mask) {if (i > 0) last = i;else res[cnt] = pos + 1;}D <<= 1;}pos += last;}}void algo1(char *T, int n, char *P, int m, int res[], int cnt) { int i, j;register unsigned int p = 0;register unsigned int t = 0;cnt = 0;register int beta = MACHINE_WORD_LENGTH / m;bool flag = MAX_RANGE < (1 << beta);//the mask to get the bit of a characterregister unsigned int get = (1 << beta) - 1;//the mask to eliminate the bits shifted higherregister unsigned int remove = ((long long)1 << (m * beta)) - 1;for (i = 0; i < m; i++) {p = (p << beta) | (P[i] & get);t = (t << beta) | (T[i] & get);}i = m;while (i <= n) {if (p == t) {if (flag) {res[cnt++] = i;} else {for (j = 0; j < m && P[j] == T[i + j - m]; j++);if (j == m) res[cnt++] = i;}}t = ((t << beta) & remove) | (T[i++] & get);}}//use the macro the define functions in algo2 keeps the 'beta' and 'get' to be implemented as constants#define algo2_macro(name, beta, get) \void name (char *T, int n, char *P, int m, int mm, int res[], int cnt) { \ int i, j, nn; \register unsigned int p = 0 , t = 0; \for (i = 0; i < mm; i++) { \p = (p << beta) | (P[i] & get); \t = (t << beta) | 
(T[i] & get); \} \nn = n - (m - mm); \i = mm; \while (i <= nn) { \if (p == t) { \for (j = 0; j < m && P[j] == T[i + j - mm]; j++); \if (j == m) res[cnt++]=(i - mm); \} \t = (t << beta) | (T[i++] & get); \} \} \//define functions used in algo2algo2_macro(algo2_b1, 1, 1)algo2_macro(algo2_b2, 2, 3)algo2_macro(algo2_b4, 4, 15)algo2_macro(algo2_b8, 8, 127)algo2_macro(algo2_b16, 16, 127)void algo2(char *T, int n, char *P, int m, int res[], int cnt) {int mm = 1 << log_2(m);int beta = MACHINE_WORD_LENGTH / mm;if (beta == 1) algo2_b1(T, n, P, m, mm, res, cnt);else if (beta == 2) algo2_b2(T, n, P, m, mm, res, cnt);else if (beta == 4) algo2_b4(T, n, P, m, mm, res, cnt);else if (beta == 8) algo2_b8(T, n, P, m, mm, res, cnt);else if (beta == 16) algo2_b16(T, n, P, m, mm, res, cnt);}//algo3算法void Algo3(char *T, char *P, int n, int m, int res[], int cnt) { T++; P++;cnt = 0;if ((MAX_RANGE == 2 && m < 8) || (MAX_RANGE < 7 && m < 4)) algo1(T, n, P, m, res, cnt);else algo2(T, n, P, m, res, cnt);}ExactMatchingM.cpp//ExactMatchingM.cpp//多串精确模式匹配//by 赵扶摇#include "Base.h"#include "ExactMatchingM.h"#include "Queue.h"#include "Automata.h"int Build_Oracle_Multiple(char *P[], int m[], int r, Automata &OR) { int curr, par, down, i;char c;int lmin = Trie(P, m, r, OR, true); //这个时间是很短的OR.S[0] = FAIL;int front, rear; //队列最大长度不超过QueueNode *queue, *state;front = rear = 0;InitQueue(queue, r*m[r] + 1);//以下主要求S这个供给链函数//先把点入队列state = EnQueue(queue);state->curr = 0; state->parent = 0;while (!EmptyQueue()) {state = DeQueue(queue); //获取当前状态curr = state->curr;par = state->parent;c = state->label;down = OR.S[par];while (down != FAIL && getTrans(OR, down, c) == FAIL) { //回溯//几乎就这与AC自动机不同setTrans(OR, down, c, curr);down = OR.S[down];}if (down != FAIL) //另一个区别是这没有奇怪的语句OR.S[curr] = getTrans(OR, down, c);else if (curr > 0) OR.S[curr] = 0;//重整队列for (i = OR.first[curr]; i != FAIL; i = OR.next[i][curr]) {state = EnQueue(queue);state->curr = getTrans(OR, curr, i); state->parent = curr;state->label = i;}}//delete 
queue;return lmin;}void SBOM(char *T, char *P[], int n, int m[], int r, int res[][2], int cnt) {int i, j, k, index, pos, curr, lmax = 0;Automata OR;for (i = 0; i < r; i++) lmax = max(lmax, m[i]);InitAutomata(OR, m[i], r);int lmin = Build_Oracle_Multiple(P, m, r, OR);cnt = 0;pos = 0;while (pos <= n - lmin) {curr = 0;i = lmin;while (curr != FAIL && i >= 1) {curr = getTrans(OR, curr, T[pos+i]);i--;}if (curr != FAIL) {for (j = 1; j <= sizeFSet(OR, curr); j++) {index = getFSet(OR, curr, j); //获取匹配的模式串编号for (k = 1; k <= m[index]; k++)if (P[index][k] != T[pos+k]) break;if (k > m[index]) {res[cnt][0] = index;res[cnt][1] = pos + 1;}}}pos += (i + 1);}}void Build_AC_Ad(char *P[], int m[], int r, Automata &AC) { int curr, par, down;char c;Trie(P, m, r, AC);AC.S[0] = FAIL;int front, rear; //队列最大长度不超过PATTERN_LENGTHQueueNode *queue, *state;front = rear = 0;InitQueue(queue, r*m[r] + 1);//以下主要求S这个供给链函数//先把点入队列state = EnQueue(queue);state->curr = 0; state->parent = 0;while (!EmptyQueue()) {state = DeQueue(queue); //获取当前状态curr = state->curr; par = state->parent; c = state->label; down = AC.S[par];while (down != FAIL && getTrans(AC, down, c) == FAIL) //回溯down = AC.S[down];if (down != FAIL) {AC.S[curr] = getTrans(AC, down, c); //我认为古怪的事情终究还是发生啦,哈哈哈if (isTerminal(AC, AC.S[curr])) {setTerminal(AC, curr);for (int i = 1; i <= sizeFSet(AC, AC.S[curr]); i++) //将S[curr]的F并入addFSet(AC, curr, getFSet(AC, AC.S[curr], i));}} else if (curr > 0) AC.S[curr] = 0;//重整队列for (int i = 0; i < MAX_RANGE; i++) {if (getTrans(AC, curr, i) != FAIL) {state = EnQueue(queue);state->curr = getTrans(AC, curr, i); state->parent = curr;state->label = i;} else if (curr > 0) setTrans(AC, curr, i, getTrans(AC,AC.S[curr], i)); //完全化else setTrans(AC, 0, i, 0);}//队列重整完毕}//delete queue;}void Aho_Corasick_Ad(char *T, char *P[], int n, int m[], int r, int res[][2], int cnt) {//完全的自动机int curr, pos, i, j;Automata AC;cnt = 0;InitAutomata(AC, m[r], r);//ProprocessingBuild_AC_Ad(P, m, r, AC);//Searchingcurr = 0;for (pos = 1; pos 
<= n; pos++) {curr = getTrans(AC, curr, T[pos]);if (isTerminal(AC, curr))for (i = 1; i <= sizeFSet(AC, curr); i++) {j = getFSet(AC, curr, i); //获取匹配的模式串编号res[cnt][0] = j;res[cnt++][1] = pos - m[j] + 1;}}}Automata.cpp#include "Automata.h"Automata au; //不考虑内存分配时间构造int *matrix;char *next;int *F;void InitAutomata(Automata &au, int lmax, int r, int init) { //init为各状态初始的值,默认为FAILau.S = new int[r*lmax+1];au.terminal = new bool[r*lmax+1];newArray(char, au.first, r*lmax + 1, FAIL);newArray(int, matrix, (r*lmax + 1)*MAX_RANGE, FAIL);newArray(char, next, (r*lmax + 1)*MAX_RANGE, FAIL);newArray(int, F, (r*lmax + 1)*MAX_REP, 0);}void DestroyAutomata(Automata &au) {delete matrix;delete next;delete F;delete au.first;delete au.S;delete au.terminal;}int Trie_SimpleRev(char *P[], int m[], int r, Automata &au) { //P为模式串集合,m是模式串大小集合,r为模式串数量//F(q)[]中包含q所对应的P中的字符串,这里认为不可能用两个相同的串int i, j, curr, state, trans;char *last;int lmin = 65536;newArray(char, last, r*m[r] + 1, FAIL);newState(au, NON_TERMINAL);//Create an initial non terminal state 0 for (i = 1; i <= r; i++) {lmin = min(lmin, m[i]);j = m[i];curr = 0;while (j >= 1 && (trans = getTrans(au, curr, P[i][j])) != FAIL) { curr = trans;j--;} //略过公共的部分while (j >= 1) {state = newState(au, NON_TERMINAL); //state为转移到的下一个状态setTrans(au, curr, P[i][j], state);//如果该状态还无任何转移if (au.first[curr] == FAIL) au.first[curr] = P[i][j];else au.next[last[curr]][curr] = P[i][j];last[curr] = P[i][j]; //为curr最新创建的转移curr = state;j--;} //添加新的转移if (!isTerminal(au, curr))setTerminal(au, curr);addFSet(au, curr, i);}delete last;return lmin;}int Trie(char *P[], int m[], int r, Automata &au, bool rev) { if (rev)return Trie_SimpleRev(P, m, r, au);//P为模式串集合,m是模式串大小集合,r为模式串数量//F(q)[]中包含q所对应的P中的字符串,这里认为不可能用两个相同的串int i, j, curr, state, trans;char *last;newArray(char, last, r*m[r] + 1, FAIL);newState(au, NON_TERMINAL);//Create an initial non terminal state 0for (i = 1; i <= r; i++) {curr = 0;j = 1;while (j <= m[i] && (trans = getTrans(au, curr, P[i][j])) != FAIL) { curr = 
trans;j++;} //略过公共的部分while (j <= m[i]) {state = newState(au, NON_TERMINAL); //state为转移到的下一个状态setTrans(au, curr, P[i][j], state);//如果该状态还无任何转移if (au.first[curr] == FAIL) au.first[curr] = P[i][j];else au.next[last[curr]][curr] = P[i][j];last[curr] = P[i][j]; //为curr最新创建的转移curr = state;j++;} //添加新的转移if (!isTerminal(au, curr))setTerminal(au, curr);addFSet(au, curr, i);}delete last;return 0;}RegularExpression.cpp//RegularExpression.cpp//正则表达式匹配//by 赵扶摇#include "RegularExpression.h"PaserTreeNode::PaserTreeNode() {val = -1;lchild = rchild = NULL;}PaserTreeNode::PaserTreeNode(char letter, PaserTreeNode* left, PaserTreeNode *right) {val = letter;lchild = left;rchild = right;}bool isRegLetter(char letter) {return isdigit(letter) || isalpha(letter);}//解析正则表达式PaserTreeNode *Parse(char *p, int &last) {PaserTreeNode *v = NULL, *vr;while (p[last] != '\0') {if (isRegLetter(p[last]) || p[last] == EPSILON) {vr = new PaserTreeNode(p[last], NULL, NULL);if (v != NULL) v = new PaserTreeNode('.', v, vr);else v = vr;last++;} else if (p[last] == '|') {last++;vr = Parse(p, last);v = new PaserTreeNode('|', v, vr);} else if (p[last] == '*') {v = new PaserTreeNode('*', v, NULL);last++;} else if (p[last] == '(') {last++;vr = Parse(p, last);last++; //skip ')'if (v != NULL) v = new PaserTreeNode('.', v, vr);else v = vr;} else if (p[last] == ')')break;}return v;}State::Pair::Pair (char letter, State *next) {this->letter = letter;this->next = next;}State::State() {mark = isAcc = false;id = -1;}State::State(bool isAcc) {mark = false;this->isAcc = isAcc;id = -1;}void State::SetNFATrans(char letter, State *next) {transList.push_back(Pair(letter, next));}void State::SetDFATrans(char letter, State *next) {for (list<Pair>::iterator it = transList.begin(); it != transList.end(); it++) {if (it->letter == letter) {it->next = next;return;}}transList.push_back(Pair(letter, next));}State * State::GetDFATrans(char letter) {for (list<Pair>::iterator it = transList.begin(); it != transList.end(); it++) {if 
(it->letter == letter)return it->next;}return NULL;}list<State*> State::GetNFATrans(char letter) {list<State*> res;for (list<Pair>::iterator it = transList.begin(); it != transList.end(); it++) {if (it->letter == letter)res.push_back(it->next);}return res;}Closure::Closure() {hashValue = 0;containTerminal = false;}void Closure::Add(State *s) {stateSet.push_back(s->id);containTerminal |= s->isAcc;hashValue += s->id;}void Closure::Merge(const Closure &b) {stateSet.insert(stateSet.end(), b.stateSet.begin(), b.stateSet.end());containTerminal |= b.containTerminal;hashValue += b.hashValue;}bool Closure::isEqual(Closure &b) {if (stateSet.size() != b.stateSet.size() || hashValue != b.hashValue) return false;stateSet.sort();b.stateSet.sort();for(list<int>::iterator it = stateSet.begin(), itb = b.stateSet.begin(); it != stateSet.end(); it ++, itb++)if (*it != *itb) return false;return true;}void NFA::Identify(State *s, int &last) {if (s->mark == true) return;s->id = last++;s->mark = true;for (list<State::Pair>::iterator it = s->transList.begin(); it !=s->transList.end(); it++) {Identify(it->next, last);}}void NFA::setMark(State *s) {if (s->mark == false) return;s->mark = false;for (list<State::Pair>::iterator it = s->transList.begin(); it !=s->transList.end(); it++) {setMark(it->next);。
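For readers following the C++ above, the bit-parallel Shift-And routine can be sketched in Python as follows (0-indexed, and relying on Python's unbounded integers instead of a 32-bit machine word; a simplified illustration, not a drop-in replacement for the C++ version):

```python
def shift_and(text: str, pattern: str):
    """Bit-parallel Shift-And matcher: returns 0-based start positions.

    Mirrors the C++ ShiftAnd above: B[c] has bit i set iff pattern[i] == c,
    and state D tracks which pattern prefixes currently match.
    """
    m = len(pattern)
    if m == 0:
        return []
    B = {}
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    mask = 1 << (m - 1)      # bit marking a full-pattern match
    D, out = 0, []
    for j, c in enumerate(text):
        D = ((D << 1) | 1) & B.get(c, 0)
        if D & mask:
            out.append(j - m + 1)
    return out

print(shift_and("abracadabra", "abra"))  # [0, 7]
```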

Application of the Karush-Kuhn-Tucker Conditions for Judging Constrained Extreme Points

Li Chunming
College of Mechanical and Electronic Engineering, China University of Petroleum (East China), Dongying, Shandong (257061)
E-mail: Lchming@ , mingming000111@
Abstract: The Karush-Kuhn-Tucker (KKT) conditions, as the criterion for judging whether an optimal point is a constrained extreme point, are very important in optimization algorithms. This paper proposes methods for applying the conditions in several situations encountered in numerical algorithms and gives program flow charts. Before applying the conditions, redundant constraints must be removed from the active constraints. When the number of active constraints exceeds the problem dimension, all basic gradient groups must be checked; if the Lagrange multipliers of any one group are all non-negative, the point under examination satisfies the KKT conditions. When the number of active constraints is less than the dimension, a subset of the equations is used to solve for the Lagrange multipliers and the remaining equations are used for verification. The computation steps of the method are illustrated with examples.
Keywords: optimization method; KKT conditions; redundant constraints; constrained extreme point
CLC number: O224, TH122
$$\min f(x) \quad \text{s.t.}\quad g_u(x) \ge 0,\ u = 1,\dots,m;\qquad h_v(x) = 0,\ v = 1,\dots,l \tag{1}$$
The KKT conditions for judging whether a boundary point x* is a constrained extreme point are:
$$\nabla f(x^*) - \sum_{u=1}^{m} \lambda_u^* \nabla g_u(x^*) - \sum_{v=1}^{l} \mu_v^* \nabla h_v(x^*) = 0 \tag{2}$$

$$\lambda_u^* g_u(x^*) = 0 \tag{3}$$

$$\lambda_u^* \ge 0;\qquad h_v(x^*) = 0$$
where λ_u^* and µ_v^* are the Lagrange multipliers; λ_u^*, which corresponds to the inequality constraints, must be no less than 0, and µ_v^* corresponds to the equality constraints.
Tab. 1 Lagrangian multipliers of each gradient group

Basic gradient group                          λ1    λ2    λ3
Z1 = {∇g1(x*), ∇g2(x*), ∇g3(x*)}               3    31    −1
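The multiplier check described above can be sketched numerically: for a toy problem (hypothetical, not from the paper), solve the gradient equation (2) for the multipliers by least squares, then test that the system is consistent and the multipliers are non-negative:

```python
import numpy as np

# Hypothetical toy problem: min f(x) = x1^2 + x2^2
# subject to g(x) = x1 + x2 - 1 >= 0, active at x* = (0.5, 0.5).
def grad_f(x):
    return 2 * x

def grad_g(x):
    return np.array([1.0, 1.0])

def kkt_check(x, active_grads, tol=1e-9):
    """Solve grad_f(x) = sum_u lambda_u * grad_g_u(x) in the least-squares
    sense; the point passes if the system is consistent and all multipliers
    are non-negative (conditions (2)-(3), no equality constraints here)."""
    G = np.column_stack(active_grads)          # columns = active gradients
    lam, *_ = np.linalg.lstsq(G, grad_f(x), rcond=None)
    residual = np.linalg.norm(G @ lam - grad_f(x))
    return bool(residual < tol and np.all(lam >= -tol)), lam

x_star = np.array([0.5, 0.5])
ok, lam = kkt_check(x_star, [grad_g(x_star)])
print(ok, lam)  # True [1.]
```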

A 3-D Perfectly Matched Medium from Modified Maxwell's Equations with Stretched Coordinates

(SIMD) massively parallel computer. The reason is that the stencil operations that must be computed at each node of the space grid involve only nearest-neighbor interactions and may be implemented at a minimum communication cost [4]. A major challenge, however, is in implementing absorbing boundary conditions (ABCs) at the edges of the FDTD grid. On scalar and vector computers, these boundary conditions are typically computed using methods such as the Engquist-Majda [5], Mur [6], Liao [7] or Higdon [8] ABC. However, these methods are not ideal for parallel supercomputers since they all involve communication with many elements normal to the grid boundary. Such communication can easily surpass the time spent computing core FDTD operations in the grid interior, especially for higher-order boundary conditions, and hence can become a bottleneck in the FDTD code. Also, they do not allow for SIMD operation on a parallel machine without the use of masking.
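The nearest-neighbour character of the core FDTD stencil can be illustrated with a minimal 1-D sketch (normalised units and a unit Courant number are assumptions for illustration; the paper's 3-D scheme is analogous but larger). Each update reads only adjacent nodes, which is why the interior update parallelises with minimal communication:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) leapfrog sketch: E and H live on staggered grids,
# and every update touches only nearest neighbours.
n, steps = 200, 100
ez = np.zeros(n)        # electric field at integer nodes
hy = np.zeros(n - 1)    # magnetic field at half-integer nodes

for t in range(steps):
    hy += np.diff(ez)                               # uses ez[i], ez[i+1]
    ez[1:-1] += np.diff(hy)                         # uses hy[i-1], hy[i]
    ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source

print(float(np.max(np.abs(ez))) > 0.0)  # True
```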

Exact McNemar's Te...


p_c = min{1, 2 · min(F(x), F̄(x))}
For either design, we can estimate the odds ratio by b/c, which is the maximum likelihood estimate (see Breslow and Day, 1980, p. 165).
Consider some hypothetical data (chosen to highlight some points):
              Control
Test          Fail   Pass
  Fail          21      9
  Pass           2     12

When we perform McNemar's test with the continuity correction we get

> x <- matrix(c(21, 9, 2, 12), 2, 2)
> mcnemar.test(x)

        McNemar's Chi-squared test

data:  x
McNemar's chi-squared = 4.4545, df = 1, p-value = 0.03481

Since the inferences change so much, and are on either side of the traditional 0.05 cutoff of significance, it would be nice to have an exact version of the test to be clearer about significance at the 0.05 level. We study that in the next section.
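Following the p_c formula above, the exact version of McNemar's test reduces to a two-sided binomial test on the discordant pairs (b = 9 and c = 2 in the hypothetical table). A sketch in Python using scipy.stats.binomtest (an assumption for illustration, since the document's own code is in R):

```python
from scipy.stats import binomtest

# Discordant pairs from the hypothetical data: b = 9, c = 2.
b, c = 9, 2

# Under H0 the b discordant pairs out of b + c follow Binomial(b + c, 1/2),
# so the exact p-value is a two-sided binomial test, mirroring p_c above.
result = binomtest(b, n=b + c, p=0.5)
print(round(result.pvalue, 4))  # 0.0654
```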

scipy coo_matrix condition number


Scipy is a powerful library in Python for scientific and technical computing. It provides a variety of optimization, interpolation, integration, linear algebra, and numerical routines, making it a popular choice for data scientists and engineers. Among its many functionalities, Scipy offers the coo_matrix class, which allows for efficient sparse matrix computations. In this article, we will delve into the concept of condition numbers and discuss how it applies to the coo_matrix in Scipy.

Before we dive into understanding the condition number of a matrix, let's first briefly understand what a sparse matrix is. A sparse matrix is a matrix that contains a significant number of zero elements, compared to the total number of elements in the matrix. Sparse matrices are commonly encountered in practical applications where data is inherently sparse, such as network connectivity, text analysis, and image processing.

The coo_matrix class in Scipy represents a sparse matrix using three arrays: `row`, `col`, and `data`. The `row` array contains the row indices of the non-zero elements, the `col` array contains the column indices of the non-zero elements, and the `data` array stores the corresponding non-zero values. This data structure makes it efficient to perform operations on sparse matrices, as only the non-zero elements need to be stored and processed.

Now, let's move on to the concept of the condition number of a matrix. The condition number is used to measure the sensitivity of a linear system to changes in the input. In the context of matrices, the condition number provides information about how much the output of a matrix equation can vary due to small changes in the input. For example, consider the equation Ax = b, where A is an invertible matrix and x and b are vectors.
The condition number of a matrix A, denoted cond(A), can be defined as the maximum ratio of the relative change in the output vector to the relative change in the input vector, over all possible input vectors. In mathematical terms, cond(A) = ‖A‖ · ‖A⁻¹‖, where ‖A‖ is the norm of the matrix A and ‖A⁻¹‖ is the norm of its inverse.

The condition number of a matrix can provide useful insights into the stability and accuracy of numerical algorithms. In particular, matrices with high condition numbers are said to be ill-conditioned, meaning that small changes in the input can lead to large changes in the output. On the other hand, matrices with low condition numbers are said to be well-conditioned, indicating that the output is relatively insensitive to small input perturbations.

Now, let's examine how the concept of condition numbers applies to the coo_matrix in Scipy. Since the coo_matrix class is primarily designed for sparse matrices, it is important to note that the condition number is not directly available as a method or attribute of this class. However, we can still compute the condition number indirectly by converting a coo_matrix to a dense matrix representation.

To compute the condition number of a matrix, we need to compute the norm of the matrix and the norm of its inverse. In Scipy, we can use the `norm` function from the `scipy.linalg` module to compute the norm of a matrix. Specifically, we can compute the 2-norm of a matrix using the `norm` function with the parameter `ord=2`.
Similarly, we can compute the inverse of a matrix using the `inv` function from the `scipy.linalg` module. Here's an example to illustrate how to compute the condition number of a coo_matrix in Scipy:

    import numpy as np
    from scipy.linalg import norm, inv
    from scipy.sparse import coo_matrix

    # Create a random coo_matrix
    sparse_matrix = coo_matrix(np.random.random((100, 100)))

    # Convert the coo_matrix to a dense matrix
    dense_matrix = sparse_matrix.toarray()

    # Compute the condition number
    condition_number = norm(dense_matrix) * norm(inv(dense_matrix))

    print("Condition number:", condition_number)

In this example, we first create a random coo_matrix of size 100x100. Then, we convert the coo_matrix to a dense matrix representation using the `toarray` method. Finally, we compute the condition number of the dense matrix using the `norm` function and the `inv` function.

It is important to note that computing the condition number of a matrix can be computationally expensive, especially for large matrices. In such cases, it may be more practical to use approximate methods or compute an estimate of the condition number instead.

In conclusion, the coo_matrix class in Scipy provides efficient sparse matrix computations, but it does not directly offer a method or attribute to compute the condition number. However, we can still compute the condition number indirectly by converting the coo_matrix to a dense matrix representation and using appropriate functions from the scipy.linalg module. Understanding the condition number can help us assess the stability and accuracy of numerical algorithms when working with sparse matrices in Scipy.
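As a cross-check, numpy exposes `np.linalg.cond`, which computes the same 2-norm condition number from the singular values without forming the inverse explicitly. A minimal sketch (the matrix here is a hypothetical example, chosen so the answer is easy to verify by hand):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical diagonal example: for diag(4, 2) the 2-norm condition
# number is max singular value / min singular value = 4 / 2 = 2.
dense = coo_matrix(np.array([[4.0, 0.0], [0.0, 2.0]])).toarray()

print(np.linalg.cond(dense))  # 2.0
```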

Power-size correspondence table for chip (SMD) resistors and capacitors


Contents: 1. Chip resistors; 2. Chip capacitors; 3. Chip tantalum capacitors

1. Chip resistors (2007-12-18 16:41)

Relationship between resistor package size and power rating, typically:
0201  1/20 W
0402  1/16 W
0603  1/10 W
0805  1/8 W
1206  1/4 W

Correspondence between capacitor/resistor outline dimensions and package codes (mm):
0402 = 1.0 x 0.5
0603 = 1.6 x 0.8
0805 = 2.0 x 1.2
1206 = 3.2 x 1.6
1210 = 3.2 x 2.5
1812 = 4.5 x 3.2
2225 = 5.6 x 6.5

Standard packages and rated power of common chip resistors:

Imperial (mil)   Metric (mm)   Rated power (W) @ 70 °C
0201             0603          1/20
0402             1005          1/16
0603             1608          1/10
0805             2012          1/8
1206             3216          1/4
1210             3225          1/3
1812             4832          1/2
2010             5025          3/4
2512             6432          1

Domestic naming convention for chip resistors:
1. 5% tolerance: RS-05K102JT
2. 1% tolerance: RS-05K1002FT
R denotes resistor and S denotes the power code; 0402 is 1/16 W, 0603 is 1/10 W, 0805 is 1/8 W, 1206 is 1/4 W, 1210 is 1/3 W, 1812 is 1/2 W, 2010 is 3/4 W, 2512 is 1 W.

The nabor package manual


Package ‘nabor’

October 13, 2022

Type: Package
Title: Wraps 'libnabo', a Fast K Nearest Neighbour Library for Low Dimensions
Version: 0.5.0
Author: Stephane Magnenat (for 'libnabo'), Gregory Jefferis
Maintainer: Gregory Jefferis <******************>
Description: An R wrapper for 'libnabo', an exact or approximate k nearest neighbour library which is optimised for low dimensional spaces (e.g. 3D). 'libnabo' has speed and space advantages over the 'ANN' library wrapped by package 'RANN'. 'nabor' includes a knn function that is designed as a drop-in replacement for 'RANN' function nn2. In addition, objects which include the k-d tree search structure can be returned to speed up repeated queries of the same set of target points.
License: BSD_3_clause + file LICENSE
Copyright: libnabo is copyright 2010--2011, Stephane Magnenat, ASL, ETHZ, Switzerland <stephane at magnenat dot net>
URL: https://github.com/jefferis/nabor, https://github.com/ethz-asl/libnabo
BugReports: https://github.com/jefferis/nabor/issues
Depends: R (>= 3.0.2)
Imports: Rcpp (>= 0.11.2), methods
LinkingTo: Rcpp, RcppEigen (>= 0.3.2.2.0), BH (>= 1.54.0-4)
Suggests: testthat, RANN
RoxygenNote: 6.0.1
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2018-07-11 16:00:02 UTC

R topics documented: nabor-package, kcpoints, knn, WKNNF-class

nabor-package        Wrapper for libnabo K Nearest Neighbours C++ library

Description
R package nabor wraps the libnabo library, a fast K Nearest Neighbour library for low-dimensional spaces written in templated C++. The package provides both a standalone function (see knn) for basic queries along with an option to produce an object containing the k-d tree search structure (see WKNN) when making multiple queries against the same target points.

Details
libnabo uses the same approach as the ANN library (wrapped in R package RANN) but is generally faster and with a smaller memory footprint. Furthermore, since it is templated on the underlying scalar type for coordinates (among other things), we have provided both float and double coordinate implementations of the classes wrapping the search tree structures. See the github repository and the Elseberg et al. paper below for details.

References
Elseberg J, Magnenat S, Siegwart R and Nuechter A (2012). "Comparison of nearest-neighbor-search strategies and implementations for efficient shape registration." Journal of Software Engineering for Robotics (JOSER), 3(1), pp. 2-12. ISSN 2035-3928.

See Also
knn, WKNN

kcpoints             List of 3 matrices containing 3D points from Drosophila neurons

Description
This R list contains 3 skeletonized Drosophila Kenyon cells as dotprops objects. Original data is due to Chiang et al. 2011, who have generously shared their raw data at http://flycircuit.tw. Image registration and further processing was carried out by Greg Jefferis.

References
[1] Chiang A.S., Lin C.Y., Chuang C.C., Chang H.M., Hsieh C.H., Yeh C.W., Shih C.T., Wu J.J., Wang G.T., Chen Y.C., Wu C.C., Chen G.Y., Ching Y.T., Lee P.C., Lin C.Y., Lin H.H., Wu C.C., Hsu H.W., Huang Y.A., Chen J.Y., et al. (2011). Three-dimensional reconstruction of brain-wide wiring networks in Drosophila at single-cell resolution. Curr Biol 21(1), 1-11.

knn                  Find K nearest neighbours for multiple query points

Description
Find K nearest neighbours for multiple query points

Usage
knn(data, query = data, k, eps = 0, searchtype = 1L, radius = 0)

Arguments
data        Mxd matrix of M target points with dimension d
query       Nxd matrix of N query points with dimension d (nb data and query must have same dimension). If missing, defaults to data, i.e. a self-query.
k           an integer number of nearest neighbours to find
eps         An approximate error bound. The default of 0 implies exact matching.
searchtype  A character vector or integer indicating the search type. The default value of 1L is equivalent to "auto". See details.
radius      Maximum radius search bound. The default of 0 implies no radius bound.

Details
If searchtype = "auto", the default, knn uses a k-d tree with a linear heap when k < 30 nearest neighbours are requested (equivalent to searchtype = "kd_linear_heap"), and a k-d tree with a tree heap otherwise (equivalent to searchtype = "kd_tree_heap"). searchtype = "brute" checks all point combinations and is intended for validation only.
Integer values of searchtype should be the 1-indexed position in the vector c("auto", "brute", "kd_linear_heap", "kd_tree_heap"), i.e. a value between 1L and 4L.
The underlying libnabo does not have a signalling value to identify indices for invalid query points (e.g. those containing an NA). In this situation, the index returned by libnabo will be 0 and knn will therefore return an index of 1. However, the distance will be Inf, signalling a failure to find a nearest neighbour. When radius > 0.0 and no point is found within the search bound, the index returned will be 0 but the reported distance will be Inf (in contrast RANN::nn2 returns 1.340781e+154).

Value
A list with elements nn.idx (1-indexed indices) and nn.dists (distances), both of which are N x k matrices. See details for the results obtained with invalid inputs.

Examples
## Basic usage
# load sample data consisting of list of 3 separate 3d pointsets
data(kcpoints)
# Nearest neighbour in first pointset of all points in second pointset
nn1 <- knn(data = kcpoints[[1]], query = kcpoints[[2]], k = 1)
str(nn1)
# 5 nearest neighbours
nn5 <- knn(data = kcpoints[[1]], query = kcpoints[[2]], k = 5)
str(nn5)
# Self match within first pointset, all distances will be 0
nnself1 <- knn(data = kcpoints[[1]], k = 1)
str(nnself1)
# neighbour 2 will be the nearest point
nnself2 <- knn(data = kcpoints[[1]], k = 2)

## Advanced usage
# nearest neighbour with radius bound
nn1.rad <- knn(data = kcpoints[[1]], query = kcpoints[[2]], k = 1, radius = 5)
str(nn1.rad)
# approximate nearest neighbour with 10% error bound
nn1.approx <- knn(data = kcpoints[[1]], query = kcpoints[[2]], k = 1, eps = 0.1)
str(nn1.approx)
# 5 nearest neighbours, brute force search
nn5.b <- knn(data = kcpoints[[1]], query = kcpoints[[2]], k = 5, searchtype = "brute"
)stopifnot(all.equal(nn5.b,nn5))#5nearest neighbours,brute force search(specified by int)nn5.b2<-knn(data=kcpoints[[1]],query=kcpoints[[2]],k=5,searchtype=2L)stopifnot(all.equal(nn5.b2,nn5.b))WKNNF-class Wrapper classes for k-NN searches enabling repeated queries of thesame treeDescriptionWKNNF and WKNND are reference classes that wrap C++classes of the same name that include a space-efficient k-d tree along with the target data points.They have query methods with exactly the sameinterface as the knn function.One important point compared with knn-they must be intialised with floating point data and you are responsible for this-see storage.mode)and the example below.DetailsWKNNF expects and returns matrices in R’s standard(double,8bytes)data type but usesfloats inter-nally.WKNND uses doubles throughout.When retaining large numbers of points,the WKNNF objects will have a small memory saving,especially if tree building is delayed.The constructor for WKNN objects includes a logicalflag indicating whether to build the tree immediately(default:TRUE)or(when FALSE)to delay building the tree until a query is made(this happens automatically when required).PerformanceThe use of WKNN objects will incur a performance penalty for single queries of trees with<~1000 data points.This is because of the overhead associated with the R wrapper class.It therefore makes sense to use knn in these circumstances.If you wish to make repeated queries of the same target data,then using WKNN objects can give significant advantages.If you are going to make repeated queries with the same set of query points (presumably against different target data),you can obtain benefits in some cases by converting the query points into WKNNF objects without building the trees.See AlsoknnExamples##Basic usage#load sample data consisting of list of3separate3d pointetsdata(kcpoints)#build a tree and query it with two different sets of 
pointsw1<-WKNNF(kcpoints[[1]])w1q2<-w1$query(kcpoints[[2]],k=5,eps=0,radius=0)str(w1q2)w1q3<-w1$query(kcpoints[[3]],k=5,eps=0,radius=0)#note that there will be small difference between WKNNF and knn due to loss#of precision in the double to float conversion when a WKNNF tree is#built and queried.stopifnot(all.equal(knn(data=kcpoints[[1]],query=kcpoints[[2]],k=5,eps=0,radius=0),w1q2,tolerance=1e-6))##storage mode:must be doublem=matrix(1:24,ncol=3)storage.mode(m)#this will generate an error unless we change to a doublew=tools::assertCondition(WKNND(m),"error")storage.mode(m)="double"w=WKNND(matrix(m,ncol=3))##construct wrapper objects but delay tree constructionw1<-WKNNF(kcpoints[[1]],FALSE)#query triggers tree constructionw1q2<-w1$query(kcpoints[[2]],k=5,eps=0,radius=0)str(w1q2)##queries using wrapper objectswkcpoints<-lapply(kcpoints,WKNNF,FALSE)#query all3point sets against first#this will trigger tree construction only for pointset1qall<-lapply(wkcpoints,function(x)wkcpoints[[1]]$queryWKNN(x$.CppObject,k=5,eps=0,radius=0)) str(qall)Indexkcpoints,2knn,2,3,5nabor(nabor-package),2nabor-package,2storage.mode,5WKNN,2WKNN(WKNNF-class),4WKNND(WKNNF-class),4WKNND-class(WKNNF-class),4WKNNF(WKNNF-class),4WKNNF-class,47。


EXACT MATCHING CONDITION FOR MATRIX ELEMENTS IN LATTICE AND $\overline{\rm MS}$ SCHEMES

Xiangdong Ji

Center for Theoretical Physics
Laboratory for Nuclear Science and Department of Physics
Massachusetts Institute of Technology
Cambridge, Massachusetts 02139

(MIT-CTP-2447 hep-lat/xxxxxxx June 1995)

∗This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative agreement #DF-FC02-94ER40818.

Typeset using REVTEX

Presently, lattice QCD provides the unique method with controlled approximation to compute hadron properties directly from the QCD lagrangian. In the last few years, a number of groups have calculated on the lattice an impressive list of hadron matrix elements, ranging from the axial and scalar charges of the nucleon to lower-order moments of deep-inelastic structure functions [1–3]. Note, however, that most of the hadron matrix elements are not directly physical observables. In field theory, apart from the S-matrix, physical observables are related to symmetry generators of the lagrangian, such as the vector and axial-vector currents, or to hadron masses. Nonetheless, hadron matrix elements are useful intermediate quantities to express physical observables. Being intermediate, they often depend on specific definitions in a particular context; in field-theory jargon, they are scheme-dependent. Since schemes are generally introduced to eliminate ultraviolet divergences in composite operators, the scheme dependence of a matrix element is in fact perturbative in asymptotically-free QCD.

Understanding scheme dependence has important practical value. In calculating hadron matrix elements on a lattice, one is automatically limited to the lattice scheme. On the other hand, hadron matrix elements entering physical cross sections are often defined in connection with perturbation theory. The best scheme for doing perturbation theory is not lattice QCD, because the lattice has complicated Feynman rules and accommodates only Euclidean Green's functions. The most popular scheme for perturbative calculations is dimensional regularization, introduced by 't Hooft and Veltman more than two decades ago, followed by (modified) minimal subtraction ($\overline{\rm MS}$).

A popular practice currently adopted in the literature for matching the matrix elements in the lattice and $\overline{\rm MS}$ schemes goes like this [1,4]. Consider, for instance, a quark operator $O$. First, the one-loop matrix element of $O$ in a single quark state $|k\rangle$ is calculated on the lattice,

$$\langle k|O|k\rangle_{\rm latt} = A\left[1 + \frac{g_0^2(a)}{4\pi^2}\left(\gamma \ln a^2 p^2 + c\right)\right] , \eqno(1)$$

and in the $\overline{\rm MS}$ scheme,

$$\langle k|O|k\rangle_{\overline{\rm MS}} = A\left[1 + \frac{g^2(\mu)}{4\pi^2}\left(\gamma \ln \frac{p^2}{\mu^2} + c'\right)\right] . \eqno(2)$$

The ratio of the two matrix elements is then

$$\frac{\langle k|O|k\rangle_{\rm latt}}{\langle k|O|k\rangle_{\overline{\rm MS}}}
= \frac{1 + \frac{g_0^2(a)}{4\pi^2}\left(\gamma \ln a^2 p^2 + c\right)}{1 + \frac{g^2(\mu)}{4\pi^2}\left(\gamma \ln p^2/\mu^2 + c'\right)}
= 1 + \frac{g^2}{4\pi^2}\left(\gamma \ln a^2\mu^2 + c - c'\right) . \eqno(3)$$

In the second equality, an expansion in $g^2$ is made. In order to cancel the infrared cutoff $p^2$, one has to identify $g_0(a)$ and $g(\mu)$. However, after the cancellation, one does not know which $g$ should be used in the above equation. Due to the large tad-pole contributions, the difference between the coupling constants in the two schemes is significant in the same momentum region. Furthermore, it is not clear how to generalize the above relation to multi-loops.

As indicated earlier, the difference between the matrix elements in any two schemes arises from the ultraviolet region. Thus there must exist an all-order perturbative relation between the matrix elements calculated in the two schemes. In this note, I shall derive such a relation using renormalization group arguments. Consider a physical observable $M(Q^2)$, where $Q^2$ is a hard momentum scale of a physical process. $M(Q^2)$ could be a moment of some deep-inelastic structure function. According to the operator-product expansion, we write

$$M(Q^2) = C(Q^2/\mu^2, \alpha_s(\mu^2))\,A(\mu^2) , \eqno(4)$$

where $C$ is the coefficient function, calculable in perturbation theory. $C$ is clearly scheme-dependent, and let us assume a scheme has been chosen for its calculation. $A$ is a soft hadron matrix element defined in the same scheme. The $\mu$ dependence of $A$ satisfies the renormalization group equation,

$$\mu\,\frac{dA(\mu)}{d\mu} + \gamma(\mu)A(\mu) = 0 , \eqno(5)$$

whose solution is

$$A(Q) = A(\mu)\,\exp\left[-\int_{g(\mu)}^{g(Q)} dg\,\frac{\gamma(g)}{\beta(g)}\right] , \eqno(6)$$

$$\beta(g) = \mu\,\frac{dg}{d\mu} , \eqno(7)$$

where the QCD $\beta$-function is also scheme-dependent. From the above formulas and the fact that $M$ is scheme-independent, we derive a relation between the matrix elements in two different schemes, $C(1, \alpha_s(Q^2))\,\exp\big[-$
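The cancellation of the infrared scale $p^2$ in the one-loop matching ratio of Eq. (3) can be checked mechanically. The sketch below is not part of the paper: it expands the ratio of the lattice and $\overline{\rm MS}$ one-loop expressions to order $g^2$ with sympy, treating $g_0(a)$ and $g(\mu)$ as a single expansion parameter $g$, exactly the identification the text says the one-loop procedure requires.

```python
# Sketch (not from the paper): verify Eq. (3) by expanding the ratio of
# Eqs. (1) and (2) to O(g^2). Symbols: gamma, c, cp (= c'), and the logs
# La = ln a^2, Lp = ln p^2, Lmu = ln mu^2, so ln a^2 p^2 = La + Lp and
# ln p^2/mu^2 = Lp - Lmu. g0(a) and g(mu) are identified with one g.
import sympy as sp

g, gamma, c, cp = sp.symbols('g gamma c cp')
La, Lp, Lmu = sp.symbols('La Lp Lmu')

latt = 1 + g**2/(4*sp.pi**2) * (gamma*(La + Lp) + c)     # Eq. (1)
msbar = 1 + g**2/(4*sp.pi**2) * (gamma*(Lp - Lmu) + cp)  # Eq. (2)

# Expand the ratio to O(g^2); the infrared log Lp must drop out.
ratio = sp.series(latt/msbar, g, 0, 3).removeO().expand()
expected = (1 + g**2/(4*sp.pi**2) * (gamma*(La + Lmu) + c - cp)).expand()
print(sp.simplify(ratio - expected) == 0)  # True: Eq. (3) reproduced
```

The check makes the paper's point concrete: the $p^2$ dependence cancels only once the two couplings are identified, after which nothing in Eq. (3) specifies which scheme's $g$ multiplies the one-loop term.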
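The renormalization-group machinery of Eqs. (5)-(7) can likewise be checked on a toy model. The sketch below is not from the paper: it assumes leading-order forms $\gamma(g) = \gamma_0 g^2$ and $\beta(g) = -b_0 g^3$ (hypothetical one-loop coefficients, introduced only for illustration), for which the exponential in Eq. (6) reduces to $A(\mu) \propto g(\mu)^{\gamma_0/b_0}$, and verifies that this expression solves the renormalization-group equation (5).

```python
# Sketch (not from the paper): for one-loop gamma(g) = gamma0*g^2 and
# beta(g) = -b0*g^3, the exponent in Eq. (6) integrates to
# -(gamma0/b0)*ln(g(Q)/g(mu)), so A(mu) = A0 * g(mu)**(gamma0/b0).
# Check that this solves Eq. (5): mu dA/dmu + gamma(g) A = 0.
import sympy as sp

t = sp.Symbol('t')                 # t = ln(mu), so mu d/dmu = d/dt
gamma0, b0, A0 = sp.symbols('gamma0 b0 A0', positive=True)
g = sp.Function('g')(t)            # running coupling g(mu)

A = A0 * g**(gamma0/b0)            # candidate solution from Eq. (6)

# Substitute the one-loop flow dg/dt = beta(g) = -b0 g^3 into dA/dt.
dA_dt = sp.diff(A, t).subs(sp.Derivative(g, t), -b0*g**3)

# Eq. (5) residual should vanish identically.
residual = sp.simplify(dA_dt + gamma0*g**2 * A)
print(residual)  # 0
```

The same cancellation holds for any $\gamma(g)$ and $\beta(g)$, since $dA/dt = -(\gamma/\beta)\,(dg/dt)\,A = -\gamma A$ by the chain rule; the one-loop forms merely make the exponential in Eq. (6) elementary.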