



The effect of internal control deficiencies on the usefulness of earnings in executive compensation☆

Kareen E. Brown a,1, Jee-Hae Lim b,⁎

a School of Accounting and Finance, University of Waterloo, 200 University Ave W (HH 289D), Waterloo, ON, Canada N2L 3G1
b School of Accounting and Finance, University of Waterloo, 200 University Ave W (HH 289G), Waterloo, ON, Canada N2L 3G1

Advances in Accounting, incorporating Advances in International Accounting 28 (2012) 75–87. doi:10.1016/j.adiac.2012.02.006

Keywords: Sarbanes–Oxley Act; Internal control material weaknesses (ICMW); Executive compensation; Earnings

Abstract

Since SOX 404 disclosures are informative about earnings, and due to the widespread practice of using earnings-based measures in executive compensation, this study examines whether reports of internal control material weaknesses (ICMW) under SOX 404 influence firms' reliance on earnings in tying executive pay to performance. Using 391 (366) firm-year observations with reported ICMW and 3648 (3138) firm-year observations for CEOs (CFOs) reporting NOMW under SOX 404, we find a decreased strength in the association between earnings and executives' (CEO and CFO) compensation when the firm reports an ICMW, and as the number of reported ICMW increases. In addition, we find this decreased weight on earnings for the more severe Company-Level than Account-Specific material weaknesses. Our study suggests that the ICMW report under SOX 404 provides incremental information for executive compensation beyond that contained in reported earnings. © 2012 Elsevier Ltd. All rights reserved.

☆ Data availability: Data used in this paper are publicly available and also can be requested from the authors.
⁎ Corresponding author. Tel.: +1 519 888 4567 x35702; fax: +1 519 888 7562. E-mail addresses: kebrown@uwaterloo.ca (K.E. Brown), jh2lim@uwaterloo.ca (J.-H. Lim).
1 Tel.: +1 519 888 4567 x35776; fax: +1 519 888 7562.

1. Introduction

The accounting scandals at firms such as Enron and WorldCom highlighted deficiencies in corporate governance that were characterised by low financial reporting quality and disproportionate pay-for-performance.2 To discipline firms and restore investor confidence, legislative authorities enacted the Sarbanes–Oxley Act. Among the reforms is Section 404 of SOX (SOX 404), which requires both the management and the external auditor to report on the adequacy of a firm's internal control over financial reporting. Prior research shows that, relative to non-disclosing firms, firms reporting material weaknesses in internal control (ICMW) have inferior accruals and earnings quality (Ashbaugh-Skaife, Collins, Kinney, & LaFond, 2008; Bedard, 2006; Doyle, Ge, & McVay, 2007b), and lower earnings–returns coefficients (Chan, Farrell, & Lee, 2008). Given that SOX 404 disclosures are informative about earnings, and due to the widespread practice of using earnings-based measures in executive compensation, this study examines whether reports of ICMW under SOX 404 influence firms' reliance on earnings in tying executive pay to performance.

2 For example, Hall and Liebman (1998) report a 209% increase in CEO mean salary in large US firms from 1980 to 1994, and Bebchuk and Grinstein (2005) document a 146% increase in CEO pay from 1993 to 2003 in S&P 500 firms. Bebchuk and Fried (2004) advance the theory that soaring executive pay is the result of management power.

A long line of research shows that earnings-based performance measures are commonly used to motivate and reward executives because such measures correspond to manager actions (Gjesdal, 1981). However, there are two drawbacks to using earnings to evaluate executive performance. First, because executives know how their actions impact earnings, they can manipulate this measure to increase their wealth. Second, earnings do not fully reflect the long-term implications of recent executive decisions. Based on these factors, firms place varying weights on earnings in compensating their executives, and the weights are determined by how sensitive earnings are to effort and on the precision, or lack of noise, with which they reflect executives' actions (Banker & Datar, 1989; Lambert & Larcker, 1987). However, there is evidence that CEOs are shielded from certain negative events, such as firm restructuring (Dechow, Huson, & Sloan, 1994), or above-the-line losses (Gaver & Gaver, 1998). Our main focus of inquiry is significant because it builds on this line of research by showing that the sensitivity of compensation–performance relations varies cross-sectionally with the quality of the system producing the earnings information.

Weak internal controls potentially permit accounting errors to occur and go undetected, increasing unintentional errors in accrual estimation and/or facilitating intentional earnings management (Doyle et al., 2007b). A report of an internal control deficiency, therefore, signals that the manager is unable to provide reasonable assurance regarding the quality of reported earnings (Ashbaugh-Skaife et al., 2008; Bedard, 2006; Chan et al., 2008; Doyle et al., 2007b). Using the sensitivity-precision framework, the errors introduced by weak internal controls are likely to result in earnings that capture executives' effort with low precision, diminishing their use as an assessment tool for evaluating managers' performance.

Motivated by the optimal contracting hypothesis, we posit that firms with ICMW report earnings with lower precision-sensitivity. The purpose of this study is to examine whether firms reporting ICMW place relatively less weight on earnings compared to firms reporting no ICMW (NOMW). In other words, whether ICMW reports influence compensation contracts is an empirical issue because, although ICMW firms report lower earnings–returns coefficients (Chan et al., 2008), prior research does not suggest any direct association between the valuation role of earnings and its usefulness in compensating executives (Bushman, Engel, & Smith, 2006). Under the null hypothesis, the sensitivity of executive compensation to earnings is unaffected by internal control deficiency.

Consistent with prior research, we also examine the weight placed on earnings for firms reporting two different types of ICMW: Account-Specific and Company-Level weaknesses (Doyle, Ge, & McVay, 2007a,b). Account-Specific (AS) material weaknesses arise from routine firm operations and may be resolved by additional substantive auditing procedures. When AS material weaknesses are identified, executives or auditors can easily audit around them by performing additional substantive procedures. Company-Level (CL) material weaknesses, on the other hand, are less easily resolved by auditor involvement and result from lack of resources or inexperience in maintaining an effective control system. Due to the pervasiveness of CL material weaknesses, the scope of audit efforts frequently needs to be expanded to deal with these more serious concerns regarding the reliability of financial statements (Moody's, 2006; PCAOB, 2004). The extent to which auditors are able to mitigate the negative effect on earnings of these two types of weaknesses would suggest less noise/greater precision in earnings from Account-Specific relative to Company-Level weaknesses. The impact of precision times sensitivity on the weight of earnings for Account-Specific vs. Company-Level material weaknesses is therefore the second empirical question.

Using 391 (366) firm-year observations with reported ICMW and 3648 (3138) firm-year observations for CEOs (CFOs) reporting NOMW under Section 404 of SOX, we find a decreased strength in the association between earnings and CEO compensation when the firm reports an ICMW, and as the number of reported ICMW increases. Our results are also robust to controls for various firm characteristics that prior studies have found to influence the role of earnings in compensation contracts, including earnings quality proxies such as earnings persistence (Baber, Kang, & Kumar, 1998) and corporate governance characteristics (Chhaochharia & Grinstein, 2009). In addition, for CL material weaknesses, we find evidence of a lower strength in the earnings–compensation relation for CEOs. We find no such result with AS material weaknesses, suggesting that only CL weaknesses affect the weight placed on earnings in compensating CEOs.

This study makes two contributions. First, it contributes to the existing literature by examining the role of earnings as a performance measure in executive compensation contracts (Bushman et al., 2006; Sloan, 1993; among others), and by examining how information on the quality of a firm's internal controls influences the earnings–compensation relation. We confirm that weak internal controls result in a diminished role for accounting measures in the CEO compensation relation, consistent with optimal contracting. Specifically, it is the firms with CL weaknesses that reduce the weight on earnings in CEO cash compensation. Overall, our findings suggest that the information in the ICMW report is incremental to, or more timely than, that provided by discretionary accruals or earnings persistence measures.

Second, our study extends a growing body of literature on the relation between executive compensation and ICMW in the post-SOX era.
Carter, Lynch, and Zechman (2009) show that the implementation of SOX in 2002 led to a decrease in earnings management, and that firms responded by placing more weight on earnings in bonus contracts for CEOs and CFOs in the post-SOX period. Another study, by Hoitash, Hoitash, and Johnstone (2009), suggests that the compensation of the CFO, who has primary responsibility for the quality of the firm's internal controls, is penalized for reports of ICMW. Since prior evidence shows, and stresses the importance of, a performance-based compensation penalty for internal control quality as a non-financial performance measure in the evaluation of executives, our study further investigates whether an ICMW impacts the weight of earnings in compensation contracts under the mandate of SOX 404. We show that firms consider the strength of the earnings generation system and specifically choose to reduce emphasis on earnings-based performance measures in determining CEOs' cash compensation.

The next section of this paper provides background information on the internal control disclosure practices required by the Sarbanes–Oxley Act, discusses the usefulness of earnings as a performance measure and further develops our hypotheses. The third section describes our sample and research design. The fourth section presents our descriptive statistics, results and sensitivity analyses. The fifth section concludes the paper.

2. Prior research and hypothesis

Section 404 of SOX is one of the most visible and tangible changes to firms' internal control systems in recent times (Public Company Accounting Oversight Board (PCAOB), 2004).3 The pivotal requirement of Section 404 is that management assess the effectiveness of the firm's internal controls over financial reporting and include this information in the firm's annual financial statements. This regulation increases scrutiny by the firm's auditors because the management assessments must then be separately attested to by the auditor. One of the benefits of the disclosures under Section 404 is that internal control information is now readily available and may be informative as a non-financial measure of executive performance (Hoitash et al., 2009).

3 SOX 404 sets separate implementation dates for "accelerated filers" (primarily large firms), for "non-accelerated filers" (smaller firms), and for foreign firms. Specifically, Section 404 rules required accelerated filers to comply beginning in 2004, whereas compliance for non-accelerated filers and foreign firms began in phases starting in 2006 and ending in 2009, at which time those two groups reach full compliance.

Numerous studies have examined the determinants and consequences of ICMW. Early studies document an association between ICMW and firm characteristics, such as business complexity, organizational change, firm size, firm profitability and investment of resources in accounting controls (Ashbaugh-Skaife, Collins, & Kinney, 2007; Doyle et al., 2007a; Ge & McVay, 2005). The implementation of SOX Section 404 has resulted in higher audit fees (Hoitash, Hoitash, & Bedard, 2008; Raghunandan & Rama, 2006), longer audit delays (Ettredge, Li, & Sun, 2006) and improved audit committee quality (Krishnan, 2005). Several studies find negative and significant cumulative abnormal returns (Beneish, Billings, & Hodder, 2008; De Franco, Guan, & Lu, 2005) and lower quality of earnings (Ashbaugh-Skaife et al., 2008; Chan et al., 2008) after SOX 404 disclosures.

Closely related to our study is the literature that examines the association between earnings quality and ICMW. Chan et al. (2008) document a greater use of positive and absolute discretionary accruals for firms reporting ICMW than for firms receiving a favourable report. Ashbaugh-Skaife et al. (2008) also find that firms reporting ICMW after the inception of SOX have lower quality accruals and significantly larger positive and negative abnormal accruals, relative to control firms. Both Ashbaugh-Skaife et al. (2008) and Bedard (2006) find evidence of improvements in earnings quality after the remediation of ICMW under Section 404, whereas Doyle et al. (2007b) claim lower-quality earnings under Section 302, but not Section 404.

A recent study suggests that firms place greater weight on earnings in determining incentive pay after the passage of SOX, and other concurrent reforms, because the more stringent reporting environment in the post-SOX period of 2002 results in less earnings management (Carter et al., 2009; Hoitash et al., 2009). Carter et al. (2009) report an increase in the weight placed on earnings changes as a determinant of executive compensation, and a decrease in the proportion of compensation via salary after SOX, that is larger for CEOs and CFOs than it is for other executives. In addition, Hoitash et al. (2009) claim that changes in CFO total compensation, bonus compensation and equity compensation are negatively associated with disclosures of ICMW, suggesting a performance-based compensation penalty for poor internal controls in the evaluation of CFOs. However, the empirical literature has not yet addressed whether disclosures of ICMW in the post-SOX era influence the importance of earnings in determining executive pay.

3. The role of earnings in executive compensation contracts

Prior research has identified accounting earnings and stock returns as the two implicit firm performance indicators commonly used to determine executive compensation. Accounting earnings are useful for determining executive compensation because they shield managers from market-wide variations in firm value that are beyond executives' control (Sloan, 1993). Stock returns are useful because they anticipate future cash flows and reflect the long-term economic consequences of managers' actions (Sloan, 1993). As a result, stock returns capture those facets of executive effort that are missing in earnings but are associated with compensation (Clinch, 1991; Lambert & Larcker, 1987). The usefulness of a firm performance measure, such as earnings or returns, in executive contracts is determined by its precision and sensitivity (Banker & Datar, 1989), and the optimal weight on a performance measure increases as the precision times sensitivity (or the signal-to-noise ratio) increases. Sensitivity refers to the responsiveness of the measure to actions taken by the manager, and precision reflects the noise or variance of the performance measure conditional on the manager's actions. Consistent with prior studies, we model executive compensation as a function of both accounting earnings and returns. Based on this model specification, the weight on earnings as a performance measure, therefore, is a function of its precision and sensitivity, relative to stock returns, in providing information about the efforts of managers.
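Formally, the Banker and Datar (1989) logic can be sketched as follows (a stylized restatement in our notation, not the paper's). Suppose each performance signal is linear in effort a, so earnings e = S_e·a + ε_e and returns r = S_r·a + ε_r, with noise variances σ_e² and σ_r². Then the optimal relative weight on earnings in a linear contract is

    β_e / β_r = (S_e/σ_e²) / (S_r/σ_r²),

the ratio of sensitivity times precision. An ICMW that adds noise to earnings (raising σ_e²) or weakens the link between effort and reported earnings (lowering S_e) reduces the optimal weight on earnings relative to returns.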
4. The impact of ICMW on the earnings–compensation relation

In this study we argue that, for several reasons, an ICMW report indicates that reported earnings capture executives' effort with less precision and are less sensitive to managerial effort than the earnings of firms not reporting an ICMW. First, managers are potentially more likely to use accruals to intentionally bias earnings if internal controls are weak. A more effective internal control system allows less managerial discretion in the accrual process (Ashbaugh-Skaife et al., 2008; Doyle et al., 2007b), and thus reduces the ability of management to manipulate accruals for the purpose of increasing their compensation.

Second, weak internal controls potentially permit accounting errors to occur and go undetected (Doyle et al., 2007b). An ICMW can impair the sensitivity of the earnings measure for executive compensation because the earnings of firms with ICMW may reflect delayed or untimely information (Chan et al., 2008). Further, the noise regarding managers' performance in reported earnings due to deficient internal controls is likely to be unpredictable and may not be reflected in the properties of previously reported earnings numbers. For these reasons, the precision times sensitivity of earnings with regard to the manager's actions for firms reporting ICMW is predicted to be lower than that of firms not reporting a material weakness.

If ICMW reports provide information about the precision-sensitivity relation of earnings, then they have the potential to impact the use of earnings in designing executive compensation. It is possible that the ICMW report provides no new information to compensation committees, or that the committees fully adjust for earnings characteristics in designing executive compensation contracts. In such a case, we would find no association between ICMW reports and the weight on reported earnings in executive cash compensation. However, if effective internal controls provide information on the sensitivity-precision of earnings, then an ICMW report has the potential to impact the strength of the relation between accounting earnings and executive compensation. We predict that the relation between executive compensation and accounting earnings is weaker for firms that report ICMW. This leads to our first hypothesis:

H1. Firms that report ICMW have a weaker accounting earnings–executive compensation relation than firms with NOMW.

5. The impact of ICMW type on the earnings–compensation relation

Depending on the underlying cause of the ICMW, additional monitoring mechanisms or substantive testing can mitigate the negative effects of poor internal controls and impact the weight placed on earnings in executive compensation contracts. Consistent with Doyle et al. (2007a,b), we categorize ICMW disclosures into two categories that may have different impacts on the earnings–compensation relation. First, Account-Specific (AS) material weaknesses arise from routine firm operations and relate to controls over specific account balances, such as accounts receivable, inventory, and legal proceedings, or transaction-level processes. When AS material weaknesses are identified, executives or auditors can easily audit around them by performing additional substantive procedures.

In contrast, Company-Level (CL) material weaknesses reflect issues beyond the direct control of the executives and relate to more macro-level controls such as the control environment, general personnel training, organizational-level accountability, or the overall financial reporting processes. Due to the pervasiveness of CL material weaknesses, the scope of audit efforts frequently needs to be expanded to deal with these more serious concerns regarding the reliability of the financial statements (Moody's, 2006; PCAOB, 2004). We therefore expect a more negative association between disclosures of Company-Level material weaknesses and the weight of earnings in the compensation contract relative to Account-Specific weaknesses.

H2. Company-Level ICMW have a stronger negative association with the accounting earnings–executive compensation relation than do Account-Specific ICMW.

6. Sample selection and research design

6.1. Sample selection

We use several sources of data: (1) Audit Analytics, (2) Compustat, (3) CRSP, (4) ExecuComp, (5) firms' financial statements and (6) Lexis-Nexis Academic Universe. We start by collecting data from Section 404 disclosures of auditors' opinions on ICMW over financial reporting from firms' Form 10-K filings from January 2004 to December 2006.4 To ensure that the identified accelerated filers under SOX 404 pertain to a material weakness in internal control, we follow up our initial search of firms that receive adverse opinions on their ICMW in the Audit Analytics database with a manual check through Lexis-Nexis. For our sample period, we identify 9899 observations with clean reports and 1399 observations that received adverse opinions on their financial reporting with at least one type of internal control problem as a material weakness. After controlling for duplicates or non-accelerated filers from 2004 to 2006, we validate 1336 adverse reports and 9865 clean reports for a total of 11,201 observations.5

4 According to PCAOB (Standard No. 2), three types of internal control over financial reporting deficiencies exist: (1) a control deficiency, (2) a significant deficiency, or (3) a material weakness. Since public firms are only required to disclose material weaknesses under Section 404, our main empirical test uses all Section 404 reports available on Audit Analytics, which includes 3864 firm-year observations from 2004 to 2006.

Our research design examines the change in executive compensation in the year following the ICMW. For ICMW in the years 2004–2006, we require compensation data for 2004–2007 to compute the change in compensation. We eliminate 513 ICMW and 5999 NOMW (532 ICMW and 6415 NOMW) firm-year observations for the CEO (CFO) due to missing salary and bonus data in the firms' proxy statements or in ExecuComp. Next, we collect stock return and accounting information from CRSP and Compustat respectively, resulting in a loss of 399 ICMW and 129 NOMW (405 ICMW and 223 NOMW) firm-year observations for the CEO (CFO). Finally, we exclude firm-year observations from utility and financial firms because these companies operate in unique regulatory environments that are likely to influence executive compensation. The final sample for this study consists of 391 (366) firm-year observations with ICMW and 3648 (3138) firm-year observations with NOMW for the CEO (CFO). We summarize our sample selection process in panel A of Table 1.

Panel B of Table 1 summarizes the ICMW subsample by the type of weakness. Following the recommendations of the PCAOB's Standard No. 2 and Moody's (Doss & Jonas, 2004; Doyle et al., 2007a,b), two types of material weaknesses can be classified based on different objectives. Our study identifies firms as having either an Account-Specific (AS) or Company-Level (CL) material weakness.6 Of the 391 (366) firm-year observations in the ICMW subsample, 162 (140) were Company-Level and 229 (226) were Account-Specific weaknesses for the CEO (CFO) subsample.

Panel C of Table 1 summarizes the industry distribution of the CEO sample of 391 firm-years with ICMW and the 3648 firm-years without any material weakness, based on their two-digit SIC codes. The 391 ICMW firm-years cover six industry groups. Among them, the Services industry has the highest number of firms, followed by the Machinery, Construction and manufacturing, and Wholesale and retail industries.
The industry distribution for the 3648 NOMW firm-years has the highest number of observations in the Services industry, followed closely by the Machinery and the Construction and manufacturing industries.

6.2. Research design

To test H1, which predicts a lower strength in the earnings–compensation relation when the firm reports an ICMW under Section 404 of SOX, we use the following model (1):

    ΔCashComp_i,t = β0 + β1 ICMW_i,t−1 + β2 ΔROA_i,t + β3 RET_i,t + β4 ICMW_i,t−1*ΔROA_i,t + β5 ICMW_i,t−1*RET_i,t + β6 LOSS_i,t + β7 LOSS_i,t*ΔROA_i,t + β8 LOSS_i,t*RET_i,t + β9 SIZE_i,t + β10 MTB_i,t + β11 CEOCHAIR_i,t + β12 IndAC_i,t + β13 IndBD_i,t + β14 BoardMeetings_i,t + β15 AcctExp_i,t + YEAR + IND + ε_t    (1)

Consistent with prior studies, we focus on cash compensation because almost all firms explicitly use accounting measures as a determinant of cash bonus (Murphy, 2000; Core, Guay, & Larcker, 2003; Huson et al., 2012). No such evidence exists for the association between equity-based pay and earnings because stock option grants are offered not only to reward executives but also to introduce convexity in executive compensation contracts, for retention purposes, and because of tax and financial reporting costs (Core et al., 2003).7

Similar to Hoitash et al. (2009), our dependent variable is the change in salary plus bonus pay from year t−1 to year t deflated by the beginning-of-year salary in year t (ΔCashComp). Our independent variables are (1) an internal control deficiency indicator variable (ICMW or CL/AS) at time t−1, (2) the percentage change in return on assets (ΔROA), (3) stock returns (RET) and (4) interaction terms between the internal control deficiency indicator and the percentage change in both ROA and stock returns. We include annual stock returns for year t in the model specification because a meaningful association between compensation and firm performance must include returns (Murphy, 1998; Sloan, 1993).

The parameter estimate β4 on ICMW∗ΔROA examines H1. Under the null hypothesis that the sensitivity of executive compensation to earnings is unaffected by internal control deficiency, the parameter β4 would be insignificant. However, we predict that the weight assigned to earnings will decrease with ICMW disclosures. As such, we expect the estimate β4 for the interaction ICMW∗ΔROA to be negative and significant.
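In terms of model (1), this prediction can be read directly off the implied earnings slope (our restatement, not the authors'):

    ∂E[ΔCashComp_i,t]/∂ΔROA_i,t = β2 + β4·ICMW_i,t−1 + β7·LOSS_i,t

so for a profitable NOMW firm the weight on earnings is β2, while an ICMW report shifts it by β4; H1 is therefore a one-sided test of β4 < 0.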
It is possible that, with poor internal controls,

5 After controlling for 97 duplicates or non-accelerated filers from 2004 to 2006, we validate 1336 adverse reports (relating to 452 firms in 2004; 487 firms in 2005; and 397 firms in 2006) and 9865 clean reports (2478 firms in 2004; 3478 firms in 2005; and 3909 firms in 2006) for a total of 11,201 observations.

6 We want to thank one of the authors (Doyle et al., 2007a,b) for the conceptual foundations of these two types of ICMW and the coding validations. Our classification of a Company-Level vs. an Account-Specific material weakness is mutually exclusive. For example, if a firm has both Company-Level weaknesses and Account-Specific material weaknesses, then we code the firm as having a Company-Level weakness (Doyle et al., 2007a). In addition, three or more Account-Specific material weaknesses are coded as Company-Level weaknesses (Doyle et al., 2007b, p. 1149).

7 For these reasons any inferences that we draw about internal control weaknesses and the use of earnings from tests using equity-based compensation would be inconclusive. Additionally, the importance of earnings in determining executive cash-based pay has increased in the post-SOX era, which is the period that we examine (Carter et al., 2009).

Table 1
Sample selection and descriptive statistics (January 2004–December 2007).

Panel A: Sample selection and sample composition

                                                        CEO subsample         CFO subsample
                                                        ICMW      NOMW        ICMW      NOMW
10-K filings from Audit Analytics and
  subsequent manual review, 2004–2006                   1399      9899        1399      9899
Less: Duplicate observations                             (63)      (34)        (63)      (34)
SOX 404 disclosure data, 2004–2006                      1336      9865        1336      9865
Missing compensation data, 2004–2007                    (513)    (5999)       (532)    (6415)
Missing Compustat & CRSP data, 2004–2007                (399)     (129)       (405)     (223)
Less: Utilities & financial firms, 2004–2007             (33)      (89)        (33)      (89)
Testing H1 & H2                                          391      3648         366      3138

Panel B: Sample firms by types of ICMW

         Company-Level (CL)      Account-Specific (AS)
         CEO        CFO          CEO        CFO
2004      68         60          108        107
2005      60         52           88         86
2006      34         28           33         33

Panel C: Sample composition by industry (CEO sample)

2-digit SIC   Industry description               ICMW             NOMW
                                                 #Obs.     %      #Obs.     %
10–13         Mining                               11     2.8       171    4.7
23–34         Construction and manufacturing       65    16.6       775   21.2
35–39         Machinery                            97    24.8       797   21.8
42–49         Transportation and utilities         44    11.3       494   13.5
50–59         Wholesale and retail                 49    12.5       392   10.7
72–87         Services                            125    32.0      1019   27.9



Microsoft Dynamics® AX

Tracing Dynamics AX 2009 Role Center KPI's

Summary: This document explains how to trace the data displayed in a Role Center page KPI to its source in the Microsoft Dynamics AX online transaction processing (OLTP) database.

Author: Catherine McDade, Support Escalation Engineer
Date Published: January, 2010

Table of Contents
Introduction
Terminology
KPI walkthrough
Description of KPI example
Step-by-step
Appendix A: Useful links

Introduction

The prominent place of key performance indicators (KPIs) in Microsoft Dynamics AX 2009 Role Center pages has prompted questions about where the KPI data is drawn from. This document explains how to trace the data displayed in a KPI to its source in the Microsoft Dynamics AX online transaction processing (OLTP) database.

Microsoft Dynamics AX relies on SQL Server Analysis Services (SSAS) for its business intelligence processing. In the following sections, we define terms of importance for SSAS, and then provide an example of how to trace the data for a KPI on a Role Center page.

Terminology

Key Performance Indicator (KPI)
A Key Performance Indicator is a measurement for gauging business success, or, in other words, a measure of business metrics against targets. For example, a Sales Amount KPI could show sales from the last quarter and display a green icon if you are at budget, yellow if you are within 5% of budget, and red if you are under 5% of budget.

Online Analytical Processing (OLAP)
OLAP systems (such as that supported by SSAS) aggregate and store data at various levels across various categories.

Facts
Facts are predominantly numeric measurements, such as price or quantity, and represent the key business metrics that you want to aggregate and analyze. Facts form the basis of calculations, and you often aggregate them for members of a dimension.

Dimensions
Dimensions form the contexts for the facts, and define the aspects of a business by which the facts are aggregated. For example, Items could be a dimension, while Price and Quantity could be facts of that dimension.

Data source
A data source stores the connection information for an SSAS project and/or database. With Microsoft Dynamics AX, the project or OLAP database that you create has a data source that points to your Microsoft Dynamics AX OLTP database.

Data source view
A data source view contains the logical model of the schema used by an SSAS database object. Data source views can filter, apply calculations, and create joins on objects in the data source. In the OLAP database that Microsoft Dynamics AX creates, most of the data source views are simply views of a specific table, though some views may include a SQL statement that contains filters, calculations, or joins.

Measures
A measure represents a column that contains quantifiable data, usually numeric, that can be aggregated. A measure is generally mapped to a column in a fact table. An example of a measure would be Sales Amount or Cost of Goods Sold (COGS).

Cube
Cubes store summarized fact and dimension data in structures that are multidimensional (that is, containing more than the two dimensions found in spreadsheets and normal database tables). Dimensions define the structure of the cube, and measures provide the numeric values of interest to an end user.

Microsoft Dynamics AX 2009 ships with the following 10 default cubes:
∙ Accounts Receivable
∙ Human Resources Management
∙ General Ledger
∙ Production
∙ Project Accounting
∙ Purchase
∙ Sales
∙ Customer Relationship Management
∙ Expense Management
∙ Accounts Payable

Multidimensional Expressions (MDX)
MDX is a query language, analogous to Structured Query Language (SQL), that is used to retrieve multidimensional data from a cube.

Business Intelligence Development Studio (BIDS)
An integrated development environment (IDE) based on Microsoft Visual Studio 2005 or 2008 and used to create and modify business intelligence solutions. This is the tool that you use to view and/or modify your Dynamics AX OLAP project or database.

Project
In BIDS, a project is a collection of objects that make up your OLAP database. BIDS stores the objects (cubes, dimensions, etc.) as files in the file system. It is recommended that you create a project for your OLAP database so that when you are making changes you are not affecting the database until you deploy.

Below is a screen shot of BIDS opened to a project, followed by two screen shots that label the various sections of the BIDS environment.

Detail view of BIDS, left side:
Detail view of BIDS, right side:
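To make the MDX terminology concrete, a minimal query against the Production cube used in the walkthrough below might look like this (an illustrative sketch; the measure and hierarchy names are taken from that walkthrough and may differ in your deployment):

    // Cost amount broken out by production level
    SELECT { [Measures].[Cost amount] } ON COLUMNS,
           { [Production Level].[Level].MEMBERS } ON ROWS
    FROM [Production Cube]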
KPI walkthrough

Description of KPI example
This section walks you through an example of how to determine the origin of KPI values. We will use screen shots as needed to illustrate procedures.

Scenario: Your CEO views her Role Center and wants to know where the numbers for the Production Cost KPI are coming from.

Step-by-step

1. In the Microsoft Dynamics AX client, go to the User profiles form (Administration > Setup > User profiles). On the form, find and select CEO in the Profile ID column and then click the View Role Center button.

2. For the Production KPIs, click the Manage KPIs link.

3. Click the edit button on Production cost (the pencil icon). It will tell you that this is pulling from the Production Cube and the Production Cost KPI.

4. To look at the KPI, open SQL Server Business Intelligence Development Studio (BIDS). If you are running SQL Server 2005, BIDS can be found at Start > All Programs > Microsoft SQL Server 2005 > SQL Server Business Intelligence Development Studio. If you are running SQL Server 2008, BIDS can be found at Start > All Programs > Microsoft SQL Server 2008 > SQL Server Business Intelligence Development Studio.

5. Open your OLAP database (File > Open > Analysis Services Database).

6. On the Connect To Database form, select Connect to existing database. Enter the name of the SQL Server Analysis Services server in the Server field. In the Database field, enter Dynamics AX.
   Note: By default your OLAP database is named Dynamics AX. If you have applied a different name, use that name instead in the step above.

7. Open the Production cube in the Solution Explorer section of BIDS. Find Production Cube, right-click, and select Open.

8. Click the KPIs tab.

9. In the KPI Organizer, click the Production Cost KPI to open its setup form.

10. The Value Expression section tells you what data the KPI is displaying. For this KPI we see that it displays the following:
    [Measures].[Actual vs. Planned Consumption]
    "Measures" could be a calculated measure or a measure on the cube structure. It is typically a calculated measure, so click the Calculations tab for the Production cube.
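Incidentally, once the cube name and KPI name are known (steps 3 and 9), the KPI's current value, goal and status can also be read directly with the MDX KPI functions. A quick sanity check along these lines (illustrative, not part of the original walkthrough):

    SELECT { KPIValue("Production Cost"),
             KPIGoal("Production Cost"),
             KPIStatus("Production Cost") } ON COLUMNS
    FROM [Production Cube]

Step 11 below continues the trace into the calculated measure itself.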
11. On the Calculations tab, find the Script Organizer and click the Actual vs. Planned Consumption calculation.

12. You will see in the Expression section it is doing the following:
    IF(ISEMPTY([Measures].[Cost of Planned Consumption]) OR [Measures].[Cost of Planned Consumption] = 0,
    NULL,
    ([Measures].[Cost of Actual Consumption] / [Measures].[Cost of Planned Consumption]) * 100)

13. If we break the above statement down, we see that the first part is:
    ISEMPTY([Measures].[Cost of Planned Consumption]) OR [Measures].[Cost of Planned Consumption] = 0, NULL
    What this tells us is that if these values return zero we will report null; otherwise we will do the calculation on the next line. First we need to find out if the above statement would return a zero, as shown in the following steps.

14. Begin with the first part of the MDX query, [Measures].[Cost of Planned Consumption]. On the Calculations tab you should see that Cost of Planned Consumption breaks down to the following:
    [Measures].[Planned Cost Amount] + [Measures].[Planned Cost Markup]

15. The Planned Cost Amount is another calculated measure that does the following:
    ([Measures].[Cost amount], [Production Level].[Level].&[1])

16. Cost amount is not a calculated measure, so we go back to the Cube Structure tab and in the Measures pane find CostCalculation > Cost amount.

17. Right-click Cost amount and select Properties.

18. Expand Source, then expand Source again. The TableID is PRODCALCTRANS and the ColumnID is COSTAMOUNT.

19. To verify where the data is pulled from, go to the Solution Explorer and right-click Dynamics AX under Data Source Views. Select Open.

20. On the Dynamics AX data source view tab, find PRODCALCTRANS under Tables.

21. Right-click PRODCALCTRANS and select Edit Named Query.

22. If you didn't have an Edit Named Query option, it would mean the data was being pulled from the PRODCALCTRANS table using the following select statement:
    select costamount from prodcalctrans
    However, since this is a named query, we need to find where the COSTAMOUNT column is coming from. To do this, look through the column labeled Column, find COSTAMOUNT, and then look at the Table column to see the source table.

23. We see that the COSTAMOUNT column is pulling data from the PRODCALCTRANS table. The SQL statement would be:
    select costamount from prodcalctrans

24. Now we need to trace the second part of the calculated measure, which is [Production Level].[Level].&[1]. Find Production level in the Hierarchies tab under Dimensions. Expand Production level and then click Edit Production Level.

25. The Production Level hierarchy should appear under Production Level. Right-click Level and select Properties.

26. In the Properties window, expand NameColumn and then expand Source. The source TableID is PRODCALCTRANS_LEVEL and the ColumnID is COLLECTREFLEVEL.

27. We now know the data source that OLAP is using, but we want to find out where data is being pulled from in the Microsoft Dynamics AX OLTP database. To do this we can open the Dynamics AX option under Data Source Views in Solution Explorer.

28. Scroll to PRODCALCTRANS_LEVEL, right-click, and select Edit Named Query.

29. The SQL statement for this data source is:
    SELECT DISTINCT COLLECTREFLEVEL FROM PRODCALCTRANS

30. We now have enough information to build a SQL statement that would reflect the OLAP query we saw in step 24 ([Production Level].[Level].&[1]). Adding the level of "1" from the end of the statement yields the SQL statement:
    select * from prodcalctrans where collectreflevel=1

31. Combining SQL statements yields:
    select sum(costamount) from prodcalctrans where collectreflevel=1

32. Now return to the second part of the MDX query, [Measures].[Planned Cost Markup], from step 14. On the Calculations tab, find Planned Cost Markup. We see that this calculated measure is defined by:
    ([Measures].[Cost Markup], [Production Level].[Level].&[1])

33. On the Cube Structure tab, navigate to CostCalculation and then to Cost Markup.

34. Right-click and select Properties for Cost Markup.

35. In the Properties window, expand Source and then Source again. The displayed TableID is PRODCALCTRANS and the ColumnID is COSTMARKUP.

36. Return to the Data Source View and look at PRODCALCTRANS and the Cost Markup column (as in steps 20 through 23). The SQL statement turns out to be:
    select costmarkup from prodcalctrans

37. We already found the SQL for [Production Level].[Level].&[1] in steps 24–30. Combining that with COSTMARKUP yields:
    select sum(costmarkup) from prodcalctrans where collectreflevel=1

38. Now you can take the sum of COSTAMOUNT and COSTMARKUP (using the results from steps 31 and 37) where collectreflevel=1. If that value is zero, then the KPI is null.
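Steps 14 through 38 can be collapsed into a single statement. As a convenience (a sketch that is not part of the original walkthrough, and assumes the named queries behave as shown above), the zero-check on Cost of Planned Consumption reduces to:

    -- Cost of Planned Consumption = Planned Cost Amount + Planned Cost Markup
    select sum(costamount + costmarkup) as cost_of_planned_consumption
    from prodcalctrans
    where collectreflevel = 1

If this returns zero (or NULL), the KPI displays null; otherwise the trace continues below.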
39. If the value is not zero, then we continue tracing the KPI using the second part of the statement from step 12:
    ([Measures].[Cost of Actual Consumption] / [Measures].[Cost of Planned Consumption]) * 100

40. The Cost of Actual Consumption is a calculated measure that has the following expression:
    [Measures].[Realized Cost Amount] + [Measures].[Realized Cost Adjustment]

41. In the first part of the statement, the Realized Cost Amount measure is:
    ([Measures].[Actual cost amount], [Production Level].[Level].&[1])

42. Now look up the Actual cost amount measure. To find this measure, go back to the Cube Structure tab and find Actual cost amount under CostCalculation.

43. If we look at the properties of Actual cost amount, we find that the source TableID is PRODCALCTRANS and the ColumnID is REALCOSTAMOUNT.

44. This gives us a SQL statement of:
    select realcostamount from prodcalctrans
    If we add the production level of 1 (which we already found in steps 24 to 30), the SQL statement for all of step 41 is:
    select sum(realcostamount) from prodcalctrans where collectreflevel=1

45. Go back to step 40 and look at the second part of the statement:
    [Measures].[Realized Cost Adjustment]
    Realized Cost Adjustment is a calculated measure equivalent to:
    ([Measures].[RealCostAdjustment], [Production Level].[Level].&[1])

46. The source of RealCostAdjustment can be found by going back to the Cube Structure tab and finding RealCostAdjustment under CostCalculation.

47. If we look at the properties of RealCostAdjustment, we find that the source TableID is PRODCALCTRANS and the ColumnID is REALCOSTADJUSTMENT. Adding the production level of 1, we would see a SQL statement such as:
    select sum(realcostadjustment) from prodcalctrans where collectreflevel=1

48. To derive Cost of Actual Consumption, we would add the results of steps 44 and 47.
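As on the planned side, steps 39 through 48 collapse into a single statement (again an illustrative sketch, not part of the original walkthrough):

    -- Cost of Actual Consumption = Realized Cost Amount + Realized Cost Adjustment
    select sum(realcostamount + realcostadjustment) as cost_of_actual_consumption
    from prodcalctrans
    where collectreflevel = 1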
49. Next we trace the second part of the statement from step 39:
    [Measures].[Cost of Planned Consumption]
    This results in the expression:
    [Measures].[Planned Cost Amount] + [Measures].[Planned Cost Markup]

50. [Measures].[Planned Cost Amount] is a calculated measure equivalent to:
    ([Measures].[Cost amount], [Production Level].[Level].&[1])

51. To find [Measures].[Cost amount], go back to the Cube Structure tab and find Cost amount under CostCalculation.

52. If we look at the properties of Cost amount, we find that the source TableID is PRODCALCTRANS and the ColumnID is COSTAMOUNT. Adding the production level of 1, we would see the SQL statement:
    select sum(costamount) from prodcalctrans where collectreflevel=1

53. Return to step 49 for the next part of the statement:
    [Measures].[Planned Cost Markup]
    This is a calculated measure equivalent to the expression:
    ([Measures].[Cost Markup], [Production Level].[Level].&[1])

54. To trace this measure, go back to the Cube Structure tab and find Cost Markup under CostCalculation.

55. If we look at the properties of Cost Markup, we find that the source TableID is PRODCALCTRANS and the ColumnID is COSTMARKUP. Adding the production level of 1, we see the SQL statement:
    select sum(costmarkup) from prodcalctrans where collectreflevel=1

56. The results for [Measures].[Cost of Planned Consumption] would be the sum of steps 52 and 55.

57. Therefore, to get the results for ([Measures].[Cost of Actual Consumption] / [Measures].[Cost of Planned Consumption]) * 100, we divide the results from step 48 by the results of step 56, and then multiply that number by 100 to get Actual vs. Planned Consumption, which is what gives us the Production Cost KPI.

After following these steps you should now know where the data that makes up the Production Cost figure on the KPI is coming from. You should also have learned how to trace a KPI so that you can determine where any KPI is pulling data from.

Appendix A: Useful links
∙ Microsoft Dynamics AX 2009 Business Intelligence Cube Reference Guide: /downloads/details.aspx?FamilyId=6A685DF3-912D-4545-B990-CD2283C159FB&displaylang=en
∙ Role Center reference for Microsoft Dynamics AX: https:///dynamics/ax/using/ax_rolecenterreference.mspx

References:
∙ The information above was taken from SQL Server Books Online. For more information on SSAS please go to /en-us/library/ms130214.aspx
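Putting steps 12 through 57 together, the whole Production Cost KPI can be reproduced against the OLTP database in one query. This consolidated statement is an illustrative sketch rather than part of the original walkthrough, and it assumes the PRODCALCTRANS named queries behave as shown above:

    -- Actual vs. Planned Consumption: NULL when planned cost is zero,
    -- otherwise (actual / planned) * 100
    select case
             when sum(costamount + costmarkup) = 0 then null
             else sum(realcostamount + realcostadjustment) * 100.0
                  / sum(costamount + costmarkup)
           end as production_cost_kpi
    from prodcalctrans
    where collectreflevel = 1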
This document is provided "as-is." Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes. You may modify this document for your internal, reference purposes. © 2009 Microsoft Corporation. All rights reserved.

Microsoft Dynamics is a line of integrated, adaptable business management solutions that enables you and your people to make business decisions with greater confidence. Microsoft Dynamics works like and with familiar Microsoft software, automating and streamlining financial, customer relationship and supply chain processes in a way that helps you drive business success.

U.S. and Canada Toll Free 1-888-477-7989
Worldwide +1-701-281-6500



Resources, Conservation and Recycling 54 (2010) 377–389. doi:10.1016/j.resconrec.2009.09.003

Evaluation of the economic feasibility for the recycling of construction and demolition waste in China—The case of Chongqing

W. Zhao a, R.B. Leeftink b, V.S. Rotter c,∗

a Institute of Construction Management and Real Estate, Chongqing University, Shabei Street 83, 400045 Chongqing, PR China
b Decistor bv, Bilderdijkstraat 4, 2013 EJ Haarlem, Netherlands
c Department of Waste Management, Institute of Environmental Engineering, TU Berlin, Sekr. Z 2, Strasse des 17. Juni 135, 10623 Berlin, Germany

∗ Corresponding author. Tel.: +49 030 314 22619; fax: +49 030 314 21720. E-mail address: vera.rotter@tu-berlin.de (V.S. Rotter).

Article history: Received 15 January 2009; Received in revised form 3 September 2009; Accepted 6 September 2009

Keywords: Construction and demolition waste recycling; Economic feasibility; Model; China

Abstract

In the recycling chain of construction and demolition waste, it is impossible to guarantee a certain quality of recycled products and to recycle a large amount of materials in recycling centers without mechanical sorting facilities. This counts even more when the produced materials have a low economic value, as is the case with crushed and cleaned debris, also called aggregates. In order to assess if recycling can be done effectively, a feasibility study of the recycling of construction and demolition (C&D) waste is necessary. In this paper, the economic feasibility of recycling facilities for C&D waste in China's Chongqing city was assessed. Investigations on the current situation of C&D waste recycling in Chongqing showed that there was a large quantity of waste and an enormous demand for recycled materials due to the busy ongoing construction activities, which generated a large market potential and also brought a challenge to the strengthening of the recycling sector. However, a full cost calculation and an investment analysis showed that, under current market conditions, operating C&D waste recycling centers in Chongqing might face high investment risks. Thus, the regulations and economic instruments, like tax, that can support the economic feasibility of recycling are discussed, and recommendations for the choice of instruments are provided. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

Construction and demolition (C&D) waste is one of the largest waste flows in the world. In China, urban C&D waste has reached 30–40% of the total urban waste generation because of the large-scale construction and demolition activities resulting from the accelerated urbanization and city rebuilding (Chui and Yang, 2006). Typical emissions from landfilling C&D waste are chemicals leaching from wood, drywall and concrete; e.g., chromated copper arsenate (CCA)-treated wood, containing hazardous substances such as chromium and lead, contributes to toxic impact on ground and surface water and soil (Symonds, 1999). Furthermore, hydrogen sulfide (H2S), a major odorous component from C&D landfills, contributes to acidification (Reinhart et al., 2004). Additionally, non-recycled waste will result in the loss of construction materials and the occupation of landfill space for final disposal. From a broad point of view, disposal of C&D waste is not only a simple environmental concern, but also has a major influence on the conservation of resources for the whole society, since recycling avoids excavation of raw materials and provides substitutes for materials like cement and plastics, whose production requires a significant amount of raw material, energy and funding.

In China, recycling lacks a central, stable and flexible intermediate between waste generation and landfill, such as recycling centers that transform the waste into recycled secondary construction materials. Unorganised collection and subsequent sorting out by waste pickers is an obstacle to plant operators of construction materials. One key concern of an operator in a recycling center is to assure the secure quantitative and qualitative supply of recyclable waste materials.

The objective of this paper is to evaluate the economic feasibility for the recycling of C&D waste. By further simplifying the structural model of a feasibility study of a complex project, the essential steps in a pre-feasibility study of C&D waste recycling are identified with reference to Kohler (1997):

a. Estimated generation of C&D waste.
b. Market analysis of recycled materials.
c. Estimated costs of recycling facilities.
d. Analysis of investments (including payback period, internal rate of return and breakeven point; the standard formulas are sketched below).
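As a reminder of the standard definitions behind step d (textbook formulas, not ones given in this paper): for an initial investment I₀ and net annual cash flows CF_t over n years,

    Payback period: the smallest T such that Σ_{t=1..T} CF_t ≥ I₀
    Internal rate of return: the rate r that solves 0 = −I₀ + Σ_{t=1..n} CF_t / (1 + r)^t
    Breakeven point: the annual output Q* = F / (p − v), where F is fixed cost, p the price and v the variable cost per tonne of recycled product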
All rights reserved.1.IntroductionConstruction and demolition (C&D)waste is one of the largest waste flows in the world.In China,urban C&D waste has reached 30–40%of the total urban waste generation because of the large-scale construction and demolition activities resulting from the accelerated urbanization and city rebuilding (Chui and Yang,2006).Typical emissions from landfilling C&D waste are chemicals leach-ing from wood,drywall and concrete, e.g.,chromated copper arsenate (CCA)-treated wood containing a lot of hazardous sub-stances such as chromium and lead is contributing to toxic impact on ground and surface water and soil (Symonds,1999).Further-more,hydrogen sulfide (H 2S)as a major odorous component from C&D landfills contributes to acidification (Reinhart et al.,2004).Additionally,non-recycled waste will result in the loss of construc-tion materials and the occupation of landfill space for final disposal.From a broad point of view,disposal of C&D waste is not only a simple environmental concern,but also has major influence on the conservation of resources for the whole society,since it avoids exca-vation of raw materials and provides substitution for materials like∗Corresponding author.Tel.:+4903031422619;fax:+4903031421720.E-mail address:vera.rotter@tu-berlin.de (V.S.Rotter).cement and plastics which requires a significant amount of raw material,energy and funding to produce.In China,the recycling lacks a central,stable and flexible inter-mediate between waste generation and landfill like recycling centers to transform the waste into recycled secondary construc-tion materials.Unorganised collection and subsequent sorting out by waste pickers is an obstacle to plant operators of construction materials.One key concern of an operator in a recycling center is to assure the secure quantitative and qualitative supply of recyclable waste materials.The objective of this paper is to evaluate the economic feasibility for the recycling of C&D waste.By further simplifying the structural model of a feasibility study of a complex project,the essential steps in pre-feasibility study of C&D waste recycling are identified with reference to Kohler (1997).a.Estimated generation of C&D waste.b.Market analysis of recycled materials.c.Estimated costs of recycling facilities.d.Analysis of investments (including payback period,internal rate of return and breakeven point).As a case study,these steps were applied to the situation in Chongqing.The C&D waste generation was estimated,the waste0921-3449/$–see front matter © 2009 Elsevier B.V. All rights reserved.doi:10.1016/j.resconrec.2009.09.003378W.Zhao et al./Resources,Conservation and Recycling54 (2010) 377–389composition was qualified and the potential demand in the mar-ket of main recycled materials(aggregates,brick,wood and metal) was analyzed.The economic feasibility of different recycling facil-ities was evaluated by cost estimation and investment analysis. Meanwhile,the estimated cost and revenue of a recycling center in Chongqing were compared with that in the Netherlands tofind out successful factors of recycling centers,since it had been proved by our interviews with managers that C&D recycling centers in the Netherlands could be profitable over the last25years of operation. 
Finally, a set of recommendations was derived for policy makers, addressing the related regulations and economic instruments (such as taxes) that contribute to the success of such a recycling facility, as well as recycling market strategy.

2. Background

2.1. Definition of C&D waste

In general, although there is no uniform definition of C&D waste in the world, the waste is mainly classified based on the origin and the composition of C&D waste.

In the United States, construction and demolition waste is a waste material that is "produced in the process of construction, renovation, or demolition of structures. Structures include buildings of all types (both residential and non-residential) as well as roads and bridges. Components of C&D debris typically include concrete, asphalt, wood, metals, gypsum wallboard, and roofing" (Franklin, 1998). In China, a National Guideline, Regulations for Construction Waste Management in Cities, defines C&D waste as: "The soil, material and others are discarded and generated by any kinds of construction activities, including the development, rehabilitation, refurbishment of construction projects" (Ministry of Communications of the P.R. China, 2005).

These two definitions are both based on construction activities. However, the former is more complete and clearer than the latter, since structure and waste composition are described in the former.

2.2. C&D waste composition and generation

Dolan et al. (1999) identified the following factors influencing the amount of C&D waste produced:
• The extent of growth and overall economic development that drives the level of construction, renovation, and demolition.
• Periodic special projects, such as urban renewal, road construction and bridge repair, and unplanned events, such as natural disasters.
• Availability and cost of hauling and disposal options.
• Local, State and Federal regulations concerning separation, reuse, and recycling of C&D waste.
• Availability of recycling facilities and the extent of end-use markets.

C&D waste contains broken concrete (foundations, slabs, columns, floors, etc.), bricks, mortar, wood, metal and roofing materials (windproof, waterproof and insulating materials) as well as packaging materials (paper, cardboard, plastic film and other materials used as buffers, like wood and foam plastics). C&D wastes are categorized in a variety of ways, and different composition and characteristics of waste are described based on each category. There are three main factors that affect the characteristics of C&D waste (ICF, 1995):
• Structure type (e.g., residential, commercial, or industrial building, road, bridge).
• Structure size (e.g., low-rise, high-rise).
• Activity being performed (e.g., construction, renovation, repair, demolition).

The direct method for the determination of waste generation and its composition is weighing, measuring and sorting of the total flow of waste, or sampling and extrapolation (Brunner and Ernst, 1986). Indirect analysis uses primary statistics on consumption, trade or other economic indicators, or a description of the stocks, in order to estimate the generation rate. This approach was applied by Hsiao et al. (2002).

2.3. Potential demand for recycled materials

The demand for recycled material is determined by quality and price. The following applications for secondary construction material can be identified.

2.3.1. Aggregates

Natural aggregates (NA), containing sand, gravel, crushed stone and quarried rock, are used to prepare the foundation material for construction purposes (Poon et al., 2006). In China, there are existing national standards for fine aggregates (GB/T 14684, National Standard of Sand Utilization, General Administration of Quality Supervision, Inspection and Quarantine of the P.R. China, 2001a) and coarse aggregates (GB/T 14685, National Standard of Crushed Stone, General Administration of Quality Supervision, Inspection and Quarantine of the P.R. China, 2001b). Nominal sizes of the two kinds of aggregates, ranging from less than 2.36 to 37.5 mm, comply with the grading of GB/T 14684 and GB/T 14685, respectively.

Tam and Tam (2006) described the following applications for recycled aggregates (RA) sourced from the C&D waste recycling facility: foundation material for road construction, hardcore for foundation works, base/fill for drainage, aggregate for concrete manufacturing and general bulk fill. Recycled concrete aggregates (RCA) differ from natural aggregates, due to the fact that impurities like cement-stone are still attached to the surface of the original natural aggregates even after the process of recycling. This highly porous cement-stone and other impurities contribute to a lower particle density, higher porosity, variation in the quality of the RCA and higher water absorption (Paranavithana and Abbas, 2006).

In China, RCA is simply shredded by a crusher and used as an additive for producing concrete by mixing with cement and water. Considering the impact of RCA on the quality of concrete based on particle density, porosity and absorption, Chen (2005), Poon and Chan (2007), Wu (2004) and Zhang and Qi (2004) indicated that the substitute rate of NA is equal to or less than 20% and that RCA could only be applied in concrete equal to or less than C30 ("C" is an abbreviation of concrete; 30 means the compressive strength on a concrete cube is 30 MPa) with reference to GB/T 14684 and GB/T 14685. According to GB 50010 (code for design of concrete structures, Ministry of Construction and Administration for Quality Supervision and Inspection and Quarantine of the P.R. China, 2002), concrete equal to or more than C30 could be applied in prestressed reinforced concrete structures. Namely, applications of concrete equal to or less than C30 are mainly limited to low-rise buildings and some public concrete structures such as park places. The application of RCA in the Shanghai ecological building in China is mainly in foundations and walls (Li, 2009). In order to be more conservative, it is assumed that the RCA content is 5% in concrete equal to or less than C30, the same as the substitute rate of NA for reinforced concrete work in Kuwait (Kartam et al., 2004). A report of the China Cement Association showed that about 78% of total concrete production is C30 and less-than-C30 concrete in China (Anhui Development and Reform Commission, 2007). The composition of concrete with cement:water:aggregate is 1.2:0.8:8, and 60% of the countrywide output of cement is used to make concrete (Shi and Xu, 2006).

RA and RCA are better as road foundation/basement than virgin material, for the strength caused by residual non-hydrated cement and the wide particle size distribution. To meet the strength requirement of the base and foundation of roads, according to the first application of RAC in pavement in Shanghai, an RCA content of 50% was used in order to be more conservative (Xiao, Wang, Sun, & Li, 2005). Meanwhile, RA is only used in class II and below class II highways (for II, III and IV class highways, design speeds are limited to below 80, 40 and 20 km/h, respectively) with reference to JTG B01 (National Standard of Road Grade, Ministry of Communications of the P.R. China, 2003). In addition, blast furnace fly ash is only allowed up to a certain (low) percentage in aggregates to reinforce the road foundation, because of the high content of heavy metals. Coal slag is to a certain extent used in Western Europe for the same purpose in road foundations (Feuerborn, 2005).
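To illustrate the scale these assumptions imply (a back-of-the-envelope calculation based on the figures above, not a number reported in the paper): with a cement:water:aggregate mass ratio of 1.2:0.8:8, aggregate makes up 8/10 of concrete mass; if 78% of concrete output is C30 or below and, reading the 5% RCA content as a share of the natural aggregate in such concrete, concrete production can absorb RCA equal to at most about

    0.05 × 0.78 × 0.8 ≈ 3.1% of total concrete mass.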
requirement of road bases and foundations, a RCA content of 50% was used in order to be conservative, according to the first application of RAC in pavement in Shanghai (Xiao, Wang, Sun, & Li, 2005). Meanwhile, RA is only used in class II and lower highways (for class II, III and IV highways, design speeds are limited to below 80, 40 and 20 km/h, respectively), with reference to JTG B01 (National Standard of Road Grade, Ministry of Communications of the P.R. China, 2003). In addition, blast furnace fly ash is only allowed up to a certain (low) percentage in aggregates for reinforcing road foundations, because of its high content of heavy metals. Coal slag is used to a certain extent in Western Europe for the same purpose in road foundations (Feuerborn, 2005).

2.3.2. Brick
Presently, brick is the largest component of C&D waste, due to traditional building habits and old-line production technology. However, the shift in building fashion from traditional brick structures to reinforced concrete or steel structures is decreasing the demand for brick. At the same time, a ban on using clay as a raw material for brick has been implemented since 2003 by the China Council for the Promotion of International Trade, since the excavation of clay damages agricultural land. Substitute bricks, supported by patented technologies for bricks and cavity blocks made from inert materials (concrete, brick, mortar) (State Intellectual Property Office of P.R. China), are gradually being applied in the building industry. Broken bricks are mixed with adhesive and cement to produce blocks. Considering environmental protection and the quality of bricks made from mixed C&D waste, it is assumed that such bricks and cavity blocks are mainly used for non-residential structures and non-buildings (enclosures, ground tiles and greening). Using broken brick as a substitute raw material for brick production is taken as the third main recycling approach for crushed brick, after road foundation and construction fill.

2.3.3. Wood
Although wood is not increasing in the waste stream, new technologies for recycling wood are being explored. For instance, Logistics Engineering College successfully developed "artificial wood bricks" (a mixture of wood flour and cement) as pre-embedding devices for electricity, water and gas supply (Lu, 2006).
Considering wood quality in terms of specific contaminants, there are several potential and common applications for recycled wood in developed countries, such as those listed in Table 1 (Kartam et al., 2004; Rijpkema, 1999).

Table 1. The potential applications of recycled wood depend on their quality (in the original table, a mark indicates a feasible approach/quality combination for A, B and C wood):
• Use for erosion control and ground cover
• Organic soil amendment (in agriculture) after composting
• Chipboard production
• Wood chips for animal bedding
• Fertilizer amendment in composting
• Energy recovery by means of incineration
(a) A-quality wood: clear wood. (b) B-quality wood: slightly contaminated, e.g., with paints, glues and coatings. (c) C-quality wood: hazardous wood waste contaminated with heavy metals, fire retardants and wood preservatives.

In China, clean and de-nailed timber and boards are efficiently recycled and reused by contractors to avoid disposal fees and the extra purchase cost of construction materials (Chen, 2005). Uncontaminated wood is used for chipboard production and furniture at present. Moreover, a finite market such as animal bedding is an obstacle to further decreasing unit cost by enlarging output, according to economies of scale. Waste wood can be used to increase the calorific value of municipal solid waste (MSW) for waste incineration; owing to the high proportion of organic waste in China, MSW has a calorific value below the self-burning temperature, which results in additional oil and coal being used for the combustion process. However, for painted and waterproofed wood, the market potential is limited due to the limited number of waste incinerators in China with sufficient flue gas cleaning (Ji, 2003).

2.3.4. Metals
The market for recycled metals such as steel or aluminium is growing fast because of their high economic value. Steel consumption of the construction industry in China increased from 14.56 Mt (1991) to 78.1 Mt (2001) (You, 2005). Increasing metal prices encourage contractors to separate as many reinforcement bars as possible from crushed concrete on construction sites. Furthermore, the metal industry also purchases deformed reinforcement bars from individuals scavenging at dumpsites.

2.4. Technologies for recycling centers
At present, recycling technologies for C&D waste mainly come from the mining industry and are based on mechanical processing. Common separation techniques are described in Table 2 (Xing and Charles, 2006).

Table 2. Unit operations for C&D waste treatment.
Manual separation: separate recoverable materials and disturbing materials
Crushers: size reduction
Wind-sifting: separate light and heavy materials in solid waste by means of density separation
Screening: make a size separation
Shaking-table: separate light from heavy materials in solid waste
Magnetic separation: remove ferrous metals from non-magnetic materials
Eddy current separation: recovery of non-ferrous metals
Flotation: use the buoyancy produced by tiny air bubbles attached to dispersed hydrophobic particles to lift them to the surface of the chamber

The technology used at recycling plants is determined by the scale of investment, the quality requirements for recycled materials, and the cost and revenue of recycled production. A recycling plant usually consists of crushers, screeners, magnetic separators, wind-sifting and manual separation, as demonstrated in Fig. 1. To attain a higher quality of recycled production, a recycling plant contains a second separation stage or more by means of a combination of the technologies shown in Table 2.

Fig. 1. Flowsheet of a C&D waste recycling plant in the Netherlands (Jong and Kuilman, 2008).

2.5. Cost estimation and investment analysis of recycling facilities
Dolan et al. (1999) and Duran et al. (2006) proposed a structural model of the economic feasibility of recycling facilities and corresponding assumptions. Results of economic feasibility analyses for C&D waste recycling facilities in Taiwan, the USA, Hong Kong and India were quoted in some studies, but few details about the methodology used and data collected were presented (Huang et al., 2002; MACREDO, 2006; Tam and Tam, 2006; TIFAC, 2006). Duran et al. (2006) and Nunes, Mahler, Valle, and Neves (2007) discussed the economic feasibility of future recycling facilities in Ireland and Brazil based on different scenarios such as capacity. Symonds (1999) showed detailed calculations and assumptions used to estimate the cost of a C&D waste recycling center in the USA. Klang et al. (2003) considered environmental and social aspects besides the economic aspect to evaluate recycling feasibility in Sweden, but the data on the economic aspect were not comprehensive. Additionally, there has been relatively little research on the economic feasibility of C&D waste recycling in China. Cost
estimation models are mathematical algorithms or parametric equations used to estimate the costs of a product or project (Dean, 1995). The results are typically necessary to obtain approval to proceed, and are factored into business plans, budgets, and other financial planning and tracking mechanisms. The total costs of a product or project are divided into fixed costs (including costs for maintenance, depreciation, insurance and financing) and variable costs (including costs for labour, energy, transportation, etc.).

Investment analysis is the study of the likely return from a proposed investment, with the objective of evaluating the amount an investor may pay for it, the investment's suitability to that investor, or the feasibility of a proposed real estate development. There are various methods of investment analysis, including "cash on cash return", "payback period" (PP), "internal rate of return" (IRR), and "net present value" (NPV). Each method provides some measure of the estimated return on an investment based on various assumptions and investment horizons. For investors, there are two necessary preconditions for an investment:

• The lifetime of the main equipment must exceed the PP (the time required to recover an investment) of the investment in disposal facilities.
• The estimated profit from the IRR (the discount rate for which the total present value of future cash flows equals the cost of the investment) must exceed the opportunity cost of capital investment, with reference to the interest rate on savings.

To achieve economic feasibility of recycling, the unit recycling cost (Rc) and an acceptable unit profit (P) must be covered by the main revenue from the gate fee for recycling per ton (Gf) and the revenue of recycled materials per ton (RCp), as in the following Eq. (1):

Rc + P ≤ Gf + RCp    (1)
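To illustrate how Eq. (1) and the two investment preconditions can be checked together, here is a minimal R sketch; all figures (cash flows, costs, fees) are hypothetical placeholders, not data from this study.

# Minimal sketch (hypothetical numbers): checking Eq. (1) and the two
# investment preconditions for a candidate recycling facility.
feasible_unit <- function(Rc, P, Gf, RCp) {
  # Eq. (1): unit recycling cost plus acceptable unit profit must be
  # covered by gate fee plus revenue of recycled materials (per ton).
  Rc + P <= Gf + RCp
}

npv <- function(rate, cashflows) {
  # cashflows[1] is the investment at time 0 (negative), then yearly flows.
  sum(cashflows / (1 + rate)^(seq_along(cashflows) - 1))
}

irr <- function(cashflows) {
  # IRR: the discount rate at which NPV equals zero.
  uniroot(function(r) npv(r, cashflows), interval = c(-0.99, 1))$root
}

payback_period <- function(cashflows) {
  # First year in which the cumulative cash flow turns non-negative.
  which(cumsum(cashflows) >= 0)[1] - 1
}

cf <- c(-5e6, rep(1.2e6, 10))                     # hypothetical: 5M investment, 10 years of returns
feasible_unit(Rc = 30, P = 5, Gf = 20, RCp = 18)  # TRUE: 35 <= 38
irr(cf)                                           # compare with the savings interest rate
payback_period(cf)                                # compare with the equipment lifetime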
3. Methodology
For this study, the data collection comprised three activities:

• Data about the generation and composition of waste in Chongqing were collected from 66 construction and demolition sites through questionnaires sent to contractors, through visits to some of the municipalities and contractors, and from the Chongqing Statistical Yearbook (the Statistical Yearbook is a large annual statistical publication compiled by the Chongqing Municipal Bureau of Statistics, which covers comprehensive data on Chongqing's social and economic development each year). Data about the potential demand for C&D waste came from various literature sources, with priority given to recent literature containing extensive bibliographies, and from the Chongqing Statistical Yearbook.
• Data concerning the fixed costs, especially equipment costs and corresponding certifications of recycling facilities, were obtained from various literature references and from interviews with managers of recycling centers in the Netherlands, used as a reference, since there are no such C&D waste recycling centers in China at this moment and thus no reliable cost data available.
• Data on operating costs, such as unit labour cost, were identified with reference to corresponding prices in 2003 published by the Chongqing Price Information Center (an authorized website for various price searches).

Most of the data were collected between May and December 2007.

3.1. Scope of the study
Chongqing is a commercial and industrial center whose GDP increased at an average rate of around 10% per year from 1978 to 2006. The average growth rate of investment in construction activities was about 25.4% from 1997 to 2006. Chongqing municipality was established on March 14, 1997. Covering an urban area of 631.35 km², it had a total of 15 districts and 13.11 million urban inhabitants in 2006 (Chongqing Municipal Bureau of Statistics, 2007). Generally, the prospects for recycling C&D waste mainly depend on the following factors in a district: the scarcity of natural aggregate resources, the industrialisation level, and the population density (Li, 2008). Table 3 shows that the higher population densities of Chongqing and the Netherlands compared with China as a whole indicate a boom in potential demand for residential buildings in Chongqing and the Netherlands. The comparison of the ratio of industrial added value to regional GDP, i.e. the regional industrialisation level (Lin et al., 2009), implies that although the potential demand for non-residential buildings is not as great as nationwide, it is higher than the demand in the mature recycling market of the Netherlands. As for the scarcity of natural aggregate resources, although there are abundant natural aggregate resources in Chongqing and China compared with the Netherlands, which has no rock, national and regional regulations have begun to encourage recycling and to limit the amount of quarrying. Considering the above three factors, it can be concluded that a promising prospect for recycling of C&D waste can be achieved in Chongqing (Fig. 2).

Fig. 2. Location of Chongqing city in China.

Table 3. Similarities and differences between the Netherlands and Chongqing in regional factors influencing the recycling of C&D waste in 2006.
Population density (people/km²): China 256 (a); Chongqing 390 (a); Netherlands 483 (b).
Natural aggregates resources: China: sand; gravel (sea and riverbeds); rock/mining/mountains. Chongqing: sand; gravel (riverbeds); rock/mining/mountains. Netherlands: sand; gravel (North Sea and riverbeds); no rock/mining/mountains.
Industrialisation level (%): China 43 (c); Chongqing 26.8 (d); Netherlands 16.79 (b).
Current situation of recycling: China: mixed collection (e); manual separation; high scrap value materials like steel recycled; simple landfill (dumping); encouraging utilization of energy-saving, recycling and environmental technologies in construction materials; quantitative limitation and quality control of quarrying. Chongqing: mixed collection (f); manual separation; high scrap value materials like steel recycled; simple landfill (dumping). Netherlands: separation at source (g); a healthy market for recycled products; financial incentives, such as the landfill tax introduced by the Environmental Taxes Act; the ban on landfill.
Data sources: (a) population density of China, Chongqing and the Netherlands from National Statistic Agency of China (2007b); (b) from Statistics Netherlands (2007); (c) building industry level from National Statistic Agency of China (2007a); (d) from Chongqing Municipal Bureau of Statistics (2007); (e) current recycling situation in China from Zhao and Rotter (2008) and China Academy of Building Research (2005); (f) from investigation on construction sites and Chongqing Construction Commission (2002); (g) from Ministry of Housing, Spatial Planning and Environment (2001).

C&D waste generation and the potential demand for recycled materials (aggregates, brick, wood and metal) were calculated based on the data collected in Chongqing. To evaluate the economic feasibility of recycling centers under different scenarios (plant type, equipment and land), cost estimation (including fixed and variable costs) and investment analysis (IRR, PP and BP) were applied.
Obstacles to economic feasibility were discussed by comparing costs and revenues between Chongqing and the Netherlands.

3.2. C&D waste composition and generation rate
There are no data available about the average composition and quantity of C&D waste in Chongqing, since construction companies have until now not been obliged to record and report the qualitative and quantitative characteristics of the waste they generate. In a first approach, rough assumptions were obtained through indirect analysis. The C&D waste composition was obtained by calculating the arithmetic mean of data from 38 construction sites and 28 demolition sites. The sites covered different structures (a brick–concrete structure means that brick walls bear the load of a residential building (Li and Wang, 2005); in a frame structure, reinforced concrete or steel is used for the load-bearing beams of high-rise residential and commercial buildings; a wall structure is a frame structure with shear walls for the anti-earthquake performance of a high-rise building), with a floor area of 1.42 Mm² (construction activities of 1.24 Mm² and demolition activities of 0.18 Mm²), and the sites are located in 7 districts.

The generation rate was estimated by project managers of construction and demolition sites, based on an estimate of waste-producing activities obtained by counting the number of trucks going to landfill. The basis for estimating the generation of construction waste (Wc) and demolition waste (Wd), respectively, is the specific generation rate per activity, as expressed in Eq. (2) (a short numerical sketch follows at the end of this section):

Wc = Fb × Dc
Wd = Fd × Dd    (2)

where Fb is the floor area of buildings from the Chongqing Statistical Yearbook; Dc is the generation rate of construction waste; Fd is the floor area of demolition from interviews with governors; and Dd is the generation rate of demolition waste.

3.3. Potential demand for recycled materials
Considering the competition from industrial solid waste (coal slag and fly ash from coal-fired power plants) as road base and foundation material, their amount should be deducted from the potential demand for recycled aggregates.
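As referenced in Section 3.2 above, here is a short numerical sketch of the Eq. (2) estimates in R; the generation rates are hypothetical placeholders, and the floor areas merely echo the surveyed-site figures quoted above rather than the Yearbook data actually used in the study.

# Eq. (2): waste generation = floor area x specific generation rate.
generation <- function(floor_area_m2, rate_t_per_m2) floor_area_m2 * rate_t_per_m2

Fb <- 1.24e6   # construction floor area (m2); illustrative value
Fd <- 0.18e6   # demolition floor area (m2); illustrative value
Dc <- 0.05     # construction waste generation rate (t/m2); hypothetical
Dd <- 1.0      # demolition waste generation rate (t/m2); hypothetical

Wc <- generation(Fb, Dc)   # construction waste (t)
Wd <- generation(Fd, Dd)   # demolition waste (t)
Wc + Wd                    # total C&D waste (t)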

User Guide for the careless R Package


Package 'careless' — October 1, 2023
Type: Package
Title: Procedures for Computing Indices of Careless Responding
Version: 1.2.2
Date: 2023-09-30
Maintainer: Richard Yentes <*****************>
Description: When taking online surveys, participants sometimes respond to items without regard to their content. These types of responses, referred to as careless or insufficient effort responding, constitute significant problems for data quality, leading to distortions in data analysis and hypothesis testing, such as spurious correlations. The 'R' package 'careless' provides solutions designed to detect such careless/insufficient effort responses by allowing easy calculation of indices proposed in the literature. It currently supports the calculation of longstring, even-odd consistency, psychometric synonyms/antonyms, Mahalanobis distance, and intra-individual response variability (also termed inter-item standard deviation). For a review of these methods, see Curran (2016) <doi:10.1016/j.jesp.2015.07.006>.
License: MIT + file LICENSE
URL: https:///ryentes/careless/
BugReports: https:///ryentes/careless/issues
Imports: psych
Suggests: testthat (>= 3.0.0), knitr, rmarkdown
Encoding: UTF-8
LazyData: true
VignetteBuilder: knitr
RoxygenNote: 7.2.3
Config/testthat/edition: 3
NeedsCompilation: no
Author: Richard Yentes [cre, aut] (<https:///0000-0002-6767-8065>), Francisco Wilhelm [aut]
Repository: CRAN
Date/Publication: 2023-10-01 05:20:02 UTC

R topics documented: careless, careless_dataset, careless_dataset2, evenodd, irv, longstring, mahad, psychant, psychsyn, psychsyn_critval

careless: A package providing procedures for computing indices of careless responding

Description
Careless or insufficient effort responding in surveys, i.e. responding to items without regard to their content, is a common occurrence in surveys. These types of responses constitute significant problems for data quality, leading to distortions in data analysis and hypothesis testing, such as spurious correlations. The R package careless provides solutions designed to detect such careless/insufficient effort responses by allowing easy calculation of indices proposed in the literature. It currently supports the calculation of Longstring, Even-Odd Consistency, Psychometric Synonyms/Antonyms, Mahalanobis Distance, and Intra-individual Response Variability (also termed Inter-item Standard Deviation).

Statistical outlier function
• mahad computes Mahalanobis Distance, which gives the distance of a data point relative to the center of a multivariate distribution.

Consistency indices
• evenodd computes the Even-Odd Consistency Index. It divides unidimensional scales using an even-odd split; two scores, one for the even and one for the odd subscale, are then computed as the average response across subscale items. Finally, a within-person correlation is computed based on the two sets of subscale scores for each scale.
• psychsyn computes the Psychometric Synonyms Index, or, alternatively, the Psychometric Antonyms Index. Psychometric synonyms are item pairs which are correlated highly positively, whereas psychometric antonyms are item pairs which are correlated highly negatively. A within-person correlation is then computed based on these item pairs.
• psychant is a convenience wrapper for psychsyn that computes psychometric antonyms.
• psychsyn_critval is a helper designed to set an adequate critical value (i.e. magnitude of correlation) for the psychometric synonyms/antonyms index.

Response pattern functions
• longstring computes the longest (and optionally, average) length of consecutive identical responses
given.
• irv computes the Intra-individual Response Variability (IRV), the "standard deviation of responses across a set of consecutive item responses for an individual" (Dunn et al. 2018).

Datasets
• careless_dataset, a simulated dataset with 200 observations and 10 subscales of 5 items each.
• careless_dataset2, a simulated dataset with 1000 observations and 10 subscales of 10 items each.
The sample datasets differ in the types of careless responding simulated.

Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

careless_dataset: Simulated dataset with insufficient effort responses

Description
A simulated dataset mimicking insufficient effort responding. Contains three types of responses: (a) normal responses with answers centering around a trait/attitude value (80 percent probability per simulated observation), (b) straightlining responses (10 percent probability per simulated observation), (c) random responses (10 percent probability per simulated observation). Simulated are 10 subscales of 5 items each (= 50 variables).

Usage
careless_dataset

Format
A data frame with 200 observations (rows) and 50 variables (columns).

careless_dataset2: Simulated dataset with careless responses

Description
A simulated dataset mimicking insufficient effort responding. Contains three types of responses: (a) normal responses with answers mimicking a diligent respondent, (b) some number of longstring careless responders, (c) some number of generally careless responders. Simulated are 10 subscales of 10 items each (= 100 variables).

Usage
careless_dataset2

Format
A data frame with 1000 observations (rows) and 100 variables (columns).

evenodd: Calculates the even-odd consistency score

Description
Takes a matrix of item responses and a vector of integers representing the length of each factor. The even-odd consistency score is then computed as the within-person correlation between the even and odd subscales over all the factors.

Usage
evenodd(x, factors, diag = FALSE)

Arguments
x: a matrix of data (e.g. survey responses)
factors: a vector of integers specifying the length of each factor in the dataset
diag: optionally returns a column with the number of available (i.e., non-missing) even/odd pairs per person; useful for datasets with many missing values
Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

References
Johnson, J. A. (2005). Ascertaining the validity of individual protocols from web-based personality inventories. Journal of Research in Personality, 39, 103-129. doi:10.1016/j.jrp.2004.09.009

Examples
careless_eo <- evenodd(careless_dataset, rep(5, 10))
careless_eodiag <- evenodd(careless_dataset, rep(5, 10), diag = TRUE)

irv: Calculates the intra-individual response variability (IRV)

Description
The IRV is the "standard deviation of responses across a set of consecutive item responses for an individual" (Dunn, Heggestad, Shanock, & Theilgard, 2018, p. 108). By default, the IRV is calculated across all columns of the input data. Additionally, it can be applied to different subsets of the data. This can detect degraded response quality which occurs only in a certain section of the questionnaire (usually the end). Whereas Dunn et al. (2018) propose to mark persons with low IRV scores as outliers, reflecting straightlining responses, Marjanovic et al. (2015) propose to mark persons with high IRV scores, reflecting highly random responses (see References).

Usage
irv(x, na.rm = TRUE, split = FALSE, num.split = 3)

Arguments
x: a matrix of data (e.g. survey responses)
na.rm: logical indicating whether to calculate the IRV for a person with missing values
split: logical indicating whether to additionally calculate the IRV on subsets of columns (of equal length)
num.split: the number of subsets the data is to be split in

Author(s)
Francisco Wilhelm <**************************>

References
Dunn, A. M., Heggestad, E. D., Shanock, L. R., & Theilgard, N. (2018). Intra-individual Response Variability as an Indicator of Insufficient Effort Responding: Comparison to Other Indicators and Relationships with Individual Differences. Journal of Business and Psychology, 33(1), 105-121. doi:10.1007/s10869-016-9479-0
Marjanovic, Z., Holden, R., Struthers, W., Cribbie, R., & Greenglass, E. (2015). The inter-item standard deviation (ISD): An index that discriminates between conscientious and random responders. Personality and Individual Differences, 84, 79-83. doi:10.1016/j.paid.2014.08.021

Examples
# calculate the irv over all items
irv_total <- irv(careless_dataset)
# calculate the irv over all items + calculate the irv for each quarter of the questionnaire
irv_split <- irv(careless_dataset, split = TRUE, num.split = 4)
boxplot(irv_split$irv4)  # produce a boxplot of the IRV for the fourth quarter

longstring: Identifies the longest string of identical consecutive responses for each observation

Description
Takes a matrix of item responses and, beginning with the second column (i.e., second item), compares each column with the previous one to check for matching responses. For each observation, the length of the maximum uninterrupted string of identical responses is returned. Additionally, it can return the average length of uninterrupted strings of identical responses.

Usage
longstring(x, avg = FALSE)

Arguments
x: a matrix of data (e.g. item responses)
avg: logical indicating whether to additionally return the average length of identical consecutive responses

Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

References
Johnson, J. A. (2005). Ascertaining the validity of individual protocols from web-based personality inventories. Journal of Research in Personality, 39, 103-129. doi:10.1016/j.jrp.2004.09.009

Examples
careless_long <- longstring(careless_dataset, avg = FALSE)
careless_avg <- longstring(careless_dataset, avg = TRUE)
boxplot(careless_avg$longstr)  # produce a boxplot of the longstring index
boxplot(careless_avg$avgstr)

mahad: Find and
graph Mahalanobis Distance (D) and flag potential outliers

Description
Takes a matrix of item responses and computes Mahalanobis D. Can additionally return a vector of binary outlier flags. Mahalanobis distance is calculated using the function psych::outlier of the psych package, an implementation which supports missing values.

Usage
mahad(x, plot = TRUE, flag = FALSE, confidence = 0.99, na.rm = TRUE)

Arguments
x: a matrix of data
plot: plot the resulting QQ graph
flag: flag potential outliers using the confidence level specified in parameter confidence
confidence: the desired confidence level of the result
na.rm: should missing data be deleted

Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

References
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455. doi:10.1037/a0028085

See Also
psych::outlier, on which this function is based.

Examples
mahad_raw <- mahad(careless_dataset)  # only the distances themselves
mahad_flags <- mahad(careless_dataset, flag = TRUE)  # additionally flag outliers
mahad_flags <- mahad(careless_dataset, flag = TRUE, confidence = 0.999)  # apply a strict criterion

psychant: Computes the psychometric antonym score

Description
A convenient wrapper that calls psychsyn with argument anto = TRUE to compute the psychometric antonym score.

Usage
psychant(x, critval = -0.6, diag = FALSE)

Arguments
x: a matrix of item responses
critval: the minimum magnitude of the correlation between two items in order for them to be considered psychometric antonyms; defaults to -.60
diag: additionally return the number of item pairs available for each observation; useful if the dataset contains many missing values

Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

See Also
psychsyn for the main function, psychsyn_critval for a helper that allows to set an adequate critical value for the size of the correlation.

Examples
antonyms <- psychant(careless_dataset2, .50)
antonyms <- psychant(careless_dataset2, .50, diag = TRUE)

psychsyn: Computes the psychometric synonym/antonym score

Description
Takes a matrix of item responses and identifies item pairs that are highly correlated within the overall dataset. What defines "highly correlated" is set by the critical value (e.g., r > .60). Each respondent's psychometric synonym score is then computed as the within-person correlation between the identified item pairs. Alternatively computes the psychometric antonym score, which is a variant that uses item pairs that are highly negatively correlated.

Usage
psychsyn(x, critval = 0.6, anto = FALSE, diag = FALSE, resample_na = TRUE)

Arguments
x: a matrix of item responses
critval: the minimum magnitude of the correlation between two items in order for them to be considered psychometric synonyms; defaults to .60
anto: determines whether psychometric antonyms are returned instead of psychometric synonyms; defaults to FALSE
diag: additionally return the number of item pairs available for each observation; useful if the dataset contains many missing values
resample_na: if psychsyn returns NA for a respondent, resample to attempt getting a non-NA result

Author(s)
Richard Yentes <*****************>, Francisco Wilhelm <**************************>

References
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455. doi:10.1037/a0028085

See Also
psychant for a more concise way to calculate the psychometric antonym score, psychsyn_critval for a helper that allows to set an adequate critical value for the size of the
correlation.

Examples
synonyms <- psychsyn(careless_dataset, .60)
antonyms <- psychsyn(careless_dataset2, .50, anto = TRUE)
antonyms <- psychant(careless_dataset2, .50)
# with diagnostics
synonyms <- psychsyn(careless_dataset, .60, diag = TRUE)
antonyms <- psychant(careless_dataset2, .50, diag = TRUE)

psychsyn_critval: Compute the correlations between all possible item pairs and order them by the magnitude of the correlation

Description
A function intended to help finding adequate critical values for psychsyn and psychant. Takes a matrix of item responses and returns a data frame giving the correlations of all item pairs, ordered by the magnitude of the correlation.

Usage
psychsyn_critval(x, anto = FALSE)

Arguments
x: a matrix of item responses
anto: ordered by the largest positive correlation, or, if anto = TRUE, the largest negative correlation

Author(s)
Francisco Wilhelm <**************************>

See Also
After determining an adequate critical value, continue with psychsyn and/or psychant.

Examples
psychsyn_cor <- psychsyn_critval(careless_dataset)
psychsyn_cor <- psychsyn_critval(careless_dataset, anto = TRUE)
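Taken together, the functions documented above can be combined into a simple screening workflow. The sketch below uses only the package's own functions and bundled dataset; the cutoffs are arbitrary illustrative choices, not recommendations from the package authors.

library(careless)

# Compute several careless-responding indices on the bundled dataset.
long_s <- longstring(careless_dataset)                     # longest identical run per respondent
irv_s  <- irv(careless_dataset)                            # intra-individual response variability
md_s   <- mahad(careless_dataset, plot = FALSE)            # Mahalanobis distances
eo_s   <- evenodd(careless_dataset, factors = rep(5, 10))  # even-odd consistency

# Illustrative screening rule (cutoffs are arbitrary, for demonstration only):
# flag respondents who look suspicious on at least two indices.
flags <- (long_s >= 10) +                 # long identical runs suggest straightlining
         (irv_s < 0.5) +                  # very low variability also suggests straightlining
         (md_s > quantile(md_s, 0.99)) +  # multivariate outliers
         (eo_s < 0)                       # inconsistent even/odd subscale answers
suspect <- which(flags >= 2)
head(suspect)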

Computer Graduation Project 66: A General Sales Management System in Delphi


Graduation Project: "Sales Management System". Department ______  Major ______  Class ______  Name ______  Date: ____ (year/month/day)

Chinese Abstract
The sales management system provides sales managers in enterprises and institutions with sufficient information and a fast query platform. Its development mainly covers two aspects: the establishment and maintenance of the back-end database, and the development of the front-end application.

Using DELPHI 6.0 and the various object-oriented development tools it provides, a database with strong integrity and good security is built, and a fully functional, easy-to-use application is developed.

After debugging, compilation and implementation, the program has a friendly interface and a plain programming style; it is attractive, convenient and easy to use.

In particular, the functionality of the system's "Query and Statistics" module greatly reduces the staff's workload, and replaces manual operation with speed and accuracy, improving the efficiency of sales management.

Keywords: DELPHI 6.0; information management system; database; module; sales management system

Abstract
The sales management system provides sales managers in enterprises and institutions with sufficient information and a fast query platform. Its development mainly covers two aspects: the establishment and maintenance of the back-end database, and the development of the front-end application. Using Delphi 6.0 and its object-oriented development tools, a database with strong integrity and good security is established, and a fully functional, easy-to-use application is developed. After debugging, compilation and implementation, the program has a friendly interface and a simple programming style, and is attractive, convenient and user-friendly. In particular, the "Query and Statistics" module greatly reduces the workload of staff and replaces manual operation with speed and accuracy, enhancing sales management efficiency.
Keywords: Delphi 6.0; information management system; database; module; sales management system

Contents
Introduction
1 Overview of the Delphi language
1.1 Introduction to Delphi
1.2 Introduction to database systems
1.3 Basic introduction to this application
2 Structure and development steps of this application
2.1 Feasibility study
2.1.1 Economic feasibility
2.1.2 Schedule feasibility
2.1.3 Technical feasibility
2.1.4 Social feasibility
2.2 Database creation and connection
2.3 Overall system design
2.4 Detailed system design
3 Technical implementation and specific functions of this program
3.1 Design and implementation of the login interface
3.2 Design and implementation of the sales management interface and code
3.3 Design and implementation of the query and statistics interface and code
3.4 Design of the chart analysis interface and code
3.5 Design of the main interface and code
Conclusion
Acknowledgements
References

Introduction
With the development of the economy and the progress of society, computers have penetrated ever deeper into our daily work, study and life, and have become an indispensable auxiliary tool in everyday life.

Microsoft Dynamics AX 2012 R3 Retail and eCommerce Licensing Guide


Using This Guide
Use this guide to improve your understanding of how to license Microsoft Dynamics AX 2012 R3 for Retail and eCommerce scenarios. It is not intended to guide you in choosing Microsoft Dynamics products and services. The examples presented in this guide are illustrative. Microsoft Corporation reserves the right to review and/or update the existing version without previous notice.
In order to understand this document, it is essential that you first read and understand the Microsoft Dynamics AX 2012 R3 Licensing Guide.
For help determining the right technology solution for any given organization, including the license requirements for a specific product or scenario, consult with your Microsoft Dynamics Certified Partner or your Microsoft account team. This guide does not supersede or replace any of the legal documentation covering use rights for Microsoft products. Specific product license terms are detailed in the Software License Terms document, which is available on the Microsoft Dynamics AX website.

Contents
Licensing the Microsoft Dynamics AX 2012 R3 Solution
Types of Licensing Models
Retail Licensing Scenarios
Scenario 1: Traditional Store (Brick and Mortar)
Scenario 2: Mobile POS
Scenario 3: eCommerce

Licensing the Microsoft Dynamics AX 2012 R3 Solution
Microsoft Dynamics AX 2012 R3 introduces new capabilities for retail and eCommerce configurations. Below is a summary of some of these improvements; additional product details can be found here.
• Modern Point of Sale (POS), assisted sales and centralized store management
• eCommerce and Social integration
• Omni-channel management
• Order management, processing and payment
• Merchandizing and catalog management
This new release provides the perfect opportunity to introduce a new and simplified pricing and licensing model specifically designed for retail and eCommerce configurations. We are providing guidance for three separate licensing configurations, recognizing that some customers may have a mix of these configurations:
• Traditional Store (Brick and Mortar stores)
• Mobile POS
• eCommerce
This brief focuses on the required licenses for such Retail and eCommerce specific configurations. In any of these scenarios, you must still be properly licensed for the underlying Microsoft Dynamics AX 2012 R3. For additional Microsoft Dynamics AX 2012 R3 licensing details, please refer to this link.

Types of Licensing Models
The Microsoft Dynamics AX 2012 R3 licensing utilizes the Server + Client Access License (CAL) model. This same model will be used for the Traditional Store and Mobile POS solutions. With the Microsoft Dynamics AX 2012 R3 eCommerce solution, we will be introducing a Per Core licensing model. Below are high-level descriptions of these two licensing models.

Server + CAL Licensing Explanation
For the Microsoft Dynamics AX 2012 R3 solution you need to license the Server plus CALs:
• Microsoft Dynamics AX 2012 R3 solution functionality is licensed through the Microsoft Dynamics AX 2012 R3 Server license. Each running instance of the Microsoft Dynamics AX 2012 R3 Server software requires a Server license.
• Direct or indirect access to the Microsoft Dynamics AX 2012 R3 solution functionality by users or devices is licensed through CALs. Every user or device accessing the solution functionality—whether directly or indirectly—must be covered by a CAL.

Figure 1: Server + CAL Licensing

Please note that this traditional Server/CAL model will be utilized for the Traditional Store and Mobile POS scenarios.
See below for specific scenario descriptions.

Per Core Licensing Explanation
eCommerce Servers are licensed based on computing power, as measured by processing cores. Core-based licensing provides a more precise measure of computing power than processors and a more consistent licensing metric, regardless of whether solutions are deployed on physical servers on-premises, or in virtual or cloud environments.
Under the Per Core licensing model, each eCommerce Server must be assigned an appropriate number of Microsoft Dynamics AX 2012 R3 Standard Commerce Core licenses. The number of core licenses needed depends on whether you are licensing the physical server or individual virtual Operating System Environments (OSEs).
Note: Microsoft Dynamics AX 2012 R3 Standard Commerce Core licenses are sold in packs of two.
You have the following two options for licensing under the per core licensing model (a small arithmetic sketch follows this list):
• Individual Virtual Operating System Environment (OSE). You can license based on individual virtual OSEs within the servers that are running the server software. If you choose this option, for each virtual OSE in which you run the server software, you need a number of licenses equal to the number of virtual cores in the virtual OSE, subject to a minimum requirement of four licenses per virtual OSE. In addition, if any of these virtual cores is at any time mapped to more than one hardware thread, you need a license for each additional hardware thread mapped to that virtual core. Those licenses count toward the minimum requirement of four licenses per virtual OSE.
• Physical Cores on a Server. You can license based on all of the physical cores on the server. If you choose this option, the number of licenses required equals the number of physical cores on the server multiplied by the applicable core factor located in the Core Factor Table.
For more information about this licensing model, refer to the "Introduction to Per Core Licensing and Basic Definitions" Volume Licensing Brief.
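As a small arithmetic illustration of the counting rules just described, here is a hedged R sketch; the core counts are invented, and actual core factors must be taken from Microsoft's Core Factor Table.

# Hypothetical illustration of the per-core counting rules described above.
licenses_per_vose <- function(virtual_cores, extra_hw_threads = 0) {
  # Minimum of four licenses per virtual OSE; extra hardware threads
  # mapped to a virtual core each need a license and count toward the minimum.
  max(4, virtual_cores + extra_hw_threads)
}

licenses_physical <- function(physical_cores, core_factor) {
  # Physical option: all physical cores times the applicable core factor.
  physical_cores * core_factor
}

packs_needed <- function(licenses) ceiling(licenses / 2)  # sold in packs of two

licenses_per_vose(2)                     # 4 (the minimum applies)
licenses_per_vose(8)                     # 8
licenses_physical(16, core_factor = 1)   # 16, with a hypothetical core factor of 1
packs_needed(licenses_physical(16, 1))   # 8 packs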
Retail Licensing Scenarios
The scenarios below help to illustrate how to license Microsoft Dynamics AX 2012 R3 in three common retail scenarios: a Traditional Store, Mobile POS and an eCommerce site. While we are providing guidance for three separate licensing configurations, we recognize that some customers may have a mix of these configurations.
Note that these illustrations are intended to provide a conceptual understanding of the licensing policies. They do not serve as actual deployment diagrams. For instance, where a single server is shown to illustrate the need for Server licenses, an actual solution deployment will likely require multiple servers running instances of the software, and thus require more licenses.

Scenario 1: Traditional Store (Brick and Mortar)
In this scenario, the Microsoft Dynamics AX 2012 R3 solution is run on central servers at headquarters. Any servers running instances of the Microsoft Dynamics AX 2012 R3 software require one Server license per running instance.
Each store location will need to license a Microsoft Dynamics AX 2012 R3 Store Server, which provides access to the following new Microsoft Dynamics AX 2012 R3 capabilities:
• Local caching for offline use of data
• Centralization of POS in the store
• Local management of items such as promotions
Store devices and employees accessing the Microsoft Dynamics AX 2012 R3 solution functionality require CALs as defined in the Microsoft Dynamics AX 2012 R3 Product Use Rights (PUR).

Figure 2: Traditional Store Configuration

Scenario 2: Mobile POS
In this scenario, a retail company has Mobile POS devices connect directly to the central Microsoft Dynamics AX 2012 R3 solution running at the company headquarters. As always, any servers running instances of the Microsoft Dynamics AX 2012 R3 software require one Server license per running instance. These users and devices require CALs to access the solution functionality and should be licensed in accordance with the Product Use Rights (PUR).

Figure 3: Mobile POS

Scenario 3: eCommerce
In this scenario, the Microsoft Dynamics AX 2012 R3 solution is run on central servers at headquarters. Any servers running instances of the Microsoft Dynamics AX 2012 R3 software require one Server license per running instance.
The Microsoft Dynamics AX 2012 R3 Standard Commerce Core Server should be licensed for all eCommerce scenarios. Each Microsoft Dynamics AX 2012 R3 Standard Commerce Core Server must be assigned an appropriate number of Microsoft Dynamics AX 2012 R3 Standard Commerce Core licenses, as explained above.
Store devices and employees accessing the Microsoft Dynamics AX 2012 R3 solution functionality require CALs as defined in the Microsoft Dynamics AX 2012 R3 Product Use Rights (PUR). External users (customers) do not require CALs.

Figure 4: eCommerce

© 2014 Microsoft Corporation. All rights reserved. This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS DOCUMENT. This information is provided to help guide your authorized use of products you license; it is not your agreement. Your use of products licensed under your volume license agreement is governed by the terms and conditions of that agreement. In the case of any conflict between this information and your agreement, the terms and conditions of your agreement control. Prices for licenses acquired through Microsoft resellers are determined by the reseller.

Compiled Answers to Econometrics Textbook Exercises


Econometrics Exercises

Chapter 1: Introduction

I. Multiple-choice questions
1. Two main types of data are commonly used in econometric research: one is time series data, the other is [B].
A: aggregate data  B: cross-sectional data  C: average data  D: relative data
2. Cross-sectional data refers to [A].
A: data composed of the same statistical indicator for different statistical units at the same point in time
B: data composed of the same statistical indicator for the same statistical units at the same point in time
C: data composed of different statistical indicators for the same statistical units at the same point in time
D: data composed of different statistical indicators for different statistical units at the same point in time
3. Which of the following is cross-sectional data? [D]
A: the average industrial output value of 20 townships in a region for each year from 1991 to 2003
B: the industrial output value of each of 20 townships in a region for each year from 1991 to 2003
C: the total industrial output value of 20 townships in a region in a given year
D: the industrial output value of each of 20 townships in a region in a given year
4. A data series in which the same statistical indicator is recorded in time order is called [B].
A: cross-sectional data  B: time series data  C: smoothed data  D: raw data
5. In regression analysis, it is assumed that [B].
A: both the explanatory variable and the explained variable are random
B: the explanatory variable is non-random and the explained variable is random
C: both the explanatory variable and the explained variable are non-random
D: the explanatory variable is random and the explained variable is non-random

II. Fill in the blanks
1. Econometrics is a branch of economics consisting of the techniques, methods and related theory for quantitative empirical research on economic problems; it can be understood as the combination of mathematics, statistics and economics.
2. Modern econometrics has formed three pillars: single-equation regression analysis, simultaneous-equations models, and time series analysis.
3. The most basic method of classical econometrics is regression analysis.
4. The basic steps of econometric analysis are: statement of the theory (or hypothesis), specification of the econometric model, data collection, estimation of the model parameters, testing and model revision, and prediction and policy analysis.
5. The three commonly used types of sample data are cross-sectional data, time series data and panel data.
6. The relationships between economic variables include: no correlation, correlation, causality, mutual influence, and identity.

III. Short-answer questions
1. What is econometrics? How is it related to statistics?
Econometrics is the quantitative empirical study of economic regularities, covering prediction, testing and other tasks. Econometrics is a form of quantitative analysis; it is a branch of economics whose subject matter is the explanation of the quantitative relationships that objectively exist in economic activity.
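To make the basic steps listed above concrete (specify, estimate, test, predict), here is a minimal R sketch on simulated data; the data-generating process is invented purely for illustration.

set.seed(1)
# 1. Theory: consumption depends linearly on income (hypothetical DGP).
income <- runif(100, 10, 50)
consumption <- 2 + 0.8 * income + rnorm(100, sd = 2)

# 2-4. Model specification, data, and parameter estimation by OLS.
fit <- lm(consumption ~ income)

# 5. Testing: coefficient t-tests and overall fit.
summary(fit)

# 6. Prediction.
predict(fit, newdata = data.frame(income = c(20, 40)))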

An Evaluation of Multiple Comparison Methods for Rates


Master's Thesis, Xinjiang Medical University
An Evaluation of Multiple Comparison Methods for Rates
Name: Wu Suhe. Degree sought: Master. Specialty: Epidemiology and Health Statistics. Supervisor: Xue Qian. April 2010.

Abstract
Evaluation of multiple comparison methods for rates. Postgraduate: Wu Suhe. Supervisor: Prof. Xue Qian.

Objective: Addressing the problem of multiple comparisons of sample rates, which is frequently encountered in medical data processing, five common and representative testing methods were selected from the 30 established methods for multiple comparisons of sample rates: the Bonferroni method, the Du Yangzhi method, the SNK-Zar method, the Benjamini-Hochberg method and the Bootstrap method. Their conditions of applicability were explored and evaluated, to provide a reference basis for choosing a multiple comparison method for sample rates in practical work.

Methods: Using the Monte Carlo method, samples were simulated on the basis of the binomial distribution according to predetermined parameter combinations; sampling was repeated 10,000 times for each parameter combination.

The five common multiple comparison methods were programmed in SAS 9.1 and applied to the sampled data for statistical inference; their results under the various parameter combinations were evaluated using three criteria: the family-wise Type I error rate, the power of the test, and the false discovery rate.
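The thesis implemented this design in SAS 9.1. Purely to illustrate the same simulation logic in an open tool, here is a compact R sketch; the group count, rates and adjustment method are hypothetical placeholders, and only two of the five methods (Bonferroni and Benjamini-Hochberg) are available through base R's p.adjust.

set.seed(2010)
k <- 4; n <- 50                      # number of groups, sample size per group
p_true <- c(0.3, 0.3, 0.3, 0.5)      # hypothetical rates; one group differs

sim_once <- function(method) {
  x <- rbinom(k, n, p_true)
  pairs <- combn(k, 2)
  # Pairwise two-proportion tests for all group pairs.
  p <- apply(pairs, 2, function(ij)
    prop.test(x[ij], c(n, n), correct = FALSE)$p.value)
  p_adj <- p.adjust(p, method = method)     # e.g. "bonferroni" or "BH"
  truth <- apply(pairs, 2, function(ij) p_true[ij[1]] != p_true[ij[2]])
  c(any_false_pos = any(p_adj[!truth] <= 0.05),   # contributes to the FWER estimate
    power         = mean(p_adj[truth] <= 0.05))
}

res <- replicate(10000, sim_once("BH"))
rowMeans(res)   # estimated family-wise Type I error rate and average power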

Results: The Bonferroni method is simple to use, but its results are conservative when the number of groups compared is large. The Du Yangzhi method cannot control the false discovery rate well when the number of groups and the sample size are large. The SNK-Zar method cannot strictly control the family-wise Type I error rate at the 0.05 level when the number of groups is 4 or 5. The Benjamini-Hochberg method controls both the family-wise Type I error rate and the false discovery rate well. The Bootstrap method is relatively stable under all conditions, but is not the best.

Conclusion: When strict control of the family-wise Type I error rate is required, the Bonferroni method can be chosen; when higher power is required, the Bootstrap method can be chosen; and when strict control of the false discovery rate is required, the Benjamini-Hochberg method can be chosen.

Keywords: multiple comparisons; family-wise Type I error rate; false discovery rate

Master's Thesis in Medicine, Xinjiang Medical University
Comparative Estimation Study on Multiple Comparison Procedures
Postgraduate: Wu Suhe. Tutor: Prof. Xue Qian

Abstract
Objectives: Comparison is a common problem in medical data processing, and comparing several rates relies on multiple comparison procedures. Although more than five methods have been proposed, there is at present no consensus in biostatistics on which one is worth advocating. Therefore, exploring the advantages and disadvantages of each method by simulation is the main task of this research.
Methods: Five methods were studied: Bonferroni, Du, SNK-Zar, Benjamini-Hochberg, and Bootstrap. We systematically varied the number of hypotheses, the proportion of false null hypotheses, and the sample size in the data simulations, and examined the effect of each of these factors on the FWER, the FDR, and power. Each condition was repeated 10,000 times.
Results: The Bonferroni method is easy to use, but its results are too conservative; the Du method does not control the FDR at the desired 0.05 level; the Bootstrap method can be used widely, but its results are also conservative.
Conclusion: Unfortunately, a uniformly "best" method among those examined does not exist; an appropriate method should be selected according to the actual demands.
Keywords: Multiple Comparison; Family-wise Type I Error Rate; False Discovery Rate

Declaration of Originality
I declare that the thesis submitted is the result of research work carried out by me personally under the guidance of my supervisor.

R.P.S. Corporation 34 Series Product Manual


SIDE BROOM MOTOR ASSY P/N 4-425
34 Parts Manual V3.1
Item  Part No.  Description
1     4-111     MAIN BODY
2     4-113     R. INNER SIDE PANEL
3     4-220     SIDE BROOM MOTOR
4     4-223     BROOM SAFETY DRIVE CPLG
5     4-376     BROOM LIFT LEVER
6     4-378     BROOM LIFT SHAFT
7     4-381     LIFT LEVER BASE, RT
8     4-385     LIFT LEVER COLLAR
9     4-386     LIFT LEVER SPRING
10    4-402     SIDE BROOM POLYPROPYLENE
11    4-404     SIDE BROOM CLUTCH PLATE
12    4-436     ADJ. BROOM LIFT
13              BROOM MOUNTING BLOCK

Further descriptions from the same table (item and part numbers not legible in this copy): SIDE BROOM ARM; BROOM RETAINING PLATE; BROOM MOTOR MOUNT; BROOM SHAFT DRIVE TUBE; BEARING; GROMMET 3/8 ID X 1-1/2 OD X .375; HANDLE GRIP; SIDE BROOM LONG LINK; ADJUSTMENT KNOB; RIGHT FLOOR SEAL; RH SEAL STRIP; 3/8" X 1 1/4" SPRING PIN; 1/4" X 1 1/2" CLEVIS PIN; 1/4" X 1 1/4" SPRING PIN; FB CASHD 1/4"-20 X 1/2"; 1/2" X 1" CLEVIS PIN; HCN 1/4"-20; SSS CP 1/4"-20 X 3/8"; NYLOK #10-32; NYLOK 1/4"-20; HITCH PIN CLIP; HCS 1/4"-20 X 1/2" SS; HN 5/16"-18 SS; NYLOK 5/16"-18 SS; FW 5/8 ID X 1-1/2 OD SS; LW 1/4" SS; PPH #10-32 X 5/8" SS; FHP 1/4"-20 X 1-3/4"; PPH SMS #8 X 3/4" SS; #4 HAIR PIN; HCS M6-1.0 X 14 SS; 3/8" X 1 1/2" CLEVIS PIN

IBM Cognos Transformer V11.0 User Guide

Dimensional Modeling Workflow
  Analyzing Your Requirements and Source Data
  Preprocessing Your ...
  Building a Prototype
  Refining Your Model
  Diagnose and Resolve Any Design Problems

How to Use the Stata Store


What is the Stata Store
The Stata Store is an online resource center launched by StataCorp that offers Stata users a variety of downloadable enhancement packages, datasets and graphics. These resources can help users work more efficiently with Stata for data analysis and statistical modeling.

How to access the Stata Store
Users can access the Stata Store in the following ways:
1. Directly from the command line within Stata: . stata store browse
2. Through the Stata Store page on the official Stata website.

Main features of the Stata Store
The Stata Store provides users with the following main features:

1. Package downloads
On the Stata Store, users can find a variety of enhancement packages, including statistical tools, graph templates, data-processing tools and more. Users can click on a package name to open its details page, and then click the download button to obtain the desired package.

2. Dataset downloads
The Stata Store also offers a large number of free datasets for download. Users can search and filter on the dataset page according to their needs, and then click the download button to obtain the corresponding dataset files.

3. Graph template downloads
Users can find a variety of ready-made graph templates on the Stata Store's graph template page, such as scatter plots, bar charts and line charts. Users can choose a suitable template according to their research needs, and then click the download button to obtain the template file.

4. Stata graph command examples
The graph command examples page of the Stata Store provides examples of common graph commands, such as drawing bar charts, line charts and scatter plots. By browsing the examples page, users can learn from and adapt the usage of various Stata graph commands.

5. Purchasing Stata software and books
The Stata Store also provides the ability to purchase Stata software and related books. Users can select the products they need on the software and book pages, and then click the purchase button to buy them.

Conclusion
Through the Stata Store, users can not only obtain various enhancement packages, datasets and graph templates, but also learn from and adapt the usage of Stata graph commands.


False Discovery Rates
John D. Storey
Princeton University, Princeton, USA
January 2010

Multiple Hypothesis Testing
In hypothesis testing, statistical significance is typically based on calculations involving p-values and Type I error rates. A p-value calculated from a single statistical hypothesis test can be used to determine whether there is statistically significant evidence against the null hypothesis. The upper threshold applied to the p-value in making this determination (often 5% in the scientific literature) determines the Type I error rate; i.e., the probability of making a Type I error when the null hypothesis is true.
Multiple hypothesis testing is concerned with testing several statistical hypotheses simultaneously. Defining statistical significance is a more complex problem in this setting. A longstanding definition of statistical significance for multiple hypothesis tests involves the probability of making one or more Type I errors among the family of hypothesis tests, called the family-wise error rate. However, there exist other well established formulations of statistical significance for multiple hypothesis tests. The Bayesian framework for classification naturally allows one to calculate the probability that each null hypothesis is true given the observed data (Efron et al. 2001, Storey 2003), and several frequentist definitions of multiple hypothesis testing significance are also well established (Shaffer 1995).
Soric (1989) proposed a framework for quantifying the statistical significance of multiple hypothesis tests based on the proportion of Type I errors among all hypothesis tests called statistically significant. He called statistically significant hypothesis tests discoveries and proposed that one be concerned about the rate of false discoveries when testing multiple hypotheses (a false discovery, a Type I error, and a false positive are all equivalent; whereas the false positive rate and the Type I error rate are equal, the false discovery rate is an entirely different quantity). This false discovery rate is robust to the false positive paradox and is particularly useful in exploratory analyses, where one is more concerned with having mostly true findings among a set of statistically significant discoveries rather than guarding against one or more false positives. Benjamini & Hochberg (1995) provided the first implementation of false discovery rates with known operating characteristics.
The idea of quantifying the rate of false discoveries is directly related to several pre-existing ideas, such as Bayesian misclassification rates and the positive predictive value (Storey 2003).

Applications
In recent years, there has been a substantial increase in the size of data sets collected in a number of scientific fields, including genomics, astrophysics, brain imaging, and spatial epidemiology. This has been due in part to an increase in computational abilities and the invention of various technologies, such as high-throughput biological devices. The analysis of high-dimensional data sets often involves performing simultaneous hypothesis tests on each of thousands or millions of measured variables. Classical multiple hypothesis testing methods utilizing the family-wise error rate were developed for performing just a few tests, where the goal is to guard against any single false positive occurring. However, in the high-dimensional setting, a more common goal is to identify as many true positive findings as possible, while incurring a relatively low number of false positives. The false discovery rate is designed to quantify this type of trade-off, making it particularly useful for performing many hypothesis tests on high-dimensional data sets.
Hypothesis testing in high-dimensional genomics data sets has been particularly influential in increasing the popularity of false discovery rates (Storey & Tibshirani 2003). For example, DNA microarrays measure the expression levels of thousands of genes from a single biological sample. It is often the case that microarrays are applied to samples collected from two or more biological conditions, such as from multiple treatments or over a time course. A common goal in these studies is to identify genes that are differentially expressed among the biological conditions, which involves performing a hypothesis test on each gene. In addition to incurring false positives, failing to identify truly differentially expressed genes is a major concern, leading to the false discovery rate being in widespread use in this area.

Mathematical Definitions
Although multiple hypothesis testing with false discovery rates can be formulated in a very general sense (Storey 2007, Storey et al. 2007), it is useful to consider the simplified case where m hypothesis tests are performed with corresponding p-values p_1, p_2, ..., p_m. The typical procedure is to call null hypotheses statistically significant whenever their corresponding p-values are less than or equal to some threshold, t ∈ (0, 1]. This threshold can be fixed or data-dependent, and the procedure for determining the threshold involves quantifying a desired error rate. Table 1 describes the various outcomes that occur when applying this approach to determining which of the m hypothesis tests are statistically significant. Specifically, V is the number of Type I errors (equivalently false positives or false discoveries) and R is the total number of significant null hypotheses (equivalently total discoveries). The family-wise error rate (FWER) is defined to be

FWER = Pr(V ≥ 1),

and the false discovery rate (FDR) is usually defined to be (Benjamini & Hochberg 1995):

FDR = E[V / (R ∨ 1)] = E[V/R | R > 0] · Pr(R > 0).

Table 1: Possible outcomes from m hypothesis tests based on applying a significance threshold t ∈ (0, 1] to their corresponding p-values.
                   Not Significant (p-value > t)   Significant (p-value ≤ t)   Total
Null True                        U                            V                 m_0
Alternative True                 T                            S                 m_1
Total                            W                            R                  m

The effect of "R ∨ 1" in the denominator of the first expectation is to set V/R = 0 when R = 0. As demonstrated in Benjamini & Hochberg (1995), the FDR offers a less strict multiple testing criterion than the FWER, which is more appropriate for some applications. Two other false discovery rate definitions have been proposed in the literature, where the main difference is in how the R = 0 event is handled. These quantities are called the positive false discovery rate (pFDR) and the marginal false discovery rate (mFDR), and they are defined as follows (Storey 2003, Storey 2007):

pFDR = E[V/R | R > 0],
mFDR = E[V] / E[R].

Note that pFDR = mFDR = 1 whenever all null hypotheses are true, whereas the FDR can always be made arbitrarily small because of the extra term Pr(R > 0). Some have pointed out that this extra term in the FDR definition may lead to misinterpreted results, and that pFDR or mFDR offer more scientifically relevant values (Zaykin et al. 1998, Storey 2003); others have argued that FDR is preferable because it allows for the traditional strong control criterion to be met (Benjamini & Hochberg 1995). All three quantities can be utilized in practice, and they are all similar when the number of hypothesis tests is particularly large.

Control and Estimation
There are two approaches to utilizing false discovery rates in a conservative manner when determining multiple testing significance. One approach is to fix the acceptable FDR level beforehand, and find a data-dependent thresholding rule so that the expected FDR of this rule over repeated studies is less than or equal to the pre-chosen level. This property is called FDR control (Shaffer 1995, Benjamini & Hochberg 1995). Another approach is to fix the p-value threshold at a particular value and then form a point estimate of the FDR whose expectation is greater than or equal to the true FDR at that particular threshold (Storey 2002). The latter approach has been useful in that it places multiple testing in the more standard context of point estimation, whereas the derivation of algorithms in the former approach may be less tractable. Indeed, it has been shown that the point estimation approach provides a comprehensive and unified framework (Storey et al. 2004).
For the first approach, Benjamini & Hochberg (1995) proved that the following algorithm for determining a data-based p-value threshold controls the FDR at level α when the p-values corresponding to true null hypotheses are independent and identically distributed (i.i.d.) Uniform(0,1).
Other p-value threshold determining algorithms for FDR control have been subsequently studied (e.g., Benjamini & Liu 1999). This algorithm was originally introduced by Simes (1986) to control the FWER when all p-values are independent and all null hypotheses are true, although it also provides control of the FDR for any configuration of true and false null hypotheses.

FDR Controlling Algorithm (Simes 1986; Benjamini & Hochberg 1995)
1. Let p_(1) ≤ ... ≤ p_(m) be the ordered, observed p-values.
2. Calculate k̂ = max{1 ≤ k ≤ m : p_(k) ≤ α·k/m}.
3. If k̂ exists, then reject the null hypotheses corresponding to p_(1) ≤ ... ≤ p_(k̂). Otherwise, reject nothing.

To formulate the point estimation approach, let FDR(t) denote the FDR when calling null hypotheses significant whenever p_i ≤ t, for i = 1, 2, ..., m. For t ∈ (0, 1], we define the following stochastic processes based on the notation in Table 1:

V(t) = #{true null p_i : p_i ≤ t},
R(t) = #{p_i : p_i ≤ t}.

In terms of these, we have

FDR(t) = E[V(t) / (R(t) ∨ 1)].

For fixed t, Storey (2002) provided a family of conservatively biased point estimates of FDR(t):

FDR̂(t) = m̂_0(λ)·t / [R(t) ∨ 1].

The term m̂_0(λ) is an estimate of m_0, the number of true null hypotheses. This estimate depends on the tuning parameter λ, and it is defined as

m̂_0(λ) = [m − R(λ)] / (1 − λ).

It can be shown that E[m̂_0(λ)] ≥ m_0 when the p-values corresponding to the true null hypotheses are Uniform(0,1) distributed (or stochastically greater). There is an inherent bias/variance trade-off in the choice of λ. In most cases, when λ gets smaller, the bias of m̂_0(λ) gets larger, but the variance gets smaller. Therefore, λ can be chosen to try to balance this trade-off.
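The two approaches above are easy to express in code. The following is an illustrative R sketch written for this article's notation, not the author's own software; in practice, p.adjust(p, method = "BH") and the qvalue package implement the same ideas.

# p: vector of p-values; alpha: target FDR level.
bh_reject <- function(p, alpha = 0.05) {
  m <- length(p)
  o <- order(p)
  k <- which(sort(p) <= alpha * seq_len(m) / m)   # step 2: p_(k) <= alpha*k/m
  if (length(k) == 0) return(integer(0))          # k-hat does not exist: reject nothing
  o[seq_len(max(k))]                              # reject hypotheses with p_(1) ... p_(k-hat)
}

m0_hat <- function(p, lambda = 0.5) {
  # m0-hat(lambda) = (m - R(lambda)) / (1 - lambda)
  (length(p) - sum(p <= lambda)) / (1 - lambda)
}

fdr_hat <- function(p, t, lambda = 0.5) {
  # FDR-hat(t) = m0-hat(lambda) * t / (R(t) v 1)
  m0_hat(p, lambda) * t / max(sum(p <= t), 1)
}

# Toy data: 900 null p-values and 100 p-values concentrated near zero.
set.seed(1)
p <- c(runif(900), rbeta(100, 1, 20))
length(bh_reject(p, alpha = 0.05))   # number of rejections by the BH algorithm
fdr_hat(p, t = 0.01)                 # estimated FDR at the fixed threshold t = 0.01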
Bayesian Derivation

The pFDR has been shown to be exactly equal to a Bayesian-derived quantity measuring the probability that a significant test is a true null hypothesis. Suppose that (a) H_i = 0 or 1 according to whether the i-th null hypothesis is true or not, (b) H_i ~ i.i.d. Bernoulli(1 − π0), so that Pr(H_i = 0) = π0 and Pr(H_i = 1) = 1 − π0, and (c) P_i | H_i ~ i.i.d. (1 − H_i) · G0 + H_i · G1, where G0 is the null distribution and G1 is the alternative distribution. Storey (2001, 2003) showed that in this scenario

pFDR(t) = E[ V(t) / R(t) | R(t) > 0 ] = Pr(H_i = 0 | P_i ≤ t),

where Pr(H_i = 0 | P_i ≤ t) is the same for each i because of the i.i.d. assumptions. Under these modeling assumptions, it follows that

q-value(p_i) = min_{t ≥ p_i} Pr(H_i = 0 | P_i ≤ t),

which is a Bayesian analogue of the p-value, or rather a "Bayesian posterior Type I error rate." Related concepts were suggested as early as Morton (1955). In this scenario, it also follows that

pFDR(t) = ∫ Pr(H_i = 0 | P_i = p_i) dG(p_i | p_i ≤ t),

where G = π0 · G0 + (1 − π0) · G1. This connects the pFDR to the posterior error probability Pr(H_i = 0 | P_i = p_i), and this latter quantity is sometimes interpreted as a local false discovery rate (Efron et al. 2001, Storey 2001).
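Under this two-group model, the identity pFDR(t) = Pr(H_i = 0 | P_i ≤ t) can be checked numerically. In the sketch below (ours, not from the paper), G0 is Uniform(0,1) and G1 is assumed to be Beta(0.1, 1), whose CDF is G1(t) = t^0.1; Bayes' rule then gives pFDR(t) = π0 · t / [π0 · t + (1 − π0) · t^0.1].

```python
import numpy as np

rng = np.random.default_rng(0)
m, pi0, t = 200_000, 0.8, 0.01
# Two-group model: H_i ~ Bernoulli(1 - pi0); nulls Uniform(0,1), alternatives Beta(0.1, 1)
H = rng.random(m) < (1 - pi0)
p = np.where(H, rng.beta(0.1, 1.0, size=m), rng.random(m))
# Empirical Pr(H_i = 0 | P_i <= t): fraction of true nulls among the significant tests
sig = p <= t
empirical = np.mean(~H[sig])
# Closed form under the assumed G1: pFDR(t) = pi0*t / [pi0*t + (1 - pi0)*t**0.1]
theory = pi0 * t / (pi0 * t + (1 - pi0) * t**0.1)
print(f"empirical {empirical:.3f}  vs  theoretical {theory:.3f}")
```

With these settings the two values agree closely, illustrating that the pFDR at a threshold coincides with the posterior probability of a null given significance.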
Dependence

Most of the existing procedures for utilizing false discovery rates in practice involve assumptions about the p-values being independent or weakly dependent. An area of current research aims at performing multiple hypothesis tests when there is dependence among the hypothesis tests, specifically at the level of the data collected for each test or of the p-values calculated for each test. Recent proposals suggest modifying FDR controlling algorithms or extending their theoretical characterizations (Benjamini & Yekutieli 2001), modifying the null distribution utilized in calculating p-values (Devlin & Roeder 1999, Efron 2004), or accounting for dependence at the level of the originally observed data in the model fitting (Leek & Storey 2007, Leek & Storey 2008).

References

Benjamini, Y. & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing, Journal of the Royal Statistical Society, Series B 57: 289–300.
Benjamini, Y. & Liu, W. (1999). A step-down multiple hypothesis procedure that controls the false discovery rate under independence, Journal of Statistical Planning and Inference 82: 163–170.
Benjamini, Y. & Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency, Annals of Statistics 29: 1165–1188.
Devlin, B. & Roeder, K. (1999). Genomic control for association studies, Biometrics 55: 997–1004.
Efron, B. (2004). Large-scale simultaneous hypothesis testing: The choice of a null hypothesis, Journal of the American Statistical Association 99: 96–104.
Efron, B., Tibshirani, R., Storey, J. D. & Tusher, V. (2001). Empirical Bayes analysis of a microarray experiment, Journal of the American Statistical Association 96: 1151–1160.
Leek, J. T. & Storey, J. D. (2007). Capturing heterogeneity in gene expression studies by surrogate variable analysis, PLoS Genetics 3: e161.
Leek, J. T. & Storey, J. D. (2008). A general framework for multiple testing dependence, Proceedings of the National Academy of Sciences 105: 18718–18723.
Morton, N. E. (1955). Sequential tests for the detection of linkage, American Journal of Human Genetics 7: 277–318.
Shaffer, J. (1995). Multiple hypothesis testing, Annual Review of Psychology 46: 561–584.
Simes, R. J. (1986). An improved Bonferroni procedure for multiple tests of significance, Biometrika 73: 751–754.
Soric, B. (1989). Statistical discoveries and effect-size estimation, Journal of the American Statistical Association 84: 608–610.
Storey, J. D. (2001). The positive false discovery rate: A Bayesian interpretation and the q-value. Technical Report 2001-12, Department of Statistics, Stanford University.
Storey, J. D. (2002). A direct approach to false discovery rates, Journal of the Royal Statistical Society, Series B 64: 479–498.
Storey, J. D. (2003). The positive false discovery rate: A Bayesian interpretation and the q-value, Annals of Statistics 31: 2013–2035.
Storey, J. D. (2007). The optimal discovery procedure: A new approach to simultaneous significance testing, Journal of the Royal Statistical Society, Series B 69: 347–368.
Storey, J. D., Dai, J. Y. & Leek, J. T. (2007). The optimal discovery procedure for large-scale significance testing, with applications to comparative microarray experiments, Biostatistics 8: 414–432.
Storey, J. D., Taylor, J. E. & Siegmund, D. (2004). Strong control, conservative point estimation, and simultaneous conservative consistency of false discovery rates: A unified approach, Journal of the Royal Statistical Society, Series B 66: 187–205.
Storey, J. D. & Tibshirani, R. (2003). Statistical significance for genome-wide studies, Proceedings of the National Academy of Sciences 100: 9440–9445.
Zaykin, D. V., Young, S. S. & Westfall, P. H. (1998). Using the false discovery approach in the genetic dissection of complex traits: A response to Weller et al., Genetics 150: 1917–1918.
