Force distributions in three dimensional compressible granular packs
Inequality in Human Development (English version)
Inequality-adjusted Human Development Index (IHDI)

Reflecting inequality in each dimension of the HDI addresses an objective first stated in the Human Development Report 1990. The 2010 Report introduced the Inequality-adjusted HDI (IHDI), a measure of the level of human development of people in a society that accounts for inequality. Under perfect equality the IHDI equals the HDI, but it falls below the HDI as inequality rises. In this sense, the IHDI is the actual level of human development (taking inequality into account), while the HDI can be viewed as an index of the potential human development that could be achieved if there were no inequality. The IHDI accounts for inequality in the HDI dimensions by "discounting" each dimension's average value according to its level of inequality, measured by the Atkinson index. We apply this index to 134 countries.

Frequently Asked Questions (FAQs) about the Inequality-adjusted HDI (IHDI)

∙ What is the Inequality-adjusted HDI (IHDI)?
The Inequality-adjusted Human Development Index (IHDI) adjusts the Human Development Index (HDI) for inequality in the distribution of each dimension across the population. The IHDI accounts for inequalities in the HDI dimensions by "discounting" each dimension's average value according to its level of inequality. The IHDI equals the HDI when there is no inequality across people but falls below the HDI as inequality rises. In this sense, the IHDI is the actual level of human development (accounting for this inequality), while the HDI can be viewed as an index of "potential" human development (or the maximum level of HDI) that could be achieved if there were no inequality.
The "loss" in potential human development due to inequality is given by the difference between the HDI and the IHDI and can be expressed as a percentage.

∙ What is the purpose of the Inequality-adjusted HDI (IHDI)?
The HDI represents a national average of human development achievements in the three basic dimensions making up the HDI: health, education and income. Like all averages, it conceals disparities in human development across the population within the same country. Two countries with different distributions of achievements can have the same average HDI value. The IHDI takes into account not only a country's average achievements in health, education and income, but also how those achievements are distributed among its citizens, by "discounting" each dimension's average value according to its level of inequality.

∙ What are the results of the IHDI regarding HDI achievements globally and regionally?
The average world loss in HDI due to inequality is about 23%, ranging from 5% (Czech Republic) to 43.5% (Namibia). People in sub-Saharan Africa suffer the largest losses due to inequality in all three dimensions, followed by South Asia and the Arab States. Sub-Saharan Africa suffers the highest inequality in health, while South Asia and the Arab States have considerable losses due to unequal distribution in education. Latin America and the Caribbean suffers the largest loss of any region due to inequality in income (39.3%).

∙ Which countries and regions are the least equal, and which are the most equal?
Generally, countries with less human development also have more multidimensional inequality, and thus larger losses in human development due to inequality, while people in developed countries experience the least inequality in human development.
East Asia and the Pacific performs well on the IHDI, particularly in access to health care and education, and the former socialist countries in Europe and Central Asia have relatively egalitarian distributions across all three dimensions.

∙ Does the IHDI show whether inequality is getting better or worse?
Although this is the second year that the IHDI has been calculated, we did not recalculate the 2010 IHDI from a consistent series as we did for HDI trends. This is mostly because the inequality in education and income for many countries was estimated using the same sources in both years. Future versions of the IHDI will allow comparisons over time.

∙ How is the IHDI measured?
The approach is based on a distribution-sensitive class of composite indices proposed by Foster, Lopez-Calva and Szekely (2005), which draws on the Atkinson (1970) family of inequality measures. The IHDI is computed as the geometric mean of dimension indices adjusted for inequality. The inequality in each dimension is estimated by the Atkinson inequality measure, which is based on the assumption that a society has a certain level of aversion to inequality. (For details see Alkire and Foster (2010) and the Technical notes [418 KB] in HDR 2011.)

∙ What are the sources of data used for calculating the IHDI?
The IHDI relies on data on income/consumption and years of schooling from major publicly available databases, which contain national household surveys harmonized to common international standards: Eurostat's EU Survey on Income and Living Conditions, the Luxembourg Income Study, the World Bank's International Income Distribution Database, the United Nations Children's Fund's Multiple Indicator Cluster Survey, the US Agency for International Development's Demographic and Health Survey, the World Health Organization's World Health Survey, and the United Nations University's World Income Inequality Database. For inequality in the health dimension, we used the abridged life tables from the United Nations Population Division.
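The computation just described (Atkinson-discounted dimension indices combined by a geometric mean) can be sketched in a few lines of Python. This is an illustrative sketch with the inequality-aversion parameter fixed at 1; the function names and toy data are our own, not an official HDR implementation.

```python
import math

def atkinson(values):
    """Atkinson inequality index with aversion parameter 1:
    A = 1 - geometric_mean / arithmetic_mean (0 under perfect equality)."""
    n = len(values)
    geometric = math.exp(sum(math.log(v) for v in values) / n)
    arithmetic = sum(values) / n
    return 1.0 - geometric / arithmetic

def ihdi(dim_indices, dim_distributions):
    """Discount each dimension index by (1 - A) and combine the adjusted
    indices with a geometric mean."""
    adjusted = [index * (1.0 - atkinson(dist))
                for index, dist in zip(dim_indices, dim_distributions)]
    return math.prod(adjusted) ** (1.0 / len(adjusted))

# Toy example: three dimension indices plus per-person achievement data.
dims = [0.80, 0.70, 0.60]
equal = [5.0, 5.0, 5.0, 5.0]    # perfectly equal distribution
skewed = [1.0, 2.0, 8.0, 9.0]   # unequal distribution with the same mean
hdi = math.prod(dims) ** (1.0 / 3.0)
print(ihdi(dims, [equal, equal, equal]))   # equals the HDI under equality
print(ihdi(dims, [skewed, equal, equal]))  # falls below the HDI
```

Under perfect equality the Atkinson index is zero in every dimension, so the discount vanishes and the IHDI reproduces the HDI, exactly as the FAQ states.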
See: List of surveys used for the 2011 IHDI estimation [277 KB].

∙ What is the reference year for the IHDI?
The IHDI uses the HDI indicators that refer to 2011, measures of inequality based on household surveys from 2000 to 2009, and life tables that refer to the 2010-2015 period. The logic was to use the year to which the HDI indicators refer, especially because we report the inequality-adjusted indicators/indices in the same tables.

∙ How should the IHDI be interpreted?
While the HDI can be viewed as an index of "potential" human development that could be obtained if achievements were distributed equally, the IHDI is the actual level of human development (accounting for inequality in the distribution of achievements across people in a society). The IHDI equals the HDI when there is no inequality in the distribution of achievement across people in society, but falls below the HDI as inequality rises. The loss in potential human development due to inequality is the difference between the HDI and the IHDI, expressed as a percentage.

∙ What are the limitations of the IHDI?
The IHDI captures inequality in the distribution of the HDI dimensions. However, it is not association sensitive, meaning it does not account for overlapping inequalities, that is, whether the same people experience the multiple deprivations.
Also, the individual values of indicators such as income can be zero or even negative, so they have been adjusted to non-negative, non-zero values uniformly across countries.

∙ What is the policy relevance of the IHDI?
The IHDI directly links inequalities in the dimensions of the HDI to the resulting loss in human development; it can thus help inform policies aimed at reducing inequality and help evaluate the impact of various policy options for inequality reduction.

∙ Is the IHDI approach useful to UNDP at the country level?
The IHDI and its components can serve as a guide to help governments better understand inequalities across populations and their contribution to the overall loss in human development.

∙ Can the indicators be adapted at the country level?
The IHDI in its current form was inspired by a similar index produced by Mexico's national HDR. The IHDI can be adapted to compare inequalities in different subpopulations within a country, provided that the appropriate data are available. National teams can use proxy distributions for indicators, which may make more sense in their particular case.

∙ Will the IHDI become a permanent feature of UNDP's global HDR?
The IHDI is one of three experimental indices introduced in 2010, alongside the Gender Inequality Index and the Multidimensional Poverty Index. It will be revised and improved in light of feedback and data availability.

∙ How do you assess inequality in the distribution of life expectancy at birth?
This is the most difficult aspect, as life expectancy data are aggregate indicators. However, the inequality is estimated from abridged life table data (usually five-year age cohorts) and reflects the current inequality in mortality patterns: some people die under the age of one, and others die at 75 or later.
Undoubtedly, the quality of these estimates is no better than the data in the life table itself.

∙ What important properties does this methodology have?
One key property of the approach is that it is "subgroup consistent": if inequality declines in one subgroup and remains unchanged in the rest of the population, then overall inequality declines. A second important property is that the IHDI can be obtained by first computing inequality within each dimension and then across dimensions, which further implies that it can be computed by combining data from different sources.

∙ Is the Gini coefficient not a sufficient measure of inequality? What is the difference between the Gini and Atkinson measures of inequality?
The Gini index is commonly used as a measure of inequality of income, consumption or wealth. There was an attempt to apply the Gini index to measure multidimensional inequality (Hicks, 1998). However, the resulting index was not consistent across subgroups. Moreover, the Gini index does not emphasize the lower part of the distribution; it places the same weight throughout the distribution.

∙ Does the IHDI capture all inequalities in the HDI dimensions?
No. Due to data limitations, the IHDI does not capture overlapping inequalities, that is, whether the same person experiences one or multiple deprivations.

∙ For some countries the assessment of inequality in the income dimension is based on household consumption, and for others it is based on income distribution. Are these inequalities comparable?
By their very nature, income and consumption yield different levels of inequality, with income inequality being higher than inequality in consumption. Income seems to correspond more naturally to the notion of "command over resources." Consumption data are arguably more accurate in developing countries, less skewed by high values, and directly reflect the conversion of resources.
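The contrast between the two measures is easy to see numerically. The sketch below (our own toy distributions, not HDR data) computes both indices for a distribution whose inequality sits at the bottom and for one with the same mean whose inequality sits at the top; relative to the Gini, the Atkinson index reacts far more strongly to the bottom-heavy case.

```python
import math

def gini(values):
    """Gini coefficient via the mean absolute pairwise difference."""
    n = len(values)
    mean = sum(values) / n
    total = sum(abs(a - b) for a in values for b in values)
    return total / (2.0 * n * n * mean)

def atkinson(values):
    """Atkinson index (aversion parameter 1): 1 - geometric/arithmetic mean."""
    n = len(values)
    geometric = math.exp(sum(math.log(v) for v in values) / n)
    return 1.0 - geometric / (sum(values) / n)

low_tail = [1.0, 5.0, 5.0, 5.0, 5.0]   # one deprived person, mean 4.2
high_tail = [4.0, 4.0, 4.0, 4.0, 5.0]  # one rich person, mean 4.2
print(gini(low_tail), gini(high_tail))          # both rise with the spread
print(atkinson(low_tail), atkinson(high_tail))  # Atkinson penalizes the poor tail much more
```

The ratio of the two Atkinson values greatly exceeds the ratio of the two Gini values, illustrating why the Atkinson family is preferred when deprivation at the bottom of the distribution should weigh most.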
Income data also pose technical challenges because of the greater presence of zero and negative values. In an ideal world, one would be consistent in the use of either income or consumption data to estimate inequality. However, to obtain sufficient country coverage, it was necessary to use both. The final estimates are therefore influenced by whether the data are income or consumption.

∙ How is inequality in education calculated?
Inequality in the education dimension is approximated only by inequality in years of schooling of the adult population. For simplicity, the estimate of inequality in education is based only on the distribution of years of schooling across the population, drawn from nationally representative household surveys.

∙ Would inclusion of expected years of schooling for children change the results?
Expected years of schooling is an aggregate measure, and inequality in its distribution would be reflected in current school enrolment ratios. Certainly, there is a difference in inequality between the two distributions, with inequality in the distribution of expected years of schooling across the school-age population being lower. Thus, one can speculate that overall inequality in the HDI distribution would be reduced if expected years of schooling were used.

∙ Are the estimated inequalities in the distribution of years of schooling for the adult population comparable across countries, given the differences in school systems?
Years of schooling of adults is mostly derived from the highest level of schooling achieved. Using UNESCO's country information on the duration of schooling required for each level, the highest level of schooling is converted into years. While the duration of primary, secondary and most post-secondary education is more or less standardized, the very high levels (masters and doctoral studies) vary across countries.
However, the Atkinson measure of inequality, which is used to assess inequality in the HDI education component, is less sensitive to differences at the upper end of a distribution.
Fabrication of photonic crystals by prism holographic interference
LIU Guo-bin, SUN Xiao-hong, LI Da-hai, ZANG Ke-kuan

Abstract: Making photonic crystals by holographic interference, which is economical and quick, is an ideal method of fabricating large-area photonic crystals. The prism holographic interference method, whose setup is relatively stable and easy to adjust, has become an important photonic crystal fabrication method. Because two-dimensional photonic crystals can be made relatively easily, they are widely used in many devices, such as LEDs and semiconductor lasers. In this paper, some theoretical questions in making two-dimensional and three-dimensional photonic crystals with top-cut prisms are analyzed, in particular how the number, spatial distribution and polarization of the light beams affect the fabricated optical lattice structures. Various two-dimensional and three-dimensional optical lattice structures fabricated with hexagonal top-cut prisms are simulated by computer.

Journal: Laser & Infrared
Year (volume), issue: 2011, 41(12)
Pages: 5 (pp. 1373-1377)
Keywords: optical materials; photonic crystal; holographic interference; optical lattice
Authors' affiliation: Henan Key Laboratory of Laser and Opto-Electric Information Technology, Zhengzhou University, Zhengzhou 450052, China
Language of the text: Chinese
CLC classification: O438.1

1 Introduction
A photonic crystal is an optical material whose dielectric constant is distributed periodically. The concept was proposed independently in 1987 by Yablonovitch and John in their studies of inhibited spontaneous emission and of localized states, respectively.
Distribution network reconfiguration method based on an improved artificial bee colony algorithm
ZHAO Yong-Sheng 1, ZHAO Ai-Hua 2
1 (State Grid Anhui Electric Power Co. Ltd., State Grid, Hefei 230061, China)
2 (Electric Power Research Institute, State Grid Anhui Electric Power Co. Ltd., State Grid, Hefei 230601, China)
Corresponding author: ZHAO Yong-Sheng

Abstract: In order to improve the economy of distribution network operation and the reliability of power supply, this study selects the system average interruption frequency and the system average interruption duration to represent the power supply reliability of the distribution network and, at the same time, considers active power loss, establishing a multi-objective distribution network reconfiguration model that takes power supply reliability indices into account. Quantum theory and the Metropolis criterion are introduced into the artificial bee colony algorithm, the optimal solution of the multi-objective reconfiguration model is determined by a fuzzy satisfaction decision method, and an optimization method for the multi-objective reconfiguration model based on the improved artificial bee colony algorithm is proposed.

Key words: distribution network reconfiguration; reliability; artificial bee colony algorithm; quantum theory

Citation: ZHAO Yong-Sheng, ZHAO Ai-Hua. Reconstruction method of distribution network based on improved artificial bee colony algorithm. Computer Systems & Applications, 2020, 29(10): 211-216. /1003-3254/7636.html
A simulation system for a distribution network reconfiguration example is established, and the feasibility and superiority of the reconfiguration model and solution method are verified by comparison with other intelligent methods.

Computer Systems & Applications, ISSN 1003-3254, CODEN CSAOBN. 2020, 29(10): 211-216 [doi: 10.15888/ki.csa.007636]. © Institute of Software, Chinese Academy of Sciences. Tel: +86-10-62661041
Foundation item: National Key Research and Development Program of China (2016YFB0901100)
Received: 2020-03-06; revised: 2020-04-10; accepted: 2020-04-21; published online: 2020-09-30

Distribution network reconfiguration is an important research topic in distribution management systems and an effective means of optimizing distribution network operation and control. By changing switch states it forms a new network structure that satisfies the optimization objectives and distributes the network load more reasonably [1]. Traditional network reconfiguration mostly takes minimum active power loss as the objective, but as requirements on supply reliability grow, distribution network operation must consider more reliability factors. Distribution network reconfiguration is a complex multi-objective, multi-constraint nonlinear programming problem [2], and adding reliability indices further increases the complexity of solving the reconfiguration model. To improve the reliability and economy of distribution network operation, reconfiguration that accounts for supply reliability indices requires systematic, in-depth study.

Traditional reconfiguration methods include branch-and-bound and the simplex method. Although traditional mathematical optimization can obtain a global optimum independent of the initial network structure, it is only suitable for simply structured networks; for complex networks the computation time is too long and efficiency is low [3]. In recent years, with the wide application of artificial intelligence algorithms [4], researchers have applied intelligent methods to distribution network reconfiguration with good results. Reference [5] solves the reconfiguration model with a genetic algorithm, which has good global parallel processing ability and high search efficiency but weak local search ability and a tendency to fall into local optima. Reference [6] applies particle swarm optimization, which has excellent local search ability but poor global search ability and difficult convergence. These studies all take minimum active power loss as the single reconfiguration objective, without considering supply reliability. Reference [7] does establish a reconfiguration model that includes reliability indices, but it handles the multi-objective weights with the analytic hierarchy process, which is overly subjective.

This paper establishes a multi-objective distribution network reconfiguration model that accounts for supply reliability indices, proposes an improved artificial bee colony algorithm, applies it to solving the reconfiguration model, and verifies the feasibility and superiority of the method through reconfiguration calculations on a distribution network example.

1 Distribution network reconfiguration model with reliability indices

Distribution networks are usually planned as closed loops but operated radially; sectionalizing switches exist between nodes and the network contains a number of tie switches. Reconfiguration adjusts the switch states, subject to constraints, to reduce losses and improve supply reliability [8].

1.1 Objective function

Reconfiguration usually minimizes active power loss alone, without considering supply reliability; this paper takes minimum active loss and optimal reliability indices as joint objectives. The active power loss is

P_L = Σ_{i=1}^{L} k_i R_i (P_i^2 + Q_i^2) / U_i^2    (1)

where L is the total number of branches, k_i is the switch-state variable of branch i (1 closed, 0 open), and P_i, Q_i, R_i, U_i are the active power, reactive power, resistance and node voltage of branch i.

Reliability indices comprise system indices and load-point indices [9]. Load-point indices express the influence of the network structure on the reliability of a single load point, while system indices reflect the overall reliability of a given structure. System indices are therefore the better choice when evaluating distribution network supply reliability.
SAIFI (system average interruption frequency) and CAIFI (customer average interruption frequency), SAIDI (system average interruption duration) and CAIDI (customer average interruption duration), and ASAI (average service availability) and AENS (average energy not supplied) form three pairs of indices, each pair describing two sides of the same issue; the pairs are interrelated, and any two classes of indices can be derived from the other two. After weighing the system reliability indices, this paper selects SAIFI and SAIDI as the network reliability factors in the reconfiguration model:

SAIFI = (total customer interruptions) / (total customers) = Σ λ_i N_i / Σ N_i    (2)

SAIDI = (sum of customer interruption durations) / (total customers) = Σ T_i N_i / Σ N_i    (3)

where λ_i is the failure rate of load point i, N_i is its number of customers, and T_i is its average annual outage time.

The objective function of the reconfiguration model is

F = (f_1, f_2, f_3) = min(P_L, SAIFI, SAIDI)    (4)

1.2 Constraints

The reconfiguration model must also satisfy equality and inequality constraints. The equality constraints are the power-balance equations of the network power flow:

P_Gi - U_i Σ_{j∈i} U_j (G_ij cos δ_ij + B_ij sin δ_ij) - P_Li = 0    (5)
Q_Gi - U_i Σ_{j∈i} U_j (G_ij sin δ_ij - B_ij cos δ_ij) - Q_Li = 0    (6)

where P_Gi, P_Li, Q_Gi, Q_Li are the active and reactive powers of the source and load at node i; U_i, U_j are voltage magnitudes; and δ_ij, G_ij, B_ij are the phase angle, conductance and susceptance.

The inequality constraints comprise node voltage limits, branch current limits and network topology constraints:

U_i^min ≤ U_i ≤ U_i^max    (7)
I_l ≤ I_l^max    (8)
g_k ∈ G_k    (9)

where U_i^min and U_i^max are the minimum and maximum voltage magnitudes at node i, I_l^max is the maximum current-carrying capacity of branch l, g_k is a switch-state combination, and G_k is the set of all switch-position combinations that form a radial network.

2 Optimization algorithm for the reconfiguration model

2.1 Improved artificial bee colony algorithm

The artificial bee colony (ABC) algorithm is a swarm intelligence algorithm proposed by Karaboga in 2005 that mimics the foraging of honey bees [10]. The colony comprises employed bees, onlooker bees and scout bees. Employed bees search food sources and their neighborhoods and pass the information to onlooker bees, which then choose a food source to search; if after a prescribed number of searches an employed bee finds no better food source, it becomes a scout and searches randomly for a new one. The set of food-source positions is the solution set of the optimization problem, and the nectar amount of a food source is the fitness of a solution. The ABC algorithm is robust, general and fast-converging, but it still suffers from weak local exploitation and a tendency toward premature convergence [11].
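As a concrete reference for the objective quantities of Section 1.1, the following sketch evaluates eqs. (1)-(3) for a toy network. The data layout (tuples of branch and load-point records) is our own assumption, not the authors' implementation.

```python
def active_power_loss(branches):
    """Eq. (1): P_L = sum_i k_i * R_i * (P_i^2 + Q_i^2) / U_i^2.
    Each branch is (k, R, P, Q, U) with k the 0/1 switch state."""
    return sum(k * R * (P ** 2 + Q ** 2) / U ** 2 for k, R, P, Q, U in branches)

def saifi(load_points):
    """Eq. (2): total customer interruptions over total customers.
    Each load point is (failure_rate, outage_hours, customers)."""
    return (sum(lam * n for lam, _, n in load_points)
            / sum(n for _, _, n in load_points))

def saidi(load_points):
    """Eq. (3): total customer outage duration over total customers."""
    return (sum(t * n for _, t, n in load_points)
            / sum(n for _, _, n in load_points))

branches = [(1, 0.1, 3.0, 4.0, 5.0), (0, 0.2, 1.0, 1.0, 1.0)]
points = [(2.0, 4.0, 100), (1.0, 2.0, 100)]
print(active_power_loss(branches))   # 0.1: the open branch contributes nothing
print(saifi(points), saidi(points))  # 1.5 interruptions and 3.0 hours per customer-year
```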
Therefore, to further improve the optimization ability of the artificial bee colony algorithm, this paper improves it accordingly: quantum theory is introduced into the algorithm, and during iteration the Metropolis criterion is used to accept or reject the best solution.

The position of a bee is represented by a string of qubits; the quantum position of the k-th bee is

v_k = [ α_k1 α_k2 … α_kl ; β_k1 β_k2 … β_kl ]    (10)

where α and β satisfy α^2 + β^2 = 1.

The evolution of the colony is realized mainly by updating bee positions. The i-th qubit of bee k is updated as

v_ki^{t+1} = abs( U(θ_ki^{t+1}) v_ki^t )    (11)

U(θ_ki^{t+1}) = [ cos θ_ki^{t+1}  -sin θ_ki^{t+1} ; sin θ_ki^{t+1}  cos θ_ki^{t+1} ]    (12)

where U(θ_ki^{t+1}) and θ_ki^{t+1} are the quantum rotation gate and the quantum rotation angle. If θ_ki^{t+1} = 0, v_ki is updated through the NOT gate:

v_ki^{t+1} = N v_ki^t = [ 0 1 ; 1 0 ] v_ki^t    (13)

The quantum bee colony consists mainly of employed bees and onlooker bees. The food source corresponding to a bee position is a string of 0s and 1s; the position of food source k is x_k = (x_k1, x_k2, …, x_kl), with

x_kd^{t+1} = 1 if η_kd^{t+1} > (α_kd^{t+1})^2, else 0    (14)

where η_kd^{t+1} is a random number in [0, 1] and (α_kd^{t+1})^2 is the probability that the qubit is 0.

Let the local best position of a bee be p_k = (p_k1, p_k2, …, p_kl) and the current global best position of the colony be p_g = (p_g1, p_g2, …, p_gl). The employed-bee position update is

θ_id^{t+1} = e_1 (p_id^t - x_id^t) + e_2 (p_gd^t - x_id^t)    (15)

v_id^{t+1} = N v_id^t, if θ_id^{t+1} = 0 and γ_id^{t+1} < c_1; otherwise abs[ U(θ_id^{t+1}) v_id^t ]    (16)

where e_1 and e_2 are influence factors, γ_id^{t+1} is a random number in [0, 1], v_id^{t+1} is the d-th qubit of employed bee i in cycle t+1, and c_1 is a constant in [0, 1/l].

The onlooker-bee qubit evolution is

θ_jd^{t+1} = e_3 (p_id^t - x_jd^t) + e_4 (p_jd^t - x_jd^t) + e_5 (p_gd^t - x_jd^t)    (17)

v_jd^{t+1} = N v_jd^t, if θ_jd^{t+1} = 0 and γ_jd^{t+1} < c_2; otherwise abs[ U(θ_jd^{t+1}) v_jd^t ]    (18)

where p_id^t is the d-th qubit of the local best position of employed bee i, e_3, e_4 and e_5 are three influence factors, and v_jd^{t+1} is the d-th qubit of onlooker bee j in cycle t+1.

During the solution process, the Metropolis criterion is used to accept or reject the best solution obtained [12]. If the new best solution is better, it is accepted; otherwise acceptance is decided by

Q_{i+1} = 1, if fit(x_{i+1}) < fit(x_i); accept when min[1, 1 - Z] > K, if fit(x_{i+1}) ≥ fit(x_i)    (19)

Z = exp( (fit(x_{i+1}) - fit(x_i)) / fit(x_i) )    (20)

x_{i+1} = α × x_i    (21)

where K is a decision threshold in [0, 1], Q(x_{i+1}) is the acceptance probability in state x_{i+1}, and α is the temperature cooling coefficient.

2.2 Performance test of the improved bee colony algorithm

To verify the optimization performance of the improved artificial bee colony algorithm, the Shubert function is used for testing, with comparison against the genetic algorithm, particle swarm optimization and the original artificial bee colony algorithm. The Shubert function is

f(x, y) = Σ_{i=1}^{5} i cos[(i+1)x + i] · Σ_{i=1}^{5} i cos[(i+1)y + i] + 0.5[(x + 1.42513)^2 + (y + 0.80032)^2]    (22)

where -10 ≤ x, y ≤ 10. The function has 760 local minima and only one global optimum, (-1.42513, -0.80032), with function value -186.73090.
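Two pieces of the method translate directly into code: the Shubert test function of eq. (22) and the Metropolis-style acceptance rule of eqs. (19)-(20). The sketch below is an illustration written exactly as the equations are printed (a minimization problem with positive fitness values is implicitly assumed in the acceptance test); it is not the authors' implementation.

```python
import math

def shubert(x, y):
    """Eq. (22): Shubert function plus a quadratic penalty that singles out
    (-1.42513, -0.80032) as the unique global minimum, f ~ -186.7309."""
    s1 = sum(i * math.cos((i + 1) * x + i) for i in range(1, 6))
    s2 = sum(i * math.cos((i + 1) * y + i) for i in range(1, 6))
    return s1 * s2 + 0.5 * ((x + 1.42513) ** 2 + (y + 0.80032) ** 2)

def metropolis_accept(fit_new, fit_old, K):
    """Eqs. (19)-(20): always accept an improvement; otherwise accept only
    when min(1, 1 - Z) exceeds the threshold K."""
    if fit_new < fit_old:
        return True
    Z = math.exp((fit_new - fit_old) / fit_old)
    return min(1.0, 1.0 - Z) > K

print(shubert(-1.42513, -0.80032))  # approximately -186.7309
```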
All four optimization algorithms use a population of 100 individuals and a maximum of 10 000 iterations, and 20 optimization trials are run with each; the results are shown in Table 1.

Table 1  Analysis of function optimization results
Evaluation criterion | Genetic algorithm | Particle swarm | Original ABC | Improved ABC (this paper)
Mean result | -181.01 | -182.67 | -182.85 | -186.52
Best result | -184.66 | -185.77 | -185.83 | -186.69
Worst result | -178.39 | -180.05 | -180.16 | -186.02
Average convergence time (s) | 46.33 | 51.29 | 50.18 | 36.03

Table 1 shows that the improved artificial bee colony algorithm has the best optimization performance: its mean result and its average convergence time are the best among the four algorithms. This paper encodes the food sources with the probability amplitudes of qubits and updates food sources during the search by rotating the phase of the quantum rotation gate, which extends the traversal of the solution space; the quantum NOT gate implements the mutation operation of the search, which increases population diversity, effectively expands the number of global-optimum candidates and raises the probability of finding the global optimum. In addition, the Metropolis acceptance rule allows the search to escape the current local optimum probabilistically, making the algorithm more global. The iteration-termination threshold is set according to accuracy and time requirements: the higher the required accuracy, the smaller the threshold; the tighter the time requirement, the larger the threshold. To examine the influence of population size, 20 optimization runs were performed for each of several population sizes; the results are shown in Figure 1.

[Figure 1  Optimization results under different population sizes: (a) mean optimized value; (b) average convergence time (s)]

Figure 1 shows that as the population size increases the optimization results improve and then level off, while the optimization time increases markedly; the population size of the improved algorithm should therefore be chosen by weighing the number of decision variables, the required accuracy and the time budget.

2.3 Optimization of the distribution network reconfiguration model

The reconfiguration model of this paper is a multi-objective optimization problem. Such a problem has no single optimal solution but a Pareto-optimal set of non-dominated solutions [13]. The final solution is determined by a fuzzy satisfaction decision method, with the fuzzy membership function defined as

μ_j = 1, if f_j ≤ f_j^min; (f_j^max - f_j) / (f_j^max - f_j^min), if f_j^min ≤ f_j ≤ f_j^max; 0, if f_j ≥ f_j^max    (23)

where f_j, f_j^min and f_j^max are the j-th objective value and its minimum and maximum.

The comprehensive membership U of each solution is then computed as

U = Σ_{j=1}^{J} μ_j    (24)

and the solution with the largest U is chosen as the compromise solution of the reconfiguration model.

Suppose the Pareto set of the multi-objective problem is [x_i, y_i, z_i], i = 1, 2, …, N, where N is the number of solutions; [x_1, x_2, …, x_N] is the solution set for objective f_1 with minimum x_min, [y_1, y_2, …, y_N] that for f_2 with minimum y_min, and [z_1, z_2, …, z_N] that for f_3 with minimum z_min. The ideal optimum of the multi-objective problem is [x_min, y_min, z_min].
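Eqs. (23)-(24) translate directly into code. The sketch below (toy Pareto set; the data layout is our own assumption) computes the membership of each objective and picks the point with the largest summed membership U:

```python
def membership(f, f_min, f_max):
    """Eq. (23): fuzzy membership of one objective value."""
    if f <= f_min:
        return 1.0
    if f >= f_max:
        return 0.0
    return (f_max - f) / (f_max - f_min)

def compromise_solution(pareto_points):
    """Eq. (24): return the point maximizing U, the sum of memberships."""
    columns = list(zip(*pareto_points))
    lows = [min(c) for c in columns]
    highs = [max(c) for c in columns]
    def U(point):
        return sum(membership(f, lo, hi)
                   for f, lo, hi in zip(point, lows, highs))
    return max(pareto_points, key=U)

# Toy 2-objective Pareto set: the balanced point wins over the two extremes.
pareto = [(1.0, 10.0), (10.0, 1.0), (2.0, 2.0)]
print(compromise_solution(pareto))  # (2.0, 2.0)
```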
The comprehensive membership U measures how close a solution in the set is to the ideal optimum: the larger U, the closer, with U ≤ 3; U = 3 means [x_i, y_i, z_i] is the ideal optimum.

The improved artificial bee colony algorithm is used to solve the multi-objective reconfiguration problem; the solution flow is shown in Figure 2.

[Figure 2  Basic solution flow chart: initialize the algorithm parameters (population size, maximum iterations, number of control variables, etc.); initialize the bee colony; compute the food-source densities of the employed and scout bees; update the quantum positions of the employed and scout bees; check the constraints; form the new network topology; compute the fitness of each objective; accept or reject the best solution by the Metropolis criterion; on reaching the termination condition, select the best compromise solution by the fuzzy satisfaction decision method.]

3 Distribution network reconfiguration example

3.1 System example

The IEEE 33-node distribution system is taken as the reconfiguration example; its structure is shown in Figure 3. It contains 33 nodes and 37 branches, with 32 sectionalizing switches and 5 tie switches. The branch and node load parameters are given in reference [14]. The system base voltage is 12.66 kV, the base power is 10 MVA, and the total load is 3715 kW + j2300 kvar.

[Figure 3  IEEE 33-node system diagram, showing the sectionalizing switches and tie switches]

3.2 Comparison of reconfiguration results

Based on the multi-objective reconfiguration model with reliability indices established in this paper, the improved bee colony algorithm is applied to the model and compared with the genetic, particle swarm and original bee colony algorithms. Reliability is computed by failure mode and effects analysis (FMEA), which is conceptually clear and accurate [15] and widely used in distribution network reliability calculation. The Pareto fronts obtained by the four methods are shown in Figure 4.

[Figure 4  Pareto-optimal fronts obtained by the different methods; axes: network active loss (kW), SAIFI (interruptions per customer-year), SAIDI (h per customer-year)]

Figure 4 shows that the genetic algorithm, the particle swarm method, the original bee colony method and the improved bee colony method obtain 7, 7, 9 and 15 Pareto-optimal solutions, respectively, indicating that the improved method produces a more complete Pareto set for the multi-objective reconfiguration model and outperforms the other three algorithms in finding optimal solutions.

The compromise solutions of the four methods, determined by the fuzzy satisfaction decision method of Section 2.3, are shown in Table 2, and the convergence curves of the optimization processes are shown in Figure 5.

According to Table 2 and Figure 5, the improved artificial bee colony algorithm has better optimization effect and convergence speed in solving the multi-objective reconfiguration model with reliability indices, and the Pareto solution it selects is the best: the network loss is the smallest (139.63 kW), and SAIFI (5.89 interruptions per customer-year) and SAIDI (4.35 h per customer-year) are also the smallest; the method converges well, with a short convergence time (5.69 s). If only active loss is considered in reconfiguration, the loss obtained by the improved algorithm is smaller still (137.86 kW), but the reliability of the reconfigured network drops sharply (SAIFI 8.03 interruptions per customer-year, SAIDI 5.12 h per customer-year), seriously affecting safe and reliable operation.
Therefore, the multi-objective reconfiguration model with supply reliability indices balances economy and reliability better and suits the actual operating conditions of distribution networks.

Table 2  Analysis of reconfiguration optimization results
Evaluation criterion | Before optimization | Genetic algorithm | Particle swarm | Original ABC | Improved ABC (this paper)
SAIFI (interruptions/customer-year) | 19.70 | 7.56 | 6.21 | 6.05 | 5.89
SAIDI (h/customer-year) | 6.58 | 6.19 | 5.03 | 4.89 | 4.35
Active loss (kW) | 202.66 | 146.89 | 142.29 | 142.35 | 139.63
Optimization time (s) | 0 | 5.93 | 6.69 | 6.83 | 5.69

[Figure 5  Convergence curves of the optimization processes: network loss (kW) versus optimization time (s) for the four algorithms]

4 Conclusions and outlook

This paper establishes a multi-objective distribution network reconfiguration model that accounts for supply reliability indices, handles the multiple objectives with a fuzzy satisfaction decision method, and proposes an optimization method based on an improved artificial bee colony algorithm. Comparative reconfiguration analysis on an IEEE 33-node example system shows that the improved algorithm performs better on the multi-objective reconfiguration problem with reliability indices: the Pareto set it obtains is more complete, and the selected Pareto solution is the best, with the smallest network loss, system average interruption frequency and system average interruption duration; the method also converges well and quickly. The proposed model balances economy and reliability better and is more applicable to distribution network operation.

References
[1] 杨凯峰, 王击. 配电网络重构算法综述. 南方电网技术, 2013, 7(4): 103-107. [doi: 10.3969/j.issn.1674-0629.2013.104.022]
[2] Shu DS, Huang ZX, Li JY, et al. Application of multi-agent particle swarm algorithm in distribution network reconfiguration. Chinese Journal of Electronics, 2016, 25(6): 1179-1185. [doi: 10.1049/cje.2016.10.015]
[3] 董思兵. 基于免疫二进制粒子群算法的配电网重构 [硕士学位论文]. 济南: 山东大学, 2008.
[4] Zhang FY, Dong YQ, Zhang KQ. A novel combined model based on an artificial intelligence algorithm: a case study on wind speed forecasting in Penglai, China. Sustainability, 2016, 8(6): 555. [doi: 10.3390/su8060555]
[5] 胡雯, 孙云莲, 张巍. 基于改进的自适应遗传算法的智能配电网重构研究. 电力系统保护与控制, 2013, 41(23): 85-90. [doi: 10.7667/j.issn.1674-3415.2013.23.014]
[6] 马草原, 孙展展, 葛森, 等. 改进二进制粒子群算法在配电网重构中的应用. 电测与仪表, 2016, 53(7): 84-88, 94. [doi: 10.3969/j.issn.1001-1390.2016.07.015]
[7] 何禹清, 刘定国, 曾超, 等. 计及可靠性的配电网重构模型及其分阶段算法. 电力系统自动化, 2011, 35(17): 56-60.
[8] 文娟, 谭阳红, 雷可君. 基于量子粒子群算法多目标优化的配电网动态重构. 电力系统保护与控制, 2015, 43(16): 73-78. [doi: 10.7667/j.issn.1674-3415.2015.16.011]
[9] 董雷, 何林, 蒲天骄. 中性点接地方式对配电网可靠性的影响. 电力系统保护与控制, 2013, 41(1): 96-101. [doi: 10.7667/j.issn.1674-3415.2013.01.015]
[10] Bansal JC, Sharma H, Arya KV, et al. Memetic search in artificial bee colony algorithm. Soft Computing, 2013, 17(10): 1911-1928. [doi: 10.1007/s00500-013-1032-8]
[11] 刘三阳, 张平, 朱明敏. 基于局部搜索的人工蜂群算法.
控制与决策, 2014, 29(1): 123-128.
[12] Norouzi N, Sadegh-Amalnick M, Tavakkoli-Moghaddam R. A time-dependent vehicle routing problem solved by improved simulated annealing. Proceedings of the Romanian Academy, 2015, 16(3): 458-465.
[13] 戚建文, 任建文, 翟俊义. 基于改进多目标引力搜索算法的电力系统环境经济调度. 电测与仪表, 2016, 53(18): 80-86. [doi: 10.3969/j.issn.1001-1390.2016.18.015]
[14] Su HS, Yang J. Capacitors optimization placement in distribution systems based on improved seeker optimization algorithm. Sensors & Transducers, 2013, 155(8): 180-187.
[15] 王乐. 厦门配电网可靠性提升措施的研究与应用 [硕士学位论文]. 广州: 华南理工大学, 2014.
Species Transport and Reacting-Flow Combustion Modeling in FLUENT
Definition of the mixture fraction
The mixture fraction, f, written in terms of elemental mass fractions:

f = (Z_k - Z_k,O) / (Z_k,F - Z_k,O)

where Z_k is the mass fraction of element k; the subscripts F and O denote the values at the fuel and oxidizer inlet streams, respectively.
For a simple fuel/oxidizer system, the mixture fraction represents the fuel mass fraction in a computational cell.
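A minimal sketch of the definition above (variable names are our own):

```python
def mixture_fraction(Zk, Zk_O, Zk_F):
    """f = (Z_k - Z_k,O) / (Z_k,F - Z_k,O): 0 in the pure oxidizer
    stream, 1 in the pure fuel stream, linear in between."""
    return (Zk - Zk_O) / (Zk_F - Zk_O)

# Elemental carbon mass fraction 0.05 in the cell, 0 in the oxidizer,
# 0.2 in the fuel stream:
print(mixture_fraction(0.05, 0.0, 0.2))  # ~0.25
```

Because f is built from elemental (not species) mass fractions, it is conserved under chemical reaction, which is what makes it a useful tracking variable.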
Equilibrium-chemistry PDF model; laminar flamelet model
Progress variable model
Zimont model
Finite-rate model
The chemical reaction process is described with global mechanism reactions, and transport equations are solved for the chemical species.
Solve for the local time-averaged mass fraction of each species, m_j.
The source term of species j (production or consumption) is the net reaction rate over all k reactions of the mechanism:

R_j = Σ_k R_jk

R_jk, the rate of production or consumption of species j in the k-th reaction, is taken as the smaller of the Arrhenius kinetic rate and the eddy-break-up (EBU) mixing rate.
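The rate-limiting idea can be sketched as follows. This is a simplified single-reaction illustration; the constants, signatures and sample values are our own assumptions, not FLUENT's API.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, E, T, conc):
    """Kinetic rate: A * exp(-E / (R T)), scaled by a species concentration."""
    return A * math.exp(-E / (R_GAS * T)) * conc

def ebu_rate(C, rho, eps_over_k, mass_fraction):
    """Eddy-break-up (mixing-limited) rate, proportional to eps/k and to the
    limiting species mass fraction."""
    return C * rho * eps_over_k * mass_fraction

def net_rate(kinetic, mixing):
    """The effective reaction rate is the smaller of the two."""
    return min(kinetic, mixing)

# Hot, slowly mixing cell: kinetics are fast, so mixing limits the rate.
kin = arrhenius_rate(A=1e8, E=1.0e5, T=1800.0, conc=0.5)
mix = ebu_rate(C=4.0, rho=1.0, eps_over_k=50.0, mass_fraction=0.1)
print(net_rate(kin, mix))
```

Taking the minimum means that in hot regions the turbulent mixing time controls the burn rate, while in cold regions the Arrhenius kinetics dominate.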
p(f) can be used to compute time-averaged values of variables that
depend on the mixture fraction, f:
φ̄_i = ∫_0^1 p(f) φ_i(f) df
Species mole fractions
Temperature, density
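Numerically, the PDF-weighted average is just a quadrature over f in [0, 1]. The sketch below assumes a beta-shaped p(f), a common choice, and uses a simple midpoint rule; it is an illustration, not FLUENT's internal algorithm.

```python
import math

def beta_pdf(f, a, b):
    """Beta density on (0, 1), an assumed shape for p(f)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * f ** (a - 1.0) * (1.0 - f) ** (b - 1.0)

def pdf_average(phi, a, b, n=2000):
    """Midpoint rule for  phi_bar = integral_0^1 p(f) * phi(f) df."""
    h = 1.0 / n
    return sum(beta_pdf((i + 0.5) * h, a, b) * phi((i + 0.5) * h)
               for i in range(n)) * h

# The mean of f under a beta(2, 2) PDF is 0.5:
print(round(pdf_average(lambda f: f, 2.0, 2.0), 3))  # 0.5
```

In practice phi(f) would be a table of equilibrium mole fractions, temperature or density as a function of mixture fraction, averaged the same way.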
Compute the continuous-phase flow field; compute the particle trajectories; update the continuous-phase source terms.
Particle dispersion: the stochastic trajectory model
Turbulent particle dispersion is simulated with a Monte Carlo method (discrete random walks).
The particle trajectory calculation accounts for the mean gas velocity as well as the random turbulent velocity fluctuations.
Each trajectory represents a group of particles with the same properties, e.g. the same initial diameter, density, etc.
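One step of the discrete random walk can be sketched as sampling an isotropic Gaussian fluctuation with variance 2k/3 on top of the mean velocity. This is a simplified single-component illustration under the isotropy assumption, not FLUENT's implementation.

```python
import math
import random

def eddy_fluctuation(k, rng):
    """Sample a velocity fluctuation u' ~ N(0, 2k/3) for turbulent kinetic
    energy k (isotropic turbulence assumption)."""
    sigma = math.sqrt(2.0 * k / 3.0)
    return rng.gauss(0.0, sigma)

def seen_velocity(u_mean, k, rng):
    """Instantaneous gas velocity seen by the particle during one eddy
    interaction: mean velocity plus the sampled fluctuation."""
    return u_mean + eddy_fluctuation(k, rng)

rng = random.Random(42)
samples = [seen_velocity(10.0, 1.5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # close to the mean velocity, 10.0
```

In a full tracker the fluctuation is held fixed for one eddy lifetime, then resampled, so that each stochastic trajectory sees a different realization of the turbulence.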
Software documentation: Bayesian analysis of a vector autoregressive model with stochastic volatility and time-varying parameters
Package 'bvarsv'  October 12, 2022

Type: Package
Title: Bayesian Analysis of a Vector Autoregressive Model with Stochastic Volatility and Time-Varying Parameters
Version: 1.1
Date: 2015-10-29
Author: Fabian Krueger
Maintainer: Fabian Krueger <**************************>
Description: R/C++ implementation of the model proposed by Primiceri ("Time Varying Structural Vector Autoregressions and Monetary Policy", Review of Economic Studies, 2005), with functionality for computing posterior predictive distributions and impulse responses.
License: GPL (>= 2)
Imports: Rcpp (>= 0.11.0)
LinkingTo: Rcpp, RcppArmadillo
URL: https:///site/fk83research/code
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2015-11-25 14:40:22

R topics documented:
bvarsv-package .......... 2
bvar.sv .......... 3
Example data sets .......... 5
helpers .......... 6
impulse.responses .......... 8
usmacro .......... 9

bvarsv-package  Bayesian Analysis of a Vector Autoregressive Model with Stochastic Volatility and Time-Varying Parameters

Description

R/C++ implementation of the Primiceri (2005) model, which allows for both stochastic volatility and time-varying regression parameters. The package contains functions for computing posterior predictive distributions and impulse responses from the model, based on an input data set.

Details

Package: bvarsv
Type: Package
Version: 1.0
Date: 2014-08-14
License: GPL (>= 2)
URL: https:///site/fk83research/code

Author(s)

Fabian Krueger <**************************>, based on Matlab code by Dimitris Korobilis (see Koop and Korobilis, 2010).

References

The code incorporates the recent corrigendum by Del Negro and Primiceri (2015), which points to an error in the original MCMC algorithm of Primiceri (2005).

Del Negro, M. and Primiceri, G. E. (2015). 'Time Varying Structural Vector Autoregressions and Monetary Policy: A Corrigendum', Review of Economic Studies 82, 1342-1345.

Koop, G. and D. Korobilis (2010): 'Bayesian Multivariate Time Series Methods for Empirical Macroeconomics', Foundations and Trends in Econometrics 3, 267-358. Accompanying Matlab code available at
https:///site/dimitriskorobilis/matlab.

Primiceri, G. E. (2005): 'Time Varying Structural Vector Autoregressions and Monetary Policy', Review of Economic Studies 72, 821-852.

Examples

## Not run:
# Load US macro data
data(usmacro)

# Estimate trivariate model using Primiceri's prior choices (default settings)
set.seed(5813)
bv <- bvar.sv(usmacro)

## End(Not run)

bvar.sv  Bayesian Analysis of a Vector Autoregressive Model with Stochastic Volatility and Time-Varying Parameters

Description

Bayesian estimation of the flexible VAR model by Primiceri (2005), which allows for both stochastic volatility and time drift in the model parameters.

Usage

bvar.sv(Y, p = 1, tau = 40, nf = 10, pdrift = TRUE, nrep = 50000,
        nburn = 5000, thinfac = 10, itprint = 10000, save.parameters = TRUE,
        k_B = 4, k_A = 4, k_sig = 1, k_Q = 0.01, k_S = 0.1, k_W = 0.01,
        pQ = NULL, pW = NULL, pS = NULL)

Arguments

Y          Matrix of data, where rows represent time and columns are different variables. Y must have at least two columns.
p          Lag length, greater or equal than 1 (the default).
tau        Length of the training sample used for determining prior parameters via least squares (LS). That is, data in Y[1:tau, ] are used for estimating prior parameters via LS; formal Bayesian analysis is then performed for data in Y[(tau+1):nrow(Y), ].
nf         Number of future time periods for which forecasts are computed (integer, 1 or greater, defaults to 10).
pdrift     Dummy, indicates whether or not to account for parameter drift when simulating forecasts (defaults to TRUE).
nrep       Number of MCMC draws excluding burn-in (defaults to 50000).
nburn      Number of MCMC draws used to initialize the sampler (defaults to 5000). These draws do not enter the computation of posterior moments, forecasts etc.
thinfac    Thinning factor for MCMC output. Defaults to 10, which means that the forecast sequences (fc.mdraws, fc.vdraws, fc.ydraws, see below) contain only every tenth draw of the original sequence. Set thinfac to one to obtain the full MCMC sequence.
itprint    Print every itprint-th iteration. Defaults to 10000. Set to a very large value to omit printing altogether.
save.parameters  If set to TRUE, parameter
draws are saved in lists (these can be very large). Defaults to TRUE.
k_B, k_A, k_sig, k_Q, k_W, k_S, pQ, pW, pS  Quantities which enter the prior distributions, see the links below for details. Defaults to the exact values used in the original article by Primiceri.

Value

Beta.postmean  Posterior means of coefficients. This is an array of dimension [M, Mp+1, T], where T denotes the number of time periods (= number of rows of Y), and M denotes the number of system variables (= number of columns of Y). The submatrix [, , t] represents the coefficient matrix at time t. The intercept vector is stacked in the first column; the p coefficient matrices of dimension [M, M] are placed next to it.
H.postmean  Posterior means of error term covariance matrices. This is an array of dimension [M, M, T]. The submatrix [, , t] represents the covariance matrix at time t.
Q.postmean, S.postmean, W.postmean  Posterior means of various covariance matrices.
fc.mdraws  Draws for the forecast mean vector at various horizons (three-dimensional array, where the first dimension corresponds to system variables, the second to forecast horizons, and the third to MCMC draws). Note: The third dimension will be equal to nrep/thinfac, apart from possible rounding issues.
fc.vdraws  Draws for the forecast covariance matrix. Design similar to fc.mdraws, except that the first array dimension contains the lower-diagonal elements of the forecast covariance matrix.
fc.ydraws  Simulated future observations. Design analogous to fc.mdraws.
Beta.draws, H.draws  Matrices of parameter draws, can be used for computing impulse responses later on (see impulse.responses), and accessed via the helper function parameter.draws. These outputs are generated only if save.parameters has been set to TRUE.

Author(s)

Fabian Krueger, based on Matlab code by Dimitris Korobilis (see Koop and Korobilis, 2010). Incorporates the corrigendum by Del Negro and Primiceri (2015), which points to an error in the original MCMC algorithm of Primiceri (2005).

References

Del Negro, M. and Primiceri, G. E. (2015). 'Time Varying Structural
Vector Autoregressions and Monetary Policy: A Corrigendum', Review of Economic Studies 82, 1342-1345.

Koop, G. and D. Korobilis (2010): 'Bayesian Multivariate Time Series Methods for Empirical Macroeconomics', Foundations and Trends in Econometrics 3, 267-358. Accompanying Matlab code available at https://sites.google.com/site/dimitriskorobilis/matlab.

Primiceri, G.E. (2005): 'Time Varying Structural Vector Autoregressions and Monetary Policy', Review of Economic Studies 72, 821-852.

See Also

The helper functions predictive.density and predictive.draws provide simple access to the forecast distribution produced by bvar.sv.tvp. Impulse responses can be computed using impulse.responses. For detailed examples and explanations, see the accompanying pdf file hosted at https://sites.google.com/site/fk83research/code.

Examples

## Not run:

# Load US macro data
data(usmacro)

# Estimate trivariate BVAR using default settings
set.seed(5813)
bv <- bvar.sv.tvp(usmacro)

## End(Not run)

Example data sets        US Macroeconomic Time Series

Description

Inflation rate, unemployment rate and treasury bill interest rate for the US, as used by Primiceri (2005). Whereas usmacro covers the time period studied by Primiceri (1953:Q1 to 2001:Q3), usmacro.update updates the data until 2015:Q2.

Format

Multiple time series (mts) object, series names: 'inf', 'une', and 'tbi'.

Source

Inflation data provided by Federal Reserve Bank of Philadelphia (2015): 'Real-Time Data Research Center', https://www.philadelphiafed.org/research-and-data/real-time-center/real-time-data/data-files/p. Accessed: 2015-10-29. The inflation rate is the year-over-year log growth rate of the GDP price index. We use the 2001:Q4 vintage of the price index for usmacro, and the 2015:Q3 vintage for usmacro.update.

Unemployment and Treasury Bill: Federal Reserve Bank of St. Louis (2015): 'Federal Reserve Economic Data', https://research.stlouisfed.org/fred2/. Accessed: 2015-10-29. The two series have the identifiers 'UNRATE' and 'TB3MS'. For each quarter, we compute simple averages over three monthly observations.

Disclaimer: Please note that the providers of the original data cannot take responsibility
for the data posted here, nor can they answer any questions about them. Users should consult their respective websites for the official and most recent version of the data.

References

Primiceri, G.E. (2005): 'Time Varying Structural Vector Autoregressions and Monetary Policy', Review of Economic Studies 72, 821-852.

Examples

## Not run:

# Load and plot data
data(usmacro)
plot(usmacro)

## End(Not run)

helpers        Helper Functions to Access BVAR Forecast Distributions and Parameter Draws

Description

Functions to extract a univariate posterior predictive distribution from a model fit generated by bvar.sv.tvp.

Usage

predictive.density(fit, v = 1, h = 1, cdf = FALSE)
predictive.draws(fit, v = 1, h = 1)
parameter.draws(fit, type = "lag1", row = 1, col = 1)

Arguments

fit  List, model fit generated by bvar.sv.tvp.

v  Index for variable of interest. Must be in line with the specification of fit.

h  Index for forecast horizon of interest. Must be in line with the specification of fit.

cdf  Set to TRUE to return the cumulative distribution function, set to FALSE to return the probability density function.

type  Character string, used to specify output for function parameter.draws. Setting to "intercept" returns parameter draws for the intercept vector. Setting to one of "lag1", ..., "lagX" (where X is the lag order used in fit) returns parameter draws from the autoregressive coefficient matrices. Setting to "vcv" returns draws for the elements of the residual variance-covariance matrix.

row, col  Row and column index for the parameter for which parameter.draws should return posterior draws. That is, the function returns the [row, col] element of the matrix specified by type. Note that col is irrelevant if type = "intercept" has been chosen.

Value

predictive.density returns a function f(z), which yields the value(s) of the predictive density at point(s) z. This function exploits conditional normality of the model, given the posterior draws of the parameters.

predictive.draws returns a list containing vectors of MCMC draws, more specifically:

y  Draws from the predictand itself.

m  Mean of the normal distribution for the
predictand in each draw.

v  Variance of the normal distribution for the predictand in each draw.

Both outputs should be closely in line with each other (apart from a small amount of sampling noise), see the link below for details.

parameter.draws returns posterior draws for a single (scalar) parameter of the model fitted by bvar.sv.tvp. The output is a matrix, with rows representing MCMC draws, and columns representing time.

Author(s)

Fabian Krueger

See Also

For examples and background, see the accompanying pdf file hosted at https://sites.google.com/site/fk83research/code.

Examples

## Not run:

# Load US macro data
data(usmacro)

# Estimate trivariate BVAR using default settings
set.seed(5813)
bv <- bvar.sv.tvp(usmacro)

# Construct predictive density function for the second variable (inflation), one period ahead
f <- predictive.density(bv, v = 2, h = 1)

# Plot the density for a grid of values
grid <- seq(-2, 5, by = 0.05)
plot(x = grid, y = f(grid), type = "l")

# Cross-check: Extract MCMC sample for the same variable and horizon
smp <- predictive.draws(bv, v = 2, h = 1)

# Add density estimate to plot
lines(density(smp$y), col = "green")

## End(Not run)

impulse.responses        Compute Impulse Response Function from a Fitted Model

Description

Computes impulse response functions (IRFs) from a model fit produced by bvar.sv.tvp. The IRF describes how a variable responds to a shock in another variable, in the periods following the shock. To enable simple handling, this function computes IRFs for only one pair of variables that must be specified in advance (see impulse.variable and response.variable below).

Usage

impulse.responses(fit, impulse.variable = 1, response.variable = 2, t = NULL, nhor = 20, scenario = 2, draw.plot = TRUE)

Arguments

fit  Model fit produced by bvar.sv.tvp, with the option save.parameters set to TRUE.

impulse.variable  Variable which experiences the shock.

response.variable  Variable which (possibly) responds to the shock.

t  Time point from which parameter matrices are to be taken. Defaults to most recent time point.

nhor  Maximal time between impulse and response (defaults to 20).

scenario  If 1, there is no
orthogonalization, and the shock size corresponds to one unit of the impulse variable. If scenario is either 2 (the default) or 3, the error term variance-covariance matrix is orthogonalized via Cholesky decomposition. For scenario = 2, the Cholesky decomposition of the error term VCV matrix at time point t is used. scenario = 3 is the variant used in Del Negro and Primiceri (2015). Here, the diagonal elements are set to their averages over time, whereas the off-diagonal elements are specific to time t. See the notes below for further information.

draw.plot  If TRUE (the default): Produces a plot showing the 5, 25, 50, 75 and 95 percent quantiles of the simulated impulse responses.

Value

List of two elements:

contemporaneous  Contemporaneous impulse responses (vector of simulation draws).

irf  Matrix of simulated impulse responses, where rows represent simulation draws, and columns represent the number of time periods after the shock (1 in first column, nhor in last column).

Note

If scenario is set to either 2 or 3, the Cholesky transform (transpose of chol) is used to produce the orthogonal impulse responses. See Hamilton (1994), Section 11.4, and particularly Equation [11.4.22]. As discussed by Hamilton, the ordering of the system variables matters, and should be considered carefully. The magnitude of the shock (impulse) corresponds to one standard deviation of the error term.

If scenario = 1, the function simply outputs the matrices of the model's moving average representation, see Equation [11.4.1] in Hamilton (1994). The scenario considered here may be unrealistic, in that an isolated shock may be unlikely. The magnitude of the shock (impulse) corresponds to one unit of the error term.

Further supporting information is available at https://sites.google.com/site/fk83research/code.

Author(s)

Fabian Krueger

References

Hamilton, J.D. (1994): Time Series Analysis, Princeton University Press.

Del Negro, M. and Primiceri, G.E. (2015): 'Time Varying Structural Vector Autoregressions and Monetary Policy: A Corrigendum', Review of Economic Studies 82, 1342-1345. Supplementary
material available at http://restud.oxfordjournals.org/content/82/4/1342/suppl/DC1 (accessed: 2015-11-17).

Examples

## Not run:

data(usmacro)
set.seed(5813)

# Run BVAR; save parameters
fit <- bvar.sv.tvp(usmacro, save.parameters = TRUE)

# Impulse responses
impulse.responses(fit)

## End(Not run)

sim.var1.sv.tvp        Simulate from a VAR(1) with Stochastic Volatility and Time-Varying Parameters

Description

Simulate from a VAR(1) with Stochastic Volatility and Time-Varying Parameters.

Usage

sim.var1.sv.tvp(B0 = NULL, A0 = NULL, Sig0 = NULL, Q = NULL, S = NULL, W = NULL, t = 500, init = 1000)

Arguments

B0  Initial values of mean parameters: Matrix of dimension [M, M+1], where the first column holds the intercept vector and the other columns hold the matrix of first-order autoregressive coefficients. By default (NULL), B0 corresponds to M = 2 uncorrelated zero-mean processes with moderate persistence (first-order autocorrelation of 0.6).

A0  Initial values for (transformed) error correlation parameters: Vector of length 0.5*M*(M-1). Defaults to a vector of zeros.

Sig0  Initial values for log error term volatility parameters: Vector of length M. Defaults to a vector of zeros.

Q, S, W  Covariance matrices for the innovation terms in the time-varying parameters (B, A, Sig). The matrices are symmetric, with dimensions equal to the number of elements in B, A and Sig, respectively. Default to diagonal matrices with very small terms (1e-10) on the main diagonal. This corresponds to essentially no time variation in the parameters and error term matrix elements.

t  Number of time periods to simulate.

init  Number of draws to initialize simulation (to decrease the impact of starting values).

Value

data  Simulated data, with rows corresponding to time and columns corresponding to the M system variables.

Beta  Array of dimension [M, M+1, t]. Submatrix [,,l] holds the parameter matrix for time period l.

H  Array of dimension [M, M, t]. Submatrix [,,l] holds the error term covariance matrix for period l.

Note

The choice of 'reasonable' values for the elements of Q, S and W requires some care. If the elements of these matrices are too large, parameter variation can easily become
excessive. Too large elements of Q can lead the parameter matrix B into regions which correspond to explosive processes. Too large elements in S and (especially) W may lead to excessive error term variances.

Author(s)

Fabian Krueger

References

Primiceri, G.E. (2005): 'Time Varying Structural Vector Autoregressions and Monetary Policy', Review of Economic Studies 72, 821-852.

See Also

bvar.sv.tvp can be used to fit a model on data generated by sim.var1.sv.tvp. This can be a useful way to analyze the performance of the estimation methods.

Examples

## Not run:

# Generate data from a model with moderate time variation in the parameters
# and error term variances
set.seed(5813)
sim <- sim.var1.sv.tvp(Q = 1e-5*diag(6), S = 1e-5*diag(1), W = 1e-5*diag(2))

# Plot both series
matplot(sim$data, type = "l")

# Plot AR(1) parameters of both equations
matplot(cbind(sim$Beta[1,2,], sim$Beta[2,3,]), type = "l")

## End(Not run)

Index

∗ datasets: Example data sets
∗ forecasting methods: bvar.sv.tvp, sim.var1.sv.tvp
∗ helpers: helpers
∗ impulse response analysis: impulse.responses
∗ package: bvarsv-package

bvar.sv.tvp; bvarsv (bvarsv-package); bvarsv-package; chol; Example data sets; helpers; impulse.responses; parameter.draws; parameter.draws (helpers); predictive.density; predictive.density (helpers); predictive.draws; predictive.draws (helpers); sim.var1.sv.tvp; usmacro (Example data sets)
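The warning in the Note above (random-walk drift in the parameters, with small Q and W keeping that drift in check) is easy to see in a toy version of the data-generating process. The following is a minimal pure-Python sketch of a bivariate VAR(1) with random-walk AR coefficients and stochastic volatility; it is illustrative only, not the package's R implementation: the function name and the innovation scales q_sd and w_sd are made up here, the coefficient matrix is kept diagonal, and the error-correlation block (the role of S) is held at zero.

```python
import math
import random

def simulate_tvp_var1(t=200, seed=5813):
    """Simulate two independent AR(1) processes whose AR coefficients and
    log volatilities each follow a random walk (a stripped-down analogue of
    the simulator documented above)."""
    rng = random.Random(seed)
    b11, b22 = 0.6, 0.6        # initial AR coefficients (persistence 0.6)
    s1, s2 = 0.0, 0.0          # initial log standard deviations
    q_sd, w_sd = 1e-3, 1e-3    # innovation sd's for B and Sig (cf. Q and W)
    y1, y2 = 0.0, 0.0
    data = []
    for _ in range(t):
        # random-walk evolution of the time-varying parameters
        b11 += rng.gauss(0.0, q_sd)
        b22 += rng.gauss(0.0, q_sd)
        s1 += rng.gauss(0.0, w_sd)
        s2 += rng.gauss(0.0, w_sd)
        # measurement equation with stochastic volatility
        y1 = b11 * y1 + math.exp(s1) * rng.gauss(0.0, 1.0)
        y2 = b22 * y2 + math.exp(s2) * rng.gauss(0.0, 1.0)
        data.append((y1, y2))
    return data

series = simulate_tvp_var1()
```

Raising q_sd by a couple of orders of magnitude quickly pushes the AR coefficients past one in absolute value, which is exactly the explosive behavior the Note warns about for overly large Q.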
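Similarly, the orthogonalized impulse responses described under impulse.responses reduce, for a fixed-coefficient VAR(1), to multiplying the moving-average matrices Psi_h = B^h by the Cholesky factor L of the error covariance (Hamilton 1994, Equations [11.4.1] and [11.4.22]). A pure-Python bivariate sketch, illustrative rather than the package code (the package additionally works with the time-t posterior draws of B and H, and simulates over those draws):

```python
import math

def cholesky2(m):
    """Lower-triangular Cholesky factor L of a 2x2 SPD matrix m = L L'."""
    l11 = math.sqrt(m[0][0])
    l21 = m[1][0] / l11
    l22 = math.sqrt(m[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def irf(B, Sigma, impulse=0, response=1, nhor=20):
    """Orthogonalized IRF of a bivariate VAR(1): the response at horizon h
    to a one-standard-deviation shock is (Psi_h @ L)[response][impulse],
    with Psi_h = B^h and L = chol(Sigma)."""
    L = cholesky2(Sigma)
    Psi = [[1.0, 0.0], [0.0, 1.0]]   # Psi_0 = identity
    out = []
    for _ in range(nhor + 1):
        out.append(matmul2(Psi, L)[response][impulse])
        Psi = matmul2(B, Psi)        # Psi_{h+1} = B Psi_h
    return out

# contemporaneous response of variable 2 to a one-s.d. shock in variable 1
resp = irf(B=[[0.5, 0.1], [0.2, 0.4]], Sigma=[[1.0, 0.3], [0.3, 1.0]])
```

As in the Note above, the variable ordering matters: permuting the system variables changes L and hence every orthogonalized response.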
Glossary of Econometrics Terms (English-Chinese)
计量经济学英语词汇计量经济学英语词汇Absolute deviation, 绝对离差Absolute number, 绝对数Absolute residuals, 绝对残差Acceleration array, 加速度立体阵Acceleration in an arbitrary direction, 任意方向上的加速度Acceleration normal, 法向加速度Acceleration space dimension, 加速度空间的维数Acceleration tangential, 切向加速度Acceleration vector, 加速度向量Acceptable hypothesis, 可接受假设Accumulation, 累积Accuracy, 准确度Actual frequency, 实际频数Adaptive estimator, 自适应估计量Addition, 相加Addition theorem, 加法定理Additive Noise, 加性噪声Additivity, 可加性Adjusted rate, 调整率Adjusted value, 校正值Admissible error, 容许误差Aggregation, 聚集性Alpha factoring,α因子法Alternative hypothesis, 备择假设Among groups, 组间Amounts, 总量Analysis of correlation, 相关分析Analysis of covariance, 协方差分析Analysis Of Effects, 效应分析Analysis Of Variance, 方差分析Analysis of regression, 回归分析Analysis of time series, 时间序列分析Analysis of variance, 方差分析Angular transformation, 角转换ANOVA (analysis of variance), 方差分析ANOVA Models, 方差分析模型ANOVA table and eta, 分组计算方差分析Arcing, 弧/弧旋Arcsine transformation, 反正弦变换Area 区域图Area under the curve, 曲线面积AREG , 评估从一个时间点到下一个时间点回归相关时的误差ARIMA, 季节和非季节性单变量模型的极大似然估计Arithmetic grid paper, 算术格纸Arithmetic mean, 算术平均数Arrhenius relation, 艾恩尼斯关系Assessing fit, 拟合的评估Associative laws, 结合律Asymmetric distribution, 非对称分布Asymptotic bias, 渐近偏倚Asymptotic efficiency, 渐近效率Asymptotic variance, 渐近方差Attributable risk, 归因危险度Attribute data, 属性资料Attribution, 属性Autocorrelation, 自相关Autocorrelation of residuals, 残差的自相关Average, 平均数Average confidence interval length, 平均置信区间长度Average growth rate, 平均增长率Bar chart, 条形图Bar graph, 条形图Base period, 基期Bayes' theorem , Bayes定理Bell-shaped curve, 钟形曲线Bernoulli distribution, 伯努力分布Best-trim estimator, 最好切尾估计量Bias, 偏性Binary logistic regression, 二元逻辑斯蒂回归Binomial distribution, 二项分布Bisquare, 双平方Bivariate Correlate, 二变量相关Bivariate normal distribution,双变量正态分布Bivariate normal population, 双变量正态总体Biweight interval, 双权区间Biweight M-estimator, 双权M估计量Block, 区组/配伍组BMDP(Biomedical computer programs), BMDP统计软件包Boxplots, 箱线图/箱尾图Breakdown bound, 崩溃界/崩溃点Canonical correlation, 典型相关Caption, 纵标目Case-control study, 
病例对照研究Categorical variable, 分类变量Catenary, 悬链线Cauchy distribution, 柯西分布Cause-and-effect relationship, 因果关系Cell, 单元Censoring, 终检Center of symmetry, 对称中心Centering and scaling, 中心化和定标Central tendency, 集中趋势Central value, 中心值CHAID -χ2 Automatic Interaction Detector, 卡方自动交互检测Chance, 机遇Chance error, 随机误差Chance variable, 随机变量Characteristic equation, 特征方程Characteristic root, 特征根Characteristic vector, 特征向量Chebshev criterion of fit, 拟合的切比雪夫准则Chernoff faces, 切尔诺夫脸谱图Chi-square test, 卡方检验/χ2检验Choleskey decomposition, 乔洛斯基分解Circle chart, 圆图Class interval, 组距Class mid-value, 组中值Class upper limit, 组上限Classified variable, 分类变量Cluster analysis, 聚类分析Cluster sampling, 整群抽样Code, 代码Coded data, 编码数据Coding, 编码Coefficient of contingency, 列联系数Coefficient of determination, 决定系数Coefficient of multiple correlation, 多重相关系数Coefficient of partial correlation, 偏相关系数Coefficient of production-moment correlation, 积差相关系数Coefficient of rank correlation, 等级相关系数Coefficient of regression, 回归系数Coefficient of skewness, 偏度系数Coefficient of variation, 变异系数Cohort study, 队列研究Collinearity, 共线性Column, 列Column effect, 列效应Column factor, 列因素Combination pool, 合并Combinative table, 组合表Common factor, 共性因子Common regression coefficient, 公共回归系数Common value, 共同值Common variance, 公共方差Common variation, 公共变异Communality variance, 共性方差Comparability, 可比性Comparison of bathes, 批比较Comparison value, 比较值Compartment model, 分部模型Compassion, 伸缩Complement of an event, 补事件Complete association, 完全正相关Complete dissociation, 完全不相关Complete statistics, 完备统计量Completely randomized design, 完全随机化设计Composite event, 联合事件Composite events, 复合事件Concavity, 凹性Conditional expectation, 条件期望Conditional likelihood, 条件似然Conditional probability, 条件概率Conditionally linear, 依条件线性Confidence interval, 置信区间Confidence limit, 置信限Confidence lower limit, 置信下限Confidence upper limit, 置信上限Confirmatory Factor Analysis , 验证性因子分析Confirmatory research, 证实性实验研究Confounding factor, 混杂因素Conjoint, 联合分析Consistency, 相合性Consistency check, 一致性检验Consistent asymptotically normal estimate, 
相合渐近正态估计Consistent estimate, 相合估计Constrained nonlinear regression, 受约束非线性回归Constraint, 约束Contaminated distribution, 污染分布Contaminated Gausssian, 污染高斯分布Contaminated normal distribution, 污染正态分布Contamination, 污染Contamination model, 污染模型Contingency table, 列联表Contour, 边界线Contribution rate, 贡献率Control, 对照, 质量控制图Controlled experiments, 对照实验Conventional depth, 常规深度Convolution, 卷积Corrected factor, 校正因子Corrected mean, 校正均值Correction coefficient, 校正系数Correctness, 正确性Correlation coefficient, 相关系数Correlation, 相关性Correlation index, 相关指数Correspondence, 对应Counting, 计数Counts, 计数/频数Covariance, 协方差Covariant, 共变Cox Regression, Cox回归Criteria for fitting, 拟合准则Criteria of least squares, 最小二乘准则Critical ratio, 临界比Critical region, 拒绝域Critical value, 临界值Cross-over design, 交叉设计Cross-section analysis, 横断面分析Cross-section survey, 横断面调查Crosstabs , 交叉表Crosstabs 列联表分析Cross-tabulation table, 复合表Cube root, 立方根Cumulative distribution function, 分布函数Cumulative probability, 累计概率Curvature, 曲率/弯曲Curvature, 曲率Curve Estimation, 曲线拟合Curve fit , 曲线拟和Curve fitting, 曲线拟合Curvilinear regression, 曲线回归Curvilinear relation, 曲线关系Cut-and-try method, 尝试法Cycle, 周期Cyclist, 周期性D test, D检验Data acquisition, 资料收集Data bank, 数据库Data capacity, 数据容量Data deficiencies, 数据缺乏Data handling, 数据处理Data manipulation, 数据处理Data processing, 数据处理Data reduction, 数据缩减Data set, 数据集Data sources, 数据来源Data transformation, 数据变换Data validity, 数据有效性Data-in, 数据输入Data-out, 数据输出Dead time, 停滞期Degree of freedom, 自由度Degree of precision, 精密度Degree of reliability, 可靠性程度Degression, 递减Density function, 密度函数Density of data points, 数据点的密度Dependent variable, 应变量/依变量/因变量Dependent variable, 因变量Depth, 深度Derivative matrix, 导数矩阵Derivative-free methods, 无导数方法Design, 设计Determinacy, 确定性Determinant, 行列式Determinant, 决定因素Deviation, 离差Deviation from average, 离均差Diagnostic plot, 诊断图Dichotomous variable, 二分变量Differential equation, 微分方程Direct standardization, 直接标准化法Direct Oblimin, 斜交旋转Discrete variable, 离散型变量DISCRIMINANT, 判断Discriminant analysis, 判别分析Discriminant coefficient, 
判别系数Discriminant function, 判别值Dispersion, 散布/分散度Disproportional, 不成比例的Disproportionate sub-class numbers, 不成比例次级组含量Distribution free, 分布无关性/免分布Distribution shape, 分布形状Distribution-free method, 任意分布法Distributive laws, 分配律Disturbance, 随机扰动项Dose response curve, 剂量反应曲线Double blind method, 双盲法Double blind trial, 双盲试验Double exponential distribution, 双指数分布Double logarithmic, 双对数Downward rank, 降秩Dual-space plot, 对偶空间图DUD, 无导数方法Duncan's new multiple range method, 新复极差法/Duncan新法Error Bar, 均值相关区间图Effect, 实验效应Eigenvalue, 特征值Eigenvector, 特征向量Ellipse, 椭圆Empirical distribution, 经验分布Empirical probability, 经验概率单位Enumeration data, 计数资料Equal sun-class number, 相等次级组含量Equally likely, 等可能Equivariance, 同变性Error, 误差/错误Error of estimate, 估计误差Error type I, 第一类错误Error type II, 第二类错误Estimand, 被估量Estimated error mean squares, 估计误差均方Estimated error sum of squares, 估计误差平方和Euclidean distance, 欧式距离Event, 事件Event, 事件Exceptional data point, 异常数据点Expectation plane, 期望平面Expectation surface, 期望曲面Expected values, 期望值Experiment, 实验Experimental sampling, 试验抽样Experimental unit, 试验单位Explained variance (已说明方差)Explanatory variable, 说明变量Exploratory data analysis, 探索性数据分析Explore Summarize, 探索-摘要Exponential curve, 指数曲线Exponential growth, 指数式增长EXSMOOTH, 指数平滑方法Extended fit, 扩充拟合Extra parameter, 附加参数Extrapolation, 外推法Extreme observation, 末端观测值Extremes, 极端值/极值F distribution, F分布F test, F检验Factor, 因素/因子Factor analysis, 因子分析Factor Analysis, 因子分析Factor score, 因子得分Factorial, 阶乘Factorial design, 析因试验设计False negative, 假阴性False negative error, 假阴性错误Family of distributions, 分布族Family of estimators, 估计量族Fanning, 扇面Fatality rate, 病死率Field investigation, 现场调查Field survey, 现场调查Finite population, 有限总体Finite-sample, 有限样本First derivative, 一阶导数First principal component, 第一主成分First quartile, 第一四分位数Fisher information, 费雪信息量Fitted value, 拟合值Fitting a curve, 曲线拟合Fixed base, 定基Fluctuation, 随机起伏Forecast, 预测Four fold table, 四格表Fourth, 四分点Fraction blow, 左侧比率Fractional error, 相对误差Frequency, 频率Frequency polygon, 频数多边图Frontier point, 
界限点Function relationship, 泛函关系Gamma distribution, 伽玛分布Gauss increment, 高斯增量Gaussian distribution, 高斯分布/正态分布Gauss-Newton increment, 高斯-牛顿增量General census, 全面普查Generalized least squares, 综合最小平方法GENLOG (Generalized liner models), 广义线性模型Geometric mean, 几何平均数Gini's mean difference, 基尼均差GLM (General liner models), 通用线性模型Goodness of fit, 拟和优度/配合度Gradient of determinant, 行列式的梯度Graeco-Latin square, 希腊拉丁方Grand mean, 总均值Gross errors, 重大错误Gross-error sensitivity, 大错敏感度Group averages, 分组平均Grouped data, 分组资料Guessed mean, 假定平均数Half-life, 半衰期Hampel M-estimators, 汉佩尔M估计量Happenstance, 偶然事件Harmonic mean, 调和均数Hazard function, 风险均数Hazard rate, 风险率Heading, 标目Heavy-tailed distribution, 重尾分布Hessian array, 海森立体阵Heterogeneity, 不同质Heterogeneity of variance, 方差不齐Hierarchical classification, 组内分组Hierarchical clustering method, 系统聚类法High-leverage point, 高杠杆率点High-Low, 低区域图Higher Order Interaction Effects,高阶交互作用HILOGLINEAR, 多维列联表的层次对数线性模型Hinge, 折叶点Histogram, 直方图Historical cohort study, 历史性队列研究Holes, 空洞HOMALS, 多重响应分析Homogeneity of variance, 方差齐性Homogeneity test, 齐性检验Huber M-estimators, 休伯M估计量Hyperbola, 双曲线Hypothesis testing, 假设检验Hypothetical universe, 假设总体Image factoring,, 多元回归法Impossible event, 不可能事件Independence, 独立性Independent variable, 自变量Index, 指标/指数Indirect standardization, 间接标准化法Individual, 个体Inference band, 推断带Infinite population, 无限总体Infinitely great, 无穷大Infinitely small, 无穷小Influence curve, 影响曲线Information capacity, 信息容量Initial condition, 初始条件Initial estimate, 初始估计值Initial level, 最初水平Interaction, 交互作用Interaction terms, 交互作用项Intercept, 截距Interpolation, 内插法Interquartile range, 四分位距Interval estimation, 区间估计Intervals of equal probability, 等概率区间Intrinsic curvature, 固有曲率Invariance, 不变性Inverse matrix, 逆矩阵Inverse probability, 逆概率Inverse sine transformation, 反正弦变换Iteration, 迭代Jacobian determinant, 雅可比行列式Joint distribution function, 分布函数Joint probability, 联合概率Joint probability distribution, 联合概率分布K-Means Cluster逐步聚类分析K means method, 逐步聚类法Kaplan-Meier, 评估事件的时间长度Kaplan-Merier 
chart,Kaplan-Merier图Kendall's rank correlation, Kendall等级相关Kinetic, 动力学Kolmogorov-Smirnove test, 柯尔莫哥洛夫-斯米尔诺夫检验Kruskal and Wallis test, Kruskal及Wallis检验/多样本的秩和检验/H检验Kurtosis, 峰度Lack of fit, 失拟Ladder of powers, 幂阶梯Lag, 滞后Large sample, 大样本Large sample test, 大样本检验Latin square, 拉丁方Latin square design, 拉丁方设计Leakage, 泄漏Least favorable configuration, 最不利构形Least favorable distribution, 最不利分布Least significant difference, 最小显着差法Least square method, 最小二乘法Least Squared Criterion,最小二乘方准则Least-absolute-residuals estimates, 最小绝对残差估计Least-absolute-residuals fit, 最小绝对残差拟合Least-absolute-residuals line, 最小绝对残差线Legend, 图例L-estimator, L估计量L-estimator of location, 位置L估计量L-estimator of scale, 尺度L估计量Level, 水平Leveage Correction,杠杆率校正Life expectance, 预期期望寿命Life table, 寿命表Life table method, 生命表法Light-tailed distribution, 轻尾分布Likelihood function, 似然函数Likelihood ratio, 似然比line graph, 线图Linear correlation, 直线相关Linear equation, 线性方程Linear programming, 线性规划Linear regression, 直线回归Linear Regression, 线性回归Linear trend, 线性趋势Loading, 载荷Location and scale equivariance, 位置尺度同变性Location equivariance, 位置同变性Location invariance, 位置不变性Location scale family, 位置尺度族Log rank test, 时序检验Logarithmic curve, 对数曲线Logarithmic normal distribution, 对数正态分布Logarithmic scale, 对数尺度Logarithmic transformation, 对数变换Logic check, 逻辑检查Logistic distribution, 逻辑斯特分布Logit transformation, Logit 转换LOGLINEAR, 多维列联表通用模型Lognormal distribution, 对数正态分布Lost function, 损失函数Low correlation, 低度相关Lower limit, 下限Lowest-attained variance, 最小可达方差LSD, 最小显着差法的简称Lurking variable, 潜在变量Main effect, 主效应Major heading, 主辞标目Marginal density function, 边缘密度函数Marginal probability, 边缘概率Marginal probability distribution, 边缘概率分布Matched data, 配对资料Matched distribution, 匹配过分布Matching of distribution, 分布的匹配Matching of transformation, 变换的匹配Mathematical expectation, 数学期望Mathematical model, 数学模型Maximum L-estimator, 极大极小L 估计量Maximum likelihood method, 最大似然法Mean, 均数Mean squares between groups, 组间均方Mean squares within group, 组内均方Means (Compare means), 均值-均值比较Median, 
中位数Median effective dose, 半数效量Median lethal dose, 半数致死量Median polish, 中位数平滑Median test, 中位数检验Minimal sufficient statistic, 最小充分统计量Minimum distance estimation, 最小距离估计Minimum effective dose, 最小有效量Minimum lethal dose, 最小致死量Minimum variance estimator, 最小方差估计量MINITAB, 统计软件包Minor heading, 宾词标目Missing data, 缺失值Model specification, 模型的确定Modeling Statistics , 模型统计Models for outliers, 离群值模型Modifying the model, 模型的修正Modulus of continuity, 连续性模Morbidity, 发病率Most favorable configuration, 最有利构形MSC(多元散射校正)Multidimensional Scaling (ASCAL), 多维尺度/多维标度Multinomial Logistic Regression , 多项逻辑斯蒂回归Multiple comparison, 多重比较Multiple correlation , 复相关Multiple covariance, 多元协方差Multiple linear regression, 多元线性回归Multiple response , 多重选项Multiple solutions, 多解Multiplication theorem, 乘法定理Multiresponse, 多元响应Multi-stage sampling, 多阶段抽样Multivariate T distribution, 多元T分布Mutual exclusive, 互不相容Mutual independence, 互相独立Natural boundary, 自然边界Natural dead, 自然死亡Natural zero, 自然零Negative correlation, 负相关Negative linear correlation, 负线性相关Negatively skewed, 负偏Newman-Keuls method, q检验NK method, q检验No statistical significance, 无统计意义Nominal variable, 名义变量Nonconstancy of variability, 变异的非定常性Nonlinear regression, 非线性相关Nonparametric statistics, 非参数统计Nonparametric test, 非参数检验Nonparametric tests, 非参数检验Normal deviate, 正态离差Normal distribution, 正态分布Normal equation, 正规方程组Normal P-P, 正态概率分布图Normal Q-Q, 正态概率单位分布图Normal ranges, 正常范围Normal value, 正常值Normalization 归一化Nuisance parameter, 多余参数/讨厌参数Null hypothesis, 无效假设Numerical variable, 数值变量Objective function, 目标函数Observation unit, 观察单位Observed value, 观察值One sided test, 单侧检验One-way analysis of variance, 单因素方差分析Oneway ANOVA , 单因素方差分析Open sequential trial, 开放型序贯设计Optrim, 优切尾Optrim efficiency, 优切尾效率Order statistics, 顺序统计量Ordered categories, 有序分类Ordinal logistic regression , 序数逻辑斯蒂回归Ordinal variable, 有序变量Orthogonal basis, 正交基Orthogonal design, 正交试验设计Orthogonality conditions, 正交条件ORTHOPLAN, 正交设计Outlier cutoffs, 离群值截断点Outliers, 极端值OVERALS , 多组变量的非线性正规相关Overshoot, 迭代过度Paired design, 
配对设计Paired sample, 配对样本Pairwise slopes, 成对斜率Parabola, 抛物线Parallel tests, 平行试验Parameter, 参数Parametric statistics, 参数统计Parametric test, 参数检验Pareto, 直条构成线图(又称佩尔托图)Partial correlation, 偏相关Partial regression, 偏回归Partial sorting, 偏排序Partials residuals, 偏残差Pattern, 模式PCA(主成分分析)Pearson curves, 皮尔逊曲线Peeling, 退层Percent bar graph, 百分条形图Percentage, 百分比Percentile, 百分位数Percentile curves, 百分位曲线Periodicity, 周期性Permutation, 排列P-estimator, P估计量Pie graph, 构成图,饼图Pitman estimator, 皮特曼估计量Pivot, 枢轴量Planar, 平坦Planar assumption, 平面的假设PLANCARDS, 生成试验的计划卡PLS(偏最小二乘法)Point estimation, 点估计Poisson distribution, 泊松分布Polishing, 平滑Polled standard deviation, 合并标准差Polled variance, 合并方差Polygon, 多边图Polynomial, 多项式Polynomial curve, 多项式曲线Population, 总体Population attributable risk, 人群归因危险度Positive correlation, 正相关Positively skewed, 正偏Posterior distribution, 后验分布Power of a test, 检验效能Precision, 精密度Predicted value, 预测值Preliminary analysis, 预备性分析Principal axis factoring,主轴因子法Principal component analysis, 主成分分析Prior distribution, 先验分布Prior probability, 先验概率Probabilistic model, 概率模型probability, 概率Probability density, 概率密度Product moment, 乘积矩/协方差Profile trace, 截面迹图Proportion, 比/构成比Proportion allocation in stratified random sampling, 按比例分层随机抽样Proportionate, 成比例Proportionate sub-class numbers, 成比例次级组含量Prospective study, 前瞻性调查Proximities, 亲近性Pseudo F test, 近似F检验Pseudo model, 近似模型Pseudosigma, 伪标准差Purposive sampling, 有目的抽样QR decomposition, QR分解Quadratic approximation, 二次近似Qualitative classification, 属性分类Qualitative method, 定性方法Quantile-quantile plot, 分位数-分位数图/Q-Q图Quantitative analysis, 定量分析Quartile, 四分位数Quick Cluster, 快速聚类Radix sort, 基数排序Random allocation, 随机化分组Random blocks design, 随机区组设计Random event, 随机事件Randomization, 随机化Range, 极差/全距Rank correlation, 等级相关Rank sum test, 秩和检验Rank test, 秩检验Ranked data, 等级资料Rate, 比率Ratio, 比例Raw data, 原始资料Raw residual, 原始残差Rayleigh's test, 雷氏检验Rayleigh's Z, 雷氏Z值Reciprocal, 倒数Reciprocal transformation, 倒数变换Recording, 记录Redescending estimators, 回降估计量Reducing dimensions, 降维Re-expression, 
重新表达Reference set, 标准组Region of acceptance, 接受域Regression coefficient, 回归系数Regression sum of square, 回归平方和Rejection point, 拒绝点Relative dispersion, 相对离散度Relative number, 相对数Reliability, 可靠性Reparametrization, 重新设置参数Replication, 重复Report Summaries, 报告摘要Residual sum of square, 剩余平方和residual variance (剩余方差) Resistance, 耐抗性Resistant line, 耐抗线Resistant technique, 耐抗技术R-estimator of location, 位置R估计量R-estimator of scale, 尺度R估计量Retrospective study, 回顾性调查Ridge trace, 岭迹Ridit analysis, Ridit分析Rotation, 旋转Rounding, 舍入Row, 行Row effects, 行效应Row factor, 行因素RXC table, RXC表Sample, 样本Sample regression coefficient, 样本回归系数Sample size, 样本量Sample standard deviation, 样本标准差Sampling error, 抽样误差SAS(Statistical analysis system ), SAS 统计软件包Scale, 尺度/量表Scatter diagram, 散点图Schematic plot, 示意图/简图Score test, 计分检验Screening, 筛检SEASON, 季节分析Second derivative, 二阶导数Second principal component, 第二主成分SEM (Structural equation modeling), 结构化方程模型Semi-logarithmic graph, 半对数图Semi-logarithmic paper, 半对数格纸Sensitivity curve, 敏感度曲线Sequential analysis, 贯序分析Sequence, 普通序列图Sequential data set, 顺序数据集Sequential design, 贯序设计Sequential method, 贯序法Sequential test, 贯序检验法Serial tests, 系列试验Short-cut method, 简捷法Sigmoid curve, S形曲线Sign function, 正负号函数Sign test, 符号检验Signed rank, 符号秩Significant Level, 显着水平Significance test, 显着性检验Significant figure, 有效数字Simple cluster sampling, 简单整群抽样Simple correlation, 简单相关Simple random sampling, 简单随机抽样Simple regression, 简单回归simple table, 简单表Sine estimator, 正弦估计量Single-valued estimate, 单值估计Singular matrix, 奇异矩阵Skewed distribution, 偏斜分布Skewness, 偏度Slash distribution, 斜线分布Slope, 斜率Smirnov test, 斯米尔诺夫检验Source of variation, 变异来源Spearman rank correlation, 斯皮尔曼等级相关Specific factor, 特殊因子Specific factor variance, 特殊因子方差Spectra , 频谱Spherical distribution, 球型正态分布Spread, 展布SPSS(Statistical package for the social science), SPSS统计软件包Spurious correlation, 假性相关Square root transformation, 平方根变换Stabilizing variance, 稳定方差Standard deviation, 标准差Standard error, 标准误Standard error of difference, 差别的标准误Standard error of 
estimate, 标准估计误差Standard error of rate, 率的标准误Standard normal distribution, 标准正态分布Standardization, 标准化Starting value, 起始值Statistic, 统计量Statistical control, 统计控制Statistical graph, 统计图Statistical inference, 统计推断Statistical table, 统计表Steepest descent, 最速下降法Stem and leaf display, 茎叶图Step factor, 步长因子Stepwise regression, 逐步回归Storage, 存Strata, 层(复数)Stratified sampling, 分层抽样Stratified sampling, 分层抽样Strength, 强度Stringency, 严密性Structural relationship, 结构关系Studentized residual, 学生化残差/t化残差Sub-class numbers, 次级组含量Subdividing, 分割Sufficient statistic, 充分统计量Sum of products, 积和Sum of squares, 离差平方和Sum of squares about regression, 回归平方和Sum of squares between groups, 组间平方和Sum of squares of partial regression, 偏回归平方和Sure event, 必然事件Survey, 调查Survival, 生存分析Survival rate, 生存率Suspended root gram, 悬吊根图Symmetry, 对称Systematic error, 系统误差Systematic sampling, 系统抽样Tags, 标签Tail area, 尾部面积Tail length, 尾长Tail weight, 尾重Tangent line, 切线Target distribution, 目标分布Taylor series, 泰勒级数Test(检验)Test of linearity, 线性检验Tendency of dispersion, 离散趋势Testing of hypotheses, 假设检验Theoretical frequency, 理论频数Time series, 时间序列Tolerance interval, 容忍区间Tolerance lower limit, 容忍下限Tolerance upper limit, 容忍上限Torsion, 扰率Total sum of square, 总平方和Total variation, 总变异Transformation, 转换Treatment, 处理Trend, 趋势Trend of percentage, 百分比趋势Trial, 试验Trial and error method, 试错法Tuning constant, 细调常数Two sided test, 双向检验Two-stage least squares, 二阶最小平方Two-stage sampling, 二阶段抽样Two-tailed test, 双侧检验Two-way analysis of variance, 双因素方差分析Two-way table, 双向表Type I error, 一类错误/α错误Type II error, 二类错误/β错误UMVU, 方差一致最小无偏估计简称Unbiased estimate, 无偏估计Unconstrained nonlinear regression , 无约束非线性回归Unequal subclass number, 不等次级组含量Ungrouped data, 不分组资料Uniform coordinate, 均匀坐标Uniform distribution, 均匀分布Uniformly minimum varianceunbiased estimate, 方差一致最小无偏估计Unit, 单元Unordered categories, 无序分类Unweighted least squares, 未加权最小平方法Upper limit, 上限Upward rank, 升秩Vague concept, 模糊概念Validity, 有效性VARCOMP (Variance component estimation), 方差元素估计Variability, 变异性Variable, 
变量Variance, 方差Variation, 变异Varimax orthogonal rotation, 方差最大正交旋转Volume of distribution, 容积W test, W检验Weibull distribution, 威布尔分布Weight, 权数Weighted Chi-square test, 加权卡方检验/Cochran检验Weighted linear regression method, 加权直线回归Weighted mean, 加权平均数Weighted mean square, 加权平均方差Weighted sum of square, 加权平方和Weighting coefficient, 权重系数Weighting method, 加权法W-estimation, W估计量W-estimation of location, 位置W估计量Width, 宽度Wilcoxon paired test, 威斯康星配对法/配对符号秩和检验Wild point, 野点/狂点Wild value, 野值/狂值Winsorized mean, 缩尾均值Withdraw, 失访Youden's index, 尤登指数Z test, Z检验Zero correlation, 零相关Z-transformation, Z变换Z-transformation, Z变换。
Research on Mesoscale Cracking Simulation Methods for Asphalt Concrete
The purpose of this study is to establish a more efficient and accurate mesoscale model of asphalt concrete for studying its cracking behavior. Two numerical modeling approaches are used to analyze cracking in asphalt concrete. The first is a finite element model with randomly distributed aggregates and embedded cohesive elements. This approach treats asphalt concrete as four key components: aggregates, asphalt mortar, the interfacial transition zone, and initial defects. Following the gradation curve developed by Fuller and Thompson, aggregate particles are modeled as randomly distributed polygons of varying sizes, and several effective measures are adopted to raise the success rate of aggregate placement. Among initial defects, only air voids are considered in this study. Zero-thickness cohesive elements are inserted into the initial mesh of the asphalt mortar and along the aggregate-mortar interfaces to simulate the cracking process. Earlier crack initiation criteria and traction-separation laws are modified and used to describe the failure behavior of the cohesive elements. User-defined programs for aggregate placement, cohesive element insertion, and the modified constitutive model were developed in Python and embedded in the commercial finite element package Abaqus. Comparison with experimental results verifies the validity and accuracy of the proposed mesoscale model, and the influence of the mesostructure on the macroscopic performance of asphalt concrete is investigated. The second approach is a mesoscale rigid body spring model based on Voronoi polygons. Building on the first method, circular aggregates are placed randomly using the random aggregate placement technique; a Voronoi mesh is then generated from the aggregate centroids, and the interactions between aggregates are identified from the geometry of the shared Voronoi cell boundaries. The bonding material between aggregates is condensed into interface springs whose stiffness is defined by the thickness of the bonding material. The method is validated by two-dimensional and three-dimensional numerical uniaxial compression tests.

Keywords: mesoscale asphalt concrete; random aggregate placement algorithm; cohesive element insertion algorithm; Voronoi polygon; mesoscale rigid body spring method
ABSTRACT
The purpose of this study is to establish a more effective and accurate mesoscale asphalt mixture model to study its cracking behaviour. In this paper, two numerical modelling methods are used to analyze the cracking of asphalt mixture. The first is a finite element model with random aggregate distribution and embedded cohesive elements. This method divides asphalt mixture into four important components: aggregate, asphalt mortar, the interfacial transition zone, and initial defects. Aggregate particles were modelled as randomly distributed polygons of different sizes according to the gradation curve developed by Fuller and Thompson. Several effective methods are adopted to improve the success rate of aggregate delivery. For the initial defects, only air voids were considered in this study. In the initial mesh of the asphalt mortar, zero-thickness cohesive elements are inserted along the interfaces between aggregate and asphalt mortar to simulate the cracking process of the mixture. The previous crack-initiation criterion and traction-separation law are modified to describe the failure behaviour of the cohesive elements. Based on the Python language, user-defined programs for aggregate delivery, cohesive element insertion and the modified constitutive model were developed and embedded in Abaqus, a commercial finite element software package. Through comparison with experimental results, the validity and accuracy of the proposed mesoscale model are verified, and the effect of the mesostructure on the macroscopic performance of asphalt mixture is studied.
The second method uses a mesoscale rigid-body spring model based on Voronoi polygons. Building on the first method, circular aggregates are placed randomly with the random aggregate delivery technique; Voronoi grids are then generated from the aggregate centroids, and the interactions between aggregates are identified from the geometry of the Voronoi cell boundaries. The bonding material between aggregates is condensed into interface springs between aggregates, whose stiffness is defined by the thickness of the bonding material. The validity of the method is verified by two-dimensional and three-dimensional numerical uniaxial compression experiments.
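The random aggregate delivery that both methods rely on can be sketched as a take-and-place loop; the radii, specimen size, and retry limit below are illustrative assumptions rather than parameters from the thesis.

```python
import math
import random

def deliver_aggregates(radii, width, height, max_tries=500):
    """Place non-overlapping circular aggregates in a rectangular specimen,
    largest first, by random trial and error ("take and place").
    Returns a list of (x, y, r) tuples."""
    placed = []
    for r in sorted(radii, reverse=True):       # large particles first
        for _ in range(max_tries):
            x = random.uniform(r, width - r)    # keep the circle inside the domain
            y = random.uniform(r, height - r)
            # reject the trial position if it overlaps any placed aggregate
            if all(math.hypot(x - px, y - py) >= r + pr
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            raise RuntimeError(f"could not place aggregate of radius {r}")
    return placed

random.seed(0)
aggs = deliver_aggregates([8, 6, 5, 4, 3, 3, 2, 2], width=100, height=100)
```

Placing the large particles first is one of the simple tricks that raises the delivery success rate; the centroids of the placed circles are also exactly the seed points from which the Voronoi mesh of the second method can be generated.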
AS Physics Past Paper Questions Classified by Topic
Chapter 3 Forces

Section A Multiple Choice

1 A cylindrical block of wood has a cross-sectional area A and weight W. It is totally immersed in water with its axis vertical. The block experiences pressures p_t and p_b at its top and bottom surfaces respectively. Which of the following expressions is equal to the upthrust on the block? (02s)
A (p_b - p_t)A + W   B (p_b - p_t)   C (p_b - p_t)A   D (p_b - p_t)A - W

2 The vector diagram shows three coplanar forces acting on an object at P. (02s) The magnitude of the resultant of these three forces is 1 N. What is the direction of this resultant?
A ↓   B ↘   C ↙   D ↗

3 A submarine descends vertically at constant velocity. The three forces acting on the submarine are viscous drag, upthrust and weight. (02s) Which relationship between their magnitudes is correct?
A weight < drag   B weight = drag   C weight < upthrust   D weight > upthrust

4 A ruler of length 0.30 m is pivoted at its centre. Equal and opposite forces of magnitude 2.0 N are applied to the ends of the ruler, creating a couple as shown. What is the magnitude of the torque of the couple on the ruler when it is in the position shown? (02s)
A 0.23 N m   B 0.39 N m   C 0.46 N m   D 0.60 N m

5 The diagram shows two vectors X and Y. (02s) In which vector triangle does the vector Z show the magnitude and direction of vector X - Y?

6 A uniform metre rule of mass 100 g is supported by a knife-edge at the 40 cm mark and a string at the 100 cm mark. The string passes round a frictionless pulley and carries a mass of 20 g as shown in the diagram.
(02w) At which mark on the rule must a 50 g mass be suspended so that the rule balances?
A 4 cm   B 36 cm   C 44 cm   D 96 cm

7 The diagrams represent systems of coplanar forces acting at a point. The lengths of the force vectors represent the magnitudes of the forces. (02w) Which system of forces is in equilibrium?

8 What is meant by the weight of an object? (02w)
A the gravitational field acting on the object   B the gravitational force acting on the object   C the mass of the object multiplied by gravity   D the object's mass multiplied by its acceleration

9 Which of the following pairs of forces, acting on a circular object, constitutes a couple? (02w)

10 A pendulum bob is held stationary by a horizontal force H. The three forces acting on the bob are shown in the diagram. (02w) The tension in the string of the pendulum is T. The weight of the pendulum bob is W. Which statement is correct?
A H = T cos 30°   B T = H sin 30°   C W = T cos 30°   D W = T sin 30°

11 Two forces, each of 10 N, act at a point P as shown in the diagram. The angle between the directions of the forces is 120°. (03s) What is the magnitude of the resultant force?
A 5 N   B 10 N   C 17 N   D 20 N

12 The diagram shows four forces applied to a circular object. (03s) Which of the following describes the resultant force and resultant torque on the object?

13 …and sideways force due to the wind, as shown in the diagram. (03s) What is the vertical component of the resultant force on the balloon?
A 500 N   B 1000 N   C 10 000 N   D 10 500 N

14 A force of 5 N may be represented by two perpendicular components OY and OX as shown in the diagram, which is not drawn to scale. (03w) OY is of magnitude 3 N. What is the magnitude of OX?
A 2 N   B 3 N   C 4 N   D 5 N

15 A hinged door is held closed in the horizontal position by a cable. Three forces act on the door: the weight W of the door, the tension T in the cable, and the force H at the hinge. (03w) Which list gives the three forces in increasing order of magnitude?
A H, T, W   B T, H, W   C W, H, T   D W, T, H

16 A spanner is used to tighten a nut as shown.
(03w) A force F is applied at right angles to the spanner at a distance of 0.25 m from the centre of the nut. When the nut is fully tightened, the applied force is 200 N. What is the resistive torque, in an anticlockwise direction, preventing further tightening?
A 8 N m   B 25 N m   C 50 N m   D 800 N m

17 Two parallel forces, each of magnitude F, act on a body as shown. (03w) What is the magnitude of the torque on the body produced by these forces?
A Fd   B Fs   C 2Fd   D 2Fs

18 The diagram shows a sign of weight 20 N suspended from a pole attached to a wall. The pole is kept in equilibrium by a wire attached at point X of the pole. (04s) The force exerted by the pole at point X is F, and the tension in the wire is 40 N. Which diagram represents the three forces acting at point X?

19 A uniform beam of weight 50 N is 3.0 m long and is supported on a pivot situated 1.0 m from one end. When a load of weight W is hung from that end, the beam is in equilibrium, as shown in the diagram. (04s) What is the value of W?
A 25 N   B 50 N   C 75 N   D 100 N

20 Which two vector diagrams represent forces in equilibrium? (04w)
A P and Q   B Q and R   C R and S   D S and P

21 A long uniform beam is pivoted at one end.
A force of 300 N is applied to hold the beam horizontally. (04w) What is the weight of the beam?
A 300 N   B 480 N   C 600 N   D 960 N

22 The vector diagram shows three coplanar forces acting on an object at P. (05s) The magnitude of the resultant of these three forces is 1 N. What is the direction of this resultant?
A ↓   B ↘   C ↙   D ↗

23 An L-shaped rigid lever arm is pivoted at point P. (05s) Three forces act on the lever arm, as shown in the diagram. What is the magnitude of the resultant moment of these forces about point P?
A 30 N m   B 35 N m   C 50 N m   D 90 N m

24 What is the centre of gravity of an object? (05s)
A the geometrical centre of the object   B the point about which the total torque is zero   C the point at which the weight of the object may be considered to act   D the point through which gravity acts

25 A stone is projected horizontally in a vacuum and moves along a path as shown. X is a point on this path. XV and XH are vertical and horizontal lines respectively through X. XT is the tangent to the path at X. (05w) Along which direction or directions do forces act on the stone at X?
A XV   B XH   C XV and XH   D XT

26 A uniform beam of weight 100 N is pivoted at P as shown. Weights of 10 N and 20 N are attached to its ends. (05w) The length of the beam is marked off at 0.1 m intervals. At which point should a further weight of 20 N be attached to achieve equilibrium?

27 The diagram shows four forces applied to a circular object. (05w) Which of the following describes the resultant force and resultant torque on the object?

28 …tenth of that on the surface of planet Q. (05w) On the surface of P, a body has its mass measured to be 1.0 kg and its weight measured to be 1.0 N. What results are obtained for measurements of the mass and weight of the same body on the surface of planet Q?

29 …d from a pivot. The force acts at an angle θ to a line perpendicular to the beam.
(06s) Which combination will cause the largest turning effect about the pivot?

30 …mid-point. (06s) Weights are hung from two points of the bar as shown in the diagram. To maintain horizontal equilibrium, a couple is applied to the bar. What is the torque and direction of this couple?
A 40 N m clockwise   B 40 N m anticlockwise   C 80 N m clockwise   D 80 N m anticlockwise

31 The diagrams show three forces acting on a body. (06s) In which diagram is the body in equilibrium?

32 A rigid circular disc of radius r has its centre at X. A number of forces of equal magnitude F act at the edge of the disc. All the forces are in the plane of the disc. (06w) Which arrangement of forces provides a moment of magnitude 2Fr about X?

33 Three coplanar forces, each of magnitude 10 N, act through the same point of a body in the directions shown. (06w) What is the magnitude of the resultant force?
A 0 N   B 1.3 N   C 7.3 N   D 10 N

34 Which force is caused by a pressure difference? (06w)
A friction   B upthrust   C viscous force   D weight

35 Two 8.0 N forces act at each end of a beam of length 0.60 m. The forces are parallel and act in opposite directions.
The angle between the forces and the beam is 60°. (07s) What is the torque of the couple exerted on the beam?
A 2.4 N m   B 4.2 N m   C 4.8 N m   D 9.6 N m

36 What is meant by the weight of an object? (07s)
A the gravitational field acting on the object   B the gravitational force acting on the object   C the mass of the object multiplied by gravity   D the object's mass multiplied by its acceleration

37 The diagram shows a plan view of a door which requires a moment of 12 N m to open it. (07w) What is the minimum force that must be applied at the door's midpoint to ensure it opens?
A 4.8 N   B 9.6 N   C 15 N   D 30 N

38 The symbol g represents the acceleration of free fall. (07w) Which of these statements is correct?
A g is gravity.   B g is reduced by air resistance.   C g is the ratio weight / mass.   D g is the weight of an object.

39 Which two vector diagrams represent forces in equilibrium? (07w)
A P and Q   B Q and R   C R and S   D S and P

40 The diagram shows a diving-board held in position by two rods X and Y. (01s) Which additional forces do these rods exert on the board when a diver of weight 600 N stands on the right-hand end?

41 A ball of weight W hangs from a thread. A strong wind blows horizontally, exerting a constant force F on the ball. The thread makes an angle θ to the vertical as shown. (01s) Which equation correctly relates θ, F and W?
A cos θ = F/W   B sin θ = F/W   C tan θ = F/W   D tan θ = W/F

42 Two co-planar forces act on the rim of a wheel. The forces are equal in magnitude. (01s) Which arrangement of forces is a couple?

43 Two forces act on a circular disc as shown. (01s) Which diagram shows the line of action of the resultant force?

44 What is the definition of force? (01w)
A the mass of a body multiplied by its acceleration   B the power input to a body divided by its velocity   C the rate of change of momentum of a body   D the work done on a body divided by its displacement

45 A street lamp is fixed to a wall by a metal rod and a cable. (01w) Which vector triangle represents the forces acting at point P?

46 A wheel of radius 0.70 m has a couple applied to it as shown.
(01w) What is the torque on the wheel?
A 0   B 28 N m   C 56 N m   D 112 N m

47 Two forces X and Y act at a point P as shown. The lengths of the lines represent the magnitudes of the forces. (01w) Which vector diagram shows the resultant R of these two forces?

48 A trailer of weight 30 kN is hitched to a cab at the point X as shown in the diagram below. (01w) What upward force is exerted by the cab on the trailer at the point X?
A 3 kN   B 15 kN   C 30 kN   D 60 kN

49 A ball of weight W slides along a smooth horizontal surface until it falls off the edge at time T. (01w) Which graph represents how the resultant vertical downwards force F, acting on the ball, varies with time t as the ball moves from position X to position Y?

Section B Structured Questions

1 (a) State the conditions necessary for the equilibrium of a body which is acted upon by a number of forces. (01w) ........................... [2]
(b) Three identical springs S1, S2 and S3 are attached to a point A such that the angle between any two of the springs is 120°, as shown in Fig. 3.1. The springs have extended elastically and the extensions of S1 and S2 are x. Determine, in terms of x, the extension of S3 such that the system of springs is in equilibrium. Explain your working. extension of S3 = ........................... [3]
(c) The lid of a box is hinged along one edge E, as shown in Fig. 3.2. The lid is held open by means of a horizontal cord attached to the edge F of the lid. The centre of gravity of the lid is at point C. On Fig. 3.2 draw
(i) an arrow, labelled W, to represent the weight of the lid,
(ii) an arrow, labelled T, to represent the tension in the cord acting on the lid,
(iii) an arrow, labelled R, to represent the force of the hinge on the lid. [3]

2 (a) Explain what is meant by the centre of gravity of an object.
(02s) ........................... [2]
(b) A non-uniform plank of wood XY is 2.50 m long and weighs 950 N. Force-meters (spring balances) A and B are attached to the plank at a distance of 0.40 m from each end, as illustrated in Fig. 3.1. When the plank is horizontal, force-meter A records 570 N.
(i) Calculate the reading on force-meter B. reading = ................ N
(ii) On Fig. 3.1, mark a likely position for the centre of gravity of the plank.
(iii) Determine the distance of the centre of gravity from the end X of the plank. distance = ............. m [6]

3 (a) Define the moment of a force. (03w) ........................... [2]
(b) State the two conditions necessary for a body to be in equilibrium.
1. ...........................
2. ........................... [2]
(c) Two parallel strings S1 and S2 are attached to a disc of diameter 12 cm, as shown in Fig. 3.1. The disc is free to rotate about an axis normal to its plane. The axis passes through the centre C of the disc. A lever of length 30 cm is attached to the disc. When a force F is applied at right angles to the lever at its end, equal forces are produced in S1 and S2. The disc remains in equilibrium.
(i) On Fig. 3.1, show the direction of the force in each string that acts on the disc. [1]
(ii) For a force F of magnitude 150 N, determine
1. the moment of force F about the centre of the disc, moment = ..................... N m
2. the torque of the couple produced by the forces in the strings, torque = ................... N m
3.
the force in S1. force = ................... N [4]

4 (a) Explain what is meant by the centre of gravity of a body. (05w) ........................... [2]
(b) An irregularly-shaped piece of cardboard is hung freely from one point near its edge, as shown in Fig. 2.1. Explain why the cardboard will come to rest with its centre of gravity vertically below the pivot. You may draw on Fig. 2.1 if you wish. ........................... [2]

5 A rod AB is hinged to a wall at A. The rod is held horizontally by means of a cord BD, attached to the rod at end B and to the wall at D, as shown in Fig. 2.1. (06s) The rod has weight W and the centre of gravity of the rod is at C. The rod is held in equilibrium by a force T in the cord and a force F produced at the hinge.
(a) Explain what is meant by
(i) the centre of gravity of a body, ........................... [2]
(ii) the equilibrium of a body. ........................... [2]
(b) The line of action of the weight W of the rod passes through the cord at point P. Explain why, for the rod to be in equilibrium, the force F produced at the hinge must also pass through point P. ...........................
[2]
(c) The forces F and T make angles α and β respectively with the rod, and AC = (2/3)AB, as shown in Fig. 2.1. Write down equations, in terms of F, W, T, α and β, to represent
(i) the resolution of forces horizontally, ........................... [1]
(ii) the resolution of forces vertically, ........................... [1]
(iii) the taking of moments about A. ........................... [1]
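Many of the numerical items in Section A reduce to two routines: a moment as force times perpendicular distance, and combining two forces by the cosine rule. A quick check of questions 16, 11 and 14:

```python
import math

# Q16: moment of a force = force x perpendicular distance from the pivot;
# at the point of full tightening the resistive torque balances this.
torque = 200 * 0.25                      # 50 N m -> option C

# Q11: resultant of two 10 N forces with 120 degrees between them (cosine rule)
F1 = F2 = 10.0
theta = math.radians(120)
resultant = math.sqrt(F1**2 + F2**2 + 2 * F1 * F2 * math.cos(theta))  # 10 N -> option B

# Q14: perpendicular components of a 5 N force, one component being 3 N (Pythagoras)
OX = math.sqrt(5**2 - 3**2)              # 4 N -> option C
```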
Epidemiology - Chapter 2: The Distribution of Disease
Applications:
– An indicator of the risk of death in the population of a given area over a given period.
– Reflects the health status of a population and the standard of health-care work in an area at different times, and provides a scientific basis for assessing health-care needs and planning.
– For certain diseases with a high case fatality rate, mortality closely approximates incidence and is often used as an indicator in the search for causes.
– Cause-specific mortality rates can be compared directly and are used in etiological exploration.
Section 1: Common Measures for Describing the Distribution of Disease
• Rates and ratios • Incidence measures • Mortality measures • Disability measures
I. Rates and Ratios
Rate
Describes the frequency or intensity with which an event occurs:

rate = (number of times the event actually occurred / number of persons in whom the event could occur) × K

The numerator is a part of the denominator.
Ratio
A ratio is the quotient of two values. It expresses the quantitative relationship between the numerator and the denominator, regardless of the populations from which they come. The numerator and denominator are two separate quantities that neither overlap nor contain one another.
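The rate/ratio distinction can be made concrete with a small computation; the counts below are invented purely for illustration.

```python
# Rate: the numerator is part of the denominator.
new_cases = 25                 # cases occurring during the period (hypothetical)
population_at_risk = 50_000    # persons in whom the event could occur
K = 100_000                    # scaling constant
incidence_rate = new_cases / population_at_risk * K   # 50 per 100 000

# Ratio: numerator and denominator are separate, non-overlapping counts.
male_cases, female_cases = 30, 20
sex_ratio = male_cases / female_cases                  # 1.5
```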
Incidence and mortality rates are affected by population composition; the case fatality rate is affected by medical conditions and the standard of care, i.e. the severity of the cases a facility admits.
Standardization
IV. Disability Measures
(I) Potential years of life lost (PYLL)
1. Definition: the sum of the differences between the expected life span and the actual age at death for deaths from a given disease in each age group of a population in a given year, i.e. the years of life lost through death.
2. Calculation: PYLL = Σ a_i × d_i, where a_i is the years of life lost per death in age group i (the reference life expectancy e minus the midpoint of the age group) and d_i is the number of deaths in that age group.
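A minimal sketch of the PYLL calculation, assuming the usual convention that a_i is a reference life expectancy e minus the age-group midpoint; the reference age of 70 and the deaths table are hypothetical.

```python
def pyll(deaths_by_group, e=70):
    """PYLL = sum(a_i * d_i): a_i = e - midpoint of age group i,
    d_i = deaths in age group i. Age groups are (lower, upper) tuples."""
    total = 0.0
    for (lower, upper), d in deaths_by_group.items():
        midpoint = (lower + upper) / 2
        a = e - midpoint              # remaining years lost per death
        if a > 0:                     # groups beyond e contribute nothing
            total += a * d
    return total

# hypothetical data: 4 deaths at ages 20-25, 10 deaths at ages 60-65
loss = pyll({(20, 25): 4, (60, 65): 10})   # 47.5*4 + 7.5*10 = 265 years
```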
Sporadic
Outbreak
Epidemic
Pandemic
1. Sporadic
Incidence remains at the usual level of recent years, and the cases show no obvious connection in time or place, occurring in a scattered fashion.
Area involved: a relatively large area (reference period: the preceding 3 years).
Number of cases: incidence at the usual level of recent years.
Connection between cases: no obvious connection between cases in the time and place of onset.
(V) Secondary Attack Rate
Two Kinds of Models (English)
Learning Generative Models via Discriminative Approaches

Zhuowen Tu
Lab of Neuro Imaging, UCLA
zhuowen.tu@

Abstract

Generative model learning is one of the key problems in machine learning and computer vision. Currently the use of generative models is limited due to the difficulty in effectively learning them. A new learning framework is proposed in this paper which progressively learns a target generative distribution through discriminative approaches. This framework provides many interesting aspects to the literature. From the generative model side: (1) a reference distribution is used to assist the learning process, which removes the need for sampling processes in the early stages; (2) the classification power of discriminative approaches, e.g. boosting, is directly utilized; (3) the ability to select/explore features from a large candidate pool allows us to make nearly no assumptions about the training data. From the discriminative model side: (1) this framework improves the modeling capability of discriminative models; (2) it can start with source training data only and gradually "invent" negative samples; (3) we show how sampling schemes can be introduced to discriminative models; (4) the learning procedure helps to tighten the decision boundaries for classification, and therefore improves robustness. In this paper, we show a variety of applications including texture modeling and classification, non-photo-realistic rendering, learning image statistics/denoising, and face modeling. The framework handles both homogeneous patterns, e.g. textures, and inhomogeneous patterns, e.g.
faces, with nearly an identical parameter setting for all the tasks in the learning stage.

1. Introduction

Generative model learning is one of the key problems in machine learning and computer vision. Generative models are desirable as they capture the underlying generation process of a data population of interest. In the context of image analysis, such a data population might be a texture or an object category. However, it is usually very hard to learn a generative model for data of high dimension since the structure of the data space is largely unknown. A collection of data samples (ensemble) may lie on a very complex manifold. Existing generative models include principal component analysis (PCA) [20], independent component analysis (ICA) [12], and mixture of Gaussians models [4]. These models assume simple formation of the data, and they have difficulty in modeling complex patterns of irregular distributions. General pattern theory [9], though nice in principle, requires defining complex operators and rules; how amenable it is to modeling a wide class of image patterns and shapes is still unclear.

Figure 1. Image patches sampled at different stages by our algorithm for learning natural image statistics.

Discriminative models, often referred to as classification approaches, have been widely used in the literature. Many successful applications have been devised using methods like support vector machines (SVM) [24] or boosting [6]. Though these discriminative methods have strong discrimination/classification power, their modeling capability is limited since they are focusing on classification boundaries rather than the generation process of the data. Thus, they cannot be used to create (synthesize) samples of interest. Another disadvantage of the existing discriminative models is that they often need both positive and negative training samples, though a single-class classification was proposed in [16] using special kernels. Negative samples may not be obtained easily in some situations, e.g. it is very hard
to obtain negative shapes. Nevertheless, situations occur where there is still room to improve the classification result but there are no negatives to use. Recent active learning strategies [1] help this problem slightly by including human subjects in a loop.

The existing generative model learning frameworks [11, 3, 21, 23] have difficulty in capturing patterns of high complexity. In this paper, a new learning framework is proposed which progressively learns a target generative distribution via discriminative approaches. The basic idea is to use negative samples as 'auxiliary' variables (we call them pseudo-negatives), either bootstrapped or sampled from reference distributions, to facilitate the learning process in which discriminative models are used. Our method is different from the importance sampling strategy [17] in which a reference distribution is used for sampling. A given set of image patches is treated as positive samples. We have an image database (5,000 natural images) from which pseudo-negative samples are randomly selected. We then use the positives and pseudo-negatives to train a discriminative model, and recursively obtain pseudo-negative samples either by bootstrapping or by sampling.
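The loop just described (train a discriminator on positives versus pseudo-negatives, then draw new pseudo-negatives from the updated model) can be sketched on a toy 1-D problem. The histogram-ratio "classifier" and weighted resampling below are simplifications standing in for the boosting and sampling schemes of the paper; they illustrate the control flow only, not the paper's actual implementation.

```python
import random
import statistics

random.seed(1)
positives = [random.gauss(3.0, 0.5) for _ in range(2000)]     # target data

BINS, LO, HI = 30, 0.0, 6.0
def bin_of(x):
    """Map x to a histogram bin index, clamped to [0, BINS - 1]."""
    return min(BINS - 1, max(0, int((x - LO) / (HI - LO) * BINS)))

pool = [random.uniform(LO, HI) for _ in range(20000)]  # samples from a uniform reference
weights = [1.0] * len(pool)        # running product of q(+|x)/q(-|x) ratios

for k in range(5):
    # draw pseudo-negatives from the current model by weighted resampling
    pseudo_neg = random.choices(pool, weights=weights, k=2000)
    # crude "discriminator": per-bin count ratio, standing in for boosting
    pos_counts = [1e-3] * BINS
    neg_counts = [1e-3] * BINS
    for x in positives:
        pos_counts[bin_of(x)] += 1
    for x in pseudo_neg:
        neg_counts[bin_of(x)] += 1
    ratio = [p / n for p, n in zip(pos_counts, neg_counts)]
    # multiply each reference sample's weight by the classifier's ratio
    weights = [w * ratio[bin_of(x)] for w, x in zip(weights, pool)]

# samples from the learned model now concentrate around the target mean of 3
final = random.choices(pool, weights=weights, k=5000)
final_mean = statistics.fmean(final)
```

After the first round the pseudo-negatives already resemble the positives, so later ratios hover near one: the point at which the discriminator can no longer separate the two sets is the convergence criterion described above.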
The algorithm converges when the training error is bigger than a certain threshold, indicating that the pseudo-negative samples drawn from the model are similar to the input positive samples. This learning framework provides several interesting aspects to the existing generative and discriminative learning literature.

For the generative models:
1. A reference distribution (image database) is used to assist the generative model learning process. We make use of the image database for bootstrapping pseudo-negative samples, which removes the need for sampling processes in the early stages (this was necessary in [11, 3, 21]).
2. The discrimination/classification power of discriminative approaches, e.g. boosting, is directly utilized.
3. The ability to select/explore features from a large candidate pool allows us to make nearly no assumptions about the training data. By using both position-sensitive Haar features and position-insensitive histogram features, the algorithm is able to handle both homogeneous and inhomogeneous patterns.

For the discriminative models:
1. This framework largely improves the modeling capability of existing discriminative models. Despite some recent efforts in combining discriminative models in the random fields model [13], discriminative models have mostly been popular for classification.
2. Though starting from a reference distribution largely improves the efficiency of our algorithm, our learning framework also works with positive training data only, and gradually invents pseudo-negative samples. Traditional discriminative models always need both positives and negatives.
3. We discuss various sampling schemes based on the discriminative models.
4. Our model can also be viewed as a classification approach. Different generative models learned are directly comparable if they use the same reference distribution. The progressive learning procedure helps to tighten the decision boundaries for the discriminative models, and therefore improves their robustness. (We show this in the image denoising case in the experiments, and more experiments in other domains will be carried out to further illustrate this point.)

Three other existing generative models are related to our framework, namely the induction feature model [3], the MiniMax entropy model [21], and the products of experts model (POE) [11, 2, 25]. These algorithms are somewhat similar in that they are all learning a distribution from an exponential family. A feature selection stage appears in all these methods, together with a sampling step to estimate the parameters for combining the features. Our model differs from the existing generative models due to the explicit adoption of discriminative models. The feature selection and fusing strategy embedded in the boosting algorithm is more efficient than those in these generative model learning algorithms. This is due to two reasons: (1) the loss function in boosting is based on classification error; (2) positive and negative samples are given, and no sampling process is needed in the feature selection stage. Our model is not restricted to local cliques. By using both position-sensitive Haar features and position-insensitive histogram features, the algorithm is shown to be very flexible and general. The contrastive divergence learning algorithms [2, 11] emphasize using a smaller number of sampling steps to estimate the model parameters. A large body of energy-based classification models [15] are mostly focused on discriminative models. The purpose of the hybrid models in [14] is to study different priors over parameters.

Our algorithm is also closely related to the self-supervised boosting algorithm by Welling et al. [26]. However, our algorithm differs from [26] in several aspects: (1) We derive our generative models from Bayesian theory and give a convergence proof. In this paper, we use boosting as the discriminative model, but any discriminative model, e.g. SVM, can be applied in our framework; [26] is restricted to the Boltzmann distribution and the boosting algorithm. (2) Our model combines a
sequence of strong classifiers, whereas [26] focuses on the feature selection for weak classifiers, which makes it more closely related to [3]. We directly use boosting for feature selection and fusion. Our model is thus faster than [26] since sampling is not needed in training each weak classifier. (3) We provide many insights from both generative and discriminative models. (4) We use an existing database and bootstrapping to improve the speed, which is not done in [26, 21, 19].

In this paper, we show a variety of applications including texture modeling and classification, non-photo-realistic rendering, learning image statistics/denoising, and face modeling. The framework handles both homogeneous patterns, e.g. textures, and inhomogeneous patterns, e.g. faces, with nearly an identical parameter setting for all the tasks in the learning stage.

2. Generative vs. discriminative models

Figure 2. An illustration of discriminative models p(y|x) and generative models p(x|y).

Let x be a data vector and y ∈ {−1, +1} its label, indicating either a negative or a positive sample. In a multi-class problem with n classes, y is in {1, ..., n}; in this paper, we focus on two-class models. Given an input data point x, a discriminative model computes p(y|x), the probability of x being positive or negative. Of course we only need to compute p(y=+1|x) since p(y=−1|x) = 1 − p(y=+1|x). A generative model, on the other hand, often captures the generation process of x by modeling p(x|y=+1) and p(x|y=−1).¹ Figure (2) gives an illustration of a discriminative model p(y|x) and a generative model p(x|y). As we can see, discriminative models are mostly focused on how well they can separate the positives from the negatives. A sample far from the decision boundary in the positive region may not look like a positive sample at all, but a discriminative model will give a high probability to it being positive. Generative models try to understand the basic formation of the individual classes, and thus carry richer information than discriminative models. Given the prior p(y), one can always derive discriminative models
p(y=+1|x) from generative models based on Bayes rule:

    p(y=+1|x) = p(x|y=+1) p(y=+1) / Σ_{y ∈ {−1,+1}} p(x|y) p(y).   (1)

However, generative models are much harder to learn than discriminative models, and often one makes simplified assumptions about the data formation, e.g. an orthogonal basis in PCA.

It has been shown that the AdaBoost algorithm and its variations [6] approach logistic regression [7] according to

    p(y|x) = exp{Σ_{t=1}^{T} α_t y h_t(x)} / Σ_y exp{Σ_{t=1}^{T} α_t y h_t(x)},   (2)

where h_t is a weak classifier. At each step, AdaBoost selects h_t from a set of candidate classifiers and estimates α_t by minimizing an exponential loss function. Interestingly, the generative models in [3, 21, 11] estimate a similar exponential function:

    p(x|y=+1) = exp{−Σ_{t=1}^{T} λ_t H_t(x)} / Σ_x exp{−Σ_{t=1}^{T} λ_t H_t(x)},   (3)

where H_t(x) is a feature of x. As we can see, both eqn. (2) and eqn. (3) have a feature selection stage and a parameter estimation procedure. However, it is much easier to learn eqn. (2) than eqn. (3) because the normalization in the discriminative model is over y ∈ {−1, +1}, whereas the generative model requires integrating out over all possible x in the data space.

3. Learning framework

In this section, we show how to use discriminative models to derive generative models. For the remainder of this paper, the vector x represents an image patch. Our framework, however, is applicable to other problems such as shape, text, and medical data modeling.

(¹ In the literature, one also uses p(x, y) to denote a generative model.)

3.1. From discriminative to generative models

Often, a positive class represents a pattern of interest and a negative class represents the background patterns. Thus, our goal is to learn a generative model p(x|y=+1). Rearranging eqn. (1) gives

    p(x|y=+1) = [p(y=+1|x) p(y=−1)] / [p(y=−1|x) p(y=+1)] · p(x|y=−1).   (4)

For notational simplicity, we assume equal priors, p(y=+1) = p(y=−1):

    p(x|y=+1) = [p(y=+1|x) / p(y=−1|x)] · p(x|y=−1).   (5)

The above equation says that a generative model for the positives p(x|y=+1) can be obtained from the discriminative model p(y|x) and a generative model
p(x|y=−1) for the negatives. For clarity, we now refer to the distribution p(x|y=−1) = p_r(x) as a reference distribution and call a set of samples drawn from p_r(x) pseudo-negatives. We have

    p(x|y=+1) = [p(y=+1|x) / p(y=−1|x)] · p_r(x).   (6)

A trivial observation is that p(x|y=+1) = p_r(x) when p(y=+1|x) = p(y=−1|x). This is easy to understand: the positive and pseudo-negative samples are from the same distribution when a perfect classifier cannot tell them apart. However, learning p(x|y=+1) in eqn. (6) is a challenging task since we need pseudo-negative samples to cover the entire space of x. We can only learn an approximated discriminative model q(y|x) ~ p(y|x) on a given set of positives and a limited number of pseudo-negatives sampled from p_r(x). Fig. (2) shows an illustration. Our basic strategy is to learn an approximated p(x|y=+1) and then plug it back into the right side of eqn. (6). Since p(x|y=+1) will be used to draw negatives, we write it in the form of p_r as well to make it less confusing. Next, we give detailed explanations.

Let p_r1(x) be an initial reference model, e.g., a database of natural images in which every image patch in every image is a sample. We define:

    p_r1(x) = β · (1/|DB|) Σ_{x_l ∈ DB} δ(x − x_l) + (1 − β) U(x),   (7)

where DB includes all image patches in the database, |DB| is the size of the set, U(x) is the uniform distribution, and

Figure 3. An illustration of the learning algorithm. The leftmost figure shows a target distribution from which we want to learn a generative model. The top left figure shows the reference distribution used, which is a uniform distribution. At each stage, samples are bootstrapped from the reference distribution and used as pseudo-negatives. The rightmost figure shows the final generative model learned. Points shown as crosses are samples drawn from the final model; they are overlaid with the training set.

δ is the indicator function. In case DB is not available, we set β = 0 and p_r1(x) = U(x). (Drawing fair samples from p_r1(x) is straightforward since it is a simple mixture model. Evaluating p_r1(x) is more
time-consuming. However, it is not necessary to compute p_{r_1}(x) if the same reference distribution is used in eqn. (9).)

Let S_P be a set of samples from which we want to learn a generative model. We randomly draw a subset of pseudo-negatives, S_{N_1}, w.r.t. p_{r_1}(x) to train a classifier q based on S_{N_1} and S_P. Thus, we obtain an updated generative model p_{r_2}(x) by

$$p_{r_2}(x) = \frac{1}{Z_1} \frac{q_1(y=+1|x)}{q_1(y=-1|x)}\, p_{r_1}(x), \quad (8)$$

where $Z_1 = \int \frac{q_1(y=+1|x)}{q_1(y=-1|x)}\, p_{r_1}(x)\, dx$. Note that Z_1 = 1 if q_1(y|x) = p_{r_1}(y|x). We compute Z_1 using a Monte Carlo technique [17] based on S_{N_1}, which is a set of fair samples. (This is an approximation; in practice, it is not critical for the overall model.)

Given p_{r_2}(x), if we plug it back into the right side of eqn. (8) to replace p_{r_1}(x), we can compute p_{r_3}(x) in an identical manner. Repeating the procedure n times, we get

$$p_{r_{n+1}}(x) = \prod_{k=1}^{n} \frac{1}{Z_k} \frac{q_k(y=+1|x)}{q_k(y=-1|x)}\, p_{r_1}(x), \quad (9)$$

where q_k(y=+1|x) is the discriminative model learned by the k-th classifier. If a boosting algorithm is adopted, eqn. (9) becomes

$$p_{r_{n+1}}(x) = \prod_{k=1}^{n} \frac{1}{Z_k} \exp\Big\{2 \sum_t \alpha_{kt} h_{kt}(x)\Big\}\, p_{r_1}(x). \quad (10)$$

Our goal is to have p_{r_{n+1}}(x) → p(x|y=+1), when the set of pseudo-negatives sampled from p_{r_{n+1}}(x) is indistinguishable from the training positive set.

Theorem 1. KL[p^+(x) || p_{r_{n+1}}(x)] ≤ KL[p^+(x) || p_{r_n}(x)], where KL denotes the Kullback-Leibler divergence between two distributions, and p(x|y=+1) = p^+(x).

Proof:

$$\begin{aligned}
&KL[p^+(x)\,||\,p_{r_n}(x)] - KL[p^+(x)\,||\,p_{r_{n+1}}(x)] \\
&= \int p^+(x) \log\Big[\frac{1}{Z_n} \frac{q_n(y=+1|x)}{q_n(y=-1|x)}\, p_{r_n}(x)\Big]\, dx - \int p^+(x) \log[p_{r_n}(x)]\, dx \\
&= \int p^+(x) \log\frac{1}{Z_n}\, dx + \int p^+(x) \log\frac{q_n(y=+1|x)}{q_n(y=-1|x)}\, dx \\
&= \log\frac{1}{Z_n} + \int p^+(x) \log\frac{q_n(y=+1|x)}{q_n(y=-1|x)}\, dx \;\ge\; 0. \quad (11)
\end{aligned}$$

It is easy to see that $Z_n = \int \frac{q_n(y=+1|x)}{q_n(y=-1|x)}\, p_{r_n}(x)\, dx \le 1$ and $\int p^+(x) \log\frac{q_n(y=+1|x)}{q_n(y=-1|x)}\, dx \ge 0$, since each classifier on average makes a better-than-random prediction. This theorem shows that p_{r_{n+1}}(x) converges to p(x|y=+1) by combining a sequence of discriminative models, and the convergence rate depends on the classification error at each step.

We make several interesting observations from eqn. (10) w.r.t. eqn. (3) and eqn. (2). Compared to eqn. (3): the discriminative power of a strong
classification model, e.g., boosting, is directly used; the p_{r_1}(x) term can be dropped if we want to compare different learned generative models, e.g., different texture patterns, since they share the same reference distribution. Compared to eqn. (2): the negative samples are not always given, and our algorithm is able to gradually invent new pseudo-negative samples. Note that we assume the positive samples are representative of the true distribution. When the number of positives is limited, our model may overfit the data.

3.2. Sampling strategies

One key problem in our learning framework is to draw fair samples w.r.t. p_{r_k}(x) as pseudo-negatives in learning. Next, we discuss five sampling strategies. A general principle is to avoid sampling x from scratch, since sampling is often a time-consuming task. It is worth mentioning that some of the sampling strategies below, e.g., ICM and constraint sampling, will not generate fair samples. However, we found that, in practice, using difficult samples allows the algorithm to converge faster than fair samples do. We will study more efficient sampling methods in the future.

Bootstrapping

At early stages of the learning process, when k is small, we bootstrap pseudo-negatives directly from the existing image database. This is similar to the cascade strategies used in [22], except that we are using a soft probability here.
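The overall loop of Sect. 3.1 (draw pseudo-negatives from the current reference, train a classifier against the positives, fold the classifier's likelihood ratio back into the model as in eqns. (8)-(10)) can be sketched on a toy 2D problem. Everything below is an illustrative assumption, not the paper's implementation: a radial threshold stands in for a boosted classifier q_k, and a uniform box stands in for the reference distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_classifier(pos, neg):
    """Toy stand-in for q_k(y|x): a radial threshold around the positives'
    mean (the paper uses boosting; this keeps the sketch tiny)."""
    mu = pos.mean(axis=0)
    r_pos = np.linalg.norm(pos - mu, axis=1).mean()
    r_neg = np.linalg.norm(neg - mu, axis=1).mean()
    return mu, (r_pos + r_neg) / 2.0

def log_ratio(x, clf, scale=2.0):
    """log q(y=+1|x)/q(y=-1|x): positive inside the radius, negative outside."""
    mu, r0 = clf
    return scale * (r0 - np.linalg.norm(x - mu, axis=1))

def learn(pos, n_stages=5, pool_size=2000):
    """Each stage draws candidates from the initial reference (a uniform box),
    keeps the ones the current model p_r_k still scores highest (the hardest
    pseudo-negatives), and trains the next classifier against them."""
    classifiers = []
    lo, hi = pos.min() - 1.0, pos.max() + 1.0
    for _ in range(n_stages):
        cand = rng.uniform(lo, hi, size=(pool_size, pos.shape[1]))
        if classifiers:
            logw = sum(log_ratio(cand, c) for c in classifiers)
        else:
            logw = np.zeros(pool_size)
        neg = cand[np.argsort(logw)[-len(pos):]]  # hardest pseudo-negatives
        classifiers.append(train_classifier(pos, neg))
    return classifiers
```

As the stages proceed, the kept pseudo-negatives crowd in around the positives, mirroring the bootstrapping behavior described in the text.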
Fig. (6) shows some pseudo-negative samples bootstrapped from a database at different stages. As we can see, the pseudo-negatives become increasingly similar to the training positives. After several rounds, all the samples in the database receive a low probability, and we are forced to use a sampling scheme to invent more pseudo-negatives.

Gibbs sampling

The objective of the sampling stage is to draw fair samples w.r.t. p_{r_{n+1}}(x) in eqn. (10). In the experiments reported in this paper, each sample x is an image patch of size 23×23. To speed up the sampling process, it usually starts from pseudo-negatives used in the previous stage. For each pixel (i,j) in the image patch, we compute

$$p_{r_{n+1}}\big(x(i,j)=v,\; x(\Lambda/(i,j)) \,\big|\, y=+1\big), \quad \forall v, \quad (12)$$

and randomly assign value v to pixel (i,j) accordingly. A Gibbs sampler [8] is used here. The potential function is based on all the weak classifiers h, which make decisions on both local and global information about x. Typically, several sweeps are performed to sample values for all the pixels in x.

Iterated conditional modes

We may use the Iterated Conditional Modes (ICM) [17] method to speed up the Gibbs sampling. That is, instead of sampling the value for each x(i,j) according to eqn. (12), we directly choose the value which maximizes the probability. In practice, we run one sweep of Gibbs sampling followed by 4-5 sweeps of ICM.

Constraint-based sampling

The above two sampling schemes need to sample every pixel in x for several sweeps. On the other hand, each weak classifier h_{kt} in eqn. (10) acts as a constraint, and the combination of all the h's decides the overall probability of x.
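As an aside, the single-site Gibbs and ICM updates of the two pixel-wise schemes above can be sketched as follows. The `log_prob` callable is a hypothetical stand-in for the unnormalized log of p_{r_{n+1}} in eqn. (10); the patch shape and value range are illustrative assumptions.

```python
import numpy as np

def gibbs_icm(patch, log_prob, values, n_gibbs=1, n_icm=4, seed=0):
    """One-site updates over a patch: a Gibbs sweep samples each pixel from
    its conditional given all other pixels (eqn. (12)); ICM sweeps instead
    take the argmax.  `log_prob` is any unnormalized log-density over whole
    patches; `values` is the allowed range of pixel values."""
    rng = np.random.default_rng(seed)
    patch = patch.copy()
    H, W = patch.shape
    for sweep in range(n_gibbs + n_icm):
        greedy = sweep >= n_gibbs  # later sweeps are ICM
        for i in range(H):
            for j in range(W):
                logps = np.empty(len(values))
                for k, v in enumerate(values):
                    patch[i, j] = v          # all other pixels held fixed
                    logps[k] = log_prob(patch)
                if greedy:
                    patch[i, j] = values[np.argmax(logps)]
                else:
                    p = np.exp(logps - logps.max())
                    patch[i, j] = rng.choice(values, p=p / p.sum())
    return patch
```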
Suppose each h is a real value on a filter response of x, h = f(F(x)). Instead of performing Gibbs sampling on x, we can treat all the h's as random variables and run a Gibbs sampler based on eqn. (10). Once the values of all the h's are obtained, we use least squares to obtain x from F·x = f^{-1}(h), where F denotes the linear transformations corresponding to all the h's, and f^{-1} are the inverse functions of the weak classifiers in the boosting algorithm. Together with ridge regression, we have

$$x = (F^T F + \lambda I)^{-1} F^T f^{-1}(h).$$

Ridge regression is used to regularize x, since the h's obtained may not always be consistent with each other. Figure (10d) shows some images sampled using this method. However, this sampling method is not yet very effective, because some samples satisfying the constraints of the weak classifiers may not be obtained from the closed-form solution.

Top-down guided sampling

For some regular patterns, e.g., faces, one can use a PCA (principal component analysis) model as a reference distribution. It is very fast to draw a sample out of a PCA model, and we then use the Gibbs/ICM sampler to perturb the image. We quickly locate a promising sample and use the Gibbs/ICM sampler to drag it to a better state in terms of p_{r_{n+1}}(x). This works when we want to obtain a refined model for patterns roughly following a regular distribution.

3.3. Outline of the algorithm

In this section, we give the outline of our learning framework. We use the boosting algorithm as our discriminative model in the rest of this paper.

1. Our goal is to learn a generative model for a set of training samples, S_P.
2. Collect an image database, DB.
3. Randomly select a subset of samples from the database. This is our initial pseudo-negative sample set S_{N_1}. If a database is not available, then draw a set of samples of white noise.
4. Train a discriminative model using a boosting algorithm.
5. Bootstrap data from the database based on eqn. (10). If all the samples receive low probability, then draw samples using one of the sampling schemes discussed in Sect. (3.2).
6. Go
back to step 4 until the training error for the discriminative model reaches an upper threshold.

Figure 4. Outline of the learning algorithm.

Fig. (3) shows a toy example for the learning algorithm outlined in Fig. (4). The left-most figure shows the training samples. The distribution has an irregular shape, and it is hard to fit a mixture of Gaussians to it. We use a uniform distribution as our initial reference model and implement a boosting algorithm (GentleBoost [7]). Features are projections onto directional lines on the plane, and there are around 500 such lines. A sequence of discriminative models gradually cuts out the space for the generative models. Unlike traditional PCA or mixture-of-Gaussians approaches, we do not need to make any assumption about the shape of the target distribution. The algorithm utilizes the intrinsic generalization ability of boosting to achieve accuracy and robustness.

4. Experiments

We implemented a variety of applications using the learning framework introduced in this paper, including texture modeling/synthesis, texture classification, non-photo-realistic rendering, learning natural image statistics/denoising, and face modeling. To allow the framework to deal with both homogeneous patterns, e.g., textures, and inhomogeneous patterns, e.g., faces, we use two types of features. The first set of features are Haar wavelets, similar to those used in [22]. These Haar filters are good at capturing common components appearing at similar locations. It has been shown that the concept of texture ties directly to the histogram of Gabor responses [21]. For each image patch, we convolve it with a bank of Gabor wavelets and obtain a histogram for each filter. Typically, each histogram has 30 bins. We use each histogram bin as a feature. The boosting algorithm weights the importance of every bin and combines them, and eventually constrains the sampled images to have histograms similar to those of the training images. For an image patch of size 23×23, there are around 35,000 features
including the Haars and histogram bins. Typically, we use 40 features for each boosting strong classifier. It is critical to have real-valued weak classifiers in the boosting algorithm to facilitate the sampling process. We use the GentleBoost algorithm [7] in this paper. The discrete AdaBoost algorithm [6] gives hard decision boundaries for each weak classifier, and thus it is hard for it to respond to small changes in the image. For all the experiments reported below, we use nearly identical parameter settings in training. Training usually takes a couple of days on a modern PC.

4.1. Texture modeling

Figure 5. Examples of texture modeling. The first row shows two training images and the second row displays textures synthesized based on learned models.

One application of our framework is texture modeling. The basic learning strategy has been discussed at the beginning of this section. Fig. (6) shows some intermediate results for modeling the texture shown in Fig. (5a). There are around 25 layers of discriminative models learned, and we display the pseudo-negative samples for several of them. Not surprisingly, almost all the features selected in the discriminative models are histogram features. As we can see, the pseudo-negative images look more and more like the training images after bootstrapping. The third layer shows the pseudo-negatives sampled based on eqn. (10). Interestingly, these pseudo-negatives have passed all the classification stages up to this layer, yet they do not look like the training positive samples at all. This echoes one of the arguments made in this paper: discriminative models are focused on classifying the positives and pseudo-negatives, and they do not necessarily correspond to the underlying formation of the patterns of interest. With the pseudo-negatives gradually sampled, the model starts to converge, and the sampled pseudo-negatives become increasingly faithful to the training set. Compared to the FRAME model [21], our method is more general and
flexible. It handles both homogeneous and inhomogeneous patterns. It converges faster due to the use of an image database in the early stage of the learning process and fast parameter estimation in boosting. Also, each discriminative model may combine different bins from different histograms, whereas the FRAME model has to match entire histograms one by one. In this case, our model learns a generative model for an image patch. To synthesize an image like those shown in Fig. (5), we sample patch by patch, but with an overlap of half the patch size to avoid boundary effects between the patches. Our applications in image analogies and image denoising below use the same strategy.

4.2. Texture classification

As stated in the paper, generative models learned separately by our framework are directly comparable if they share the same reference distribution. Also, the computing and modeling processes are directly combined, and we do not need to design additional data-driven techniques to make inference. Fig. (7) shows a classification result on two textures learned separately. We did not learn the background texture.

4.3. Image analogies

This learning framework allows us to learn very different generative models, which can include an artistic style. Fig. (8) shows an example. We use a couple of "Van Gogh"-style images from [10] for training; one image is shown in Fig. (8a). We use an identical learning strategy as in texture modeling. A slight difference from texture synthesis is that we add a likelihood term so that the rendered image is slightly constrained by the original image. Fig. (8c) shows a result rendered by our algorithm, and Fig. (8d) displays a result using the method in [10]. Unlike image analogies [10], where a pair of images is required for learning a mapping function, we directly learn a generative model (style) from a set of training images.

4.4. Learning natural image statistics

Using the same algorithm, we can learn natural image statistics. Our positive training images are from the Berkeley dataset [18]. The
training process is the same as in the texture modeling and image analogies cases. However, the initial negative samples are sampled from white noise.
Transverse Spin Physics: Recent Developments

Feng Yuan 1,2
1 - Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
2 - RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA

Transverse-spin physics has been very active and rapidly developing in the last few years. In this talk, I will briefly summarize recent theoretical developments, focusing on the associated QCD dynamics in transverse spin physics.

There has been strong experimental interest in transverse spin physics around the world, from the deep inelastic scattering experiments such as the HERMES collaboration at DESY, SMC at CERN, and Hall A and CLAS at JLab, the proton-proton collider experiments at RHIC at Brookhaven, and the very relevant e+e− annihilation experiment BELLE at KEK. One of the major goals in transverse spin physics is to study the quark transversity distribution, the last unknown leading-twist quark distribution in the nucleon. As discussed in several talks at this conference, we can study the quark transversity distributions through many processes [1,2,3,4,5], such as the double transverse spin asymmetry in Drell-Yan lepton pair production in pp collisions, single-hadron and two-hadron production in semi-inclusive deep inelastic scattering, and other processes. We are now starting to have a first glimpse of the quark transversity distribution from the experiments (see, for example, [5]).

Besides the quark transversity distribution, transverse spin physics has also opened a new window to explore the partonic structure of the nucleon: the so-called transverse momentum dependent (TMD) parton distributions [4]. TMD parton distributions are an extension of the usual Feynman parton distributions. These distributions allow us to study the three-dimensional picture of partons inside the nucleon, and they are also closely related to the generalized parton distributions [6] and the parton orbital angular momenta. Especially, the single transverse spin asymmetry (SSA) phenomena in high energy
hadronic processes have attracted many theoretical and experimental investigations. The SSA is defined as the asymmetry when the transverse spin of one of the hadrons is flipped,

$$A_N \sim \frac{d\sigma(S_\perp) - d\sigma(-S_\perp)}{d\sigma(S_\perp) + d\sigma(-S_\perp)}.$$

Understanding these phenomena has been a great theoretical challenge, because the leading partonic contribution to the SSA vanishes at leading order, whereas experimental observations show that these SSAs reach tens of percent in the forward scattering of the polarized nucleon. Recent theoretical developments have made great progress in the exploration of the underlying physics of the single spin phenomena. It is impossible to cover all of this exciting physics in this short talk. Rather, I would like to focus on one important subject, i.e., the nontrivial QCD dynamics associated with transverse spin physics: the QCD factorization, and the universality of the parton distributions and fragmentation functions.

∗ This work was supported in part by the U.S. Department of Energy under contract DE-AC02-05CH11231. We are grateful to RIKEN, Brookhaven National Laboratory and the U.S. Department of Energy (contract number DE-AC02-98CH10886) for providing the facilities essential for the completion of this work.

Among the TMD parton distributions and fragmentation functions, two functions have been discussed the most: the Sivers quark distribution and the Collins fragmentation function. The Sivers quark distribution represents a distribution of unpolarized quarks in a transversely polarized nucleon, through a correlation between the quark's transverse momentum and the nucleon polarization vector. The Collins function represents a correlation between the transverse spin of the fragmenting quark and the transverse momentum of the hadron relative to the "jet axis" in the fragmentation process. Although they both belong to the so-called "naive-time-reversal-odd" functions, they have different universality properties. For the quark Sivers function, because of the
initial/final state interaction difference, they differ by a sign between the SIDIS and Drell-Yan processes [7,8,9,10]. On the other hand, several studies have shown that the Collins function is universal between different processes, primarily in SIDIS and e+e− annihilation [11,12,13,14], and recently in pp collisions [15]. In the following, I will take the example of the Collins contribution to the azimuthally asymmetric distribution of hadrons inside a high energy jet in transversely polarized pp collisions to demonstrate this universality property,

$$p(P_A, S_\perp) + p(P_B) \to jet(P_J) + X \to H(P_h) + X, \quad (1)$$

where a transversely polarized proton with momentum P_A scatters on another proton with momentum P_B, and produces a jet with momentum P_J. The three momenta P_A, P_B and P_J form the so-called reaction plane. Inside the produced jet, the hadrons are distributed around the jet axis, where we define the transverse momentum P_hT relative to the jet axis. The correlation between P_hT and the polarization vector S_⊥ introduces the Collins contribution to the single spin asymmetry in this process.

Figure 1: Gluon exchange diagram contributions to the Collins asymmetry in pp collisions. The short bars indicate the pole contributions to the phase needed for a non-vanishing SSA. The additional two cuts in (d) cancel out each other.

We need to generate a phase from the scattering amplitudes to have a non-vanishing SSA. If the phase comes from the vertex associated with the fragmenting quark and the final state hadron, or from the dressed quark propagator, it is easy to argue the universality of the Collins function between this process and the SIDIS/e+e− process, because they are the same. The main issue in the universality discussion concerns the extra gluon exchange contribution between the spectator of the fragmentation process and the hard partonic part. In Fig. 2, we have shown all these interactions for a particular partonic channel, the qq → qq contribution, including the gluon attachments to
the incident quarks (a,c), the final state balancing quark (d), and the internal gluon propagator (b). The contributing phases of the diagrams in Fig. 2 come from the cuts through the internal propagators in the partonic scattering amplitudes. In Fig. 2, we label these cut-poles by short bars in the diagrams. From the calculations, we find that all these poles come from a cut through the exchanged gluon and the fragmenting quark in each diagram, and all other contributions either vanish or cancel each other out. For example, in Fig. 2(d), we show two additional cuts, which however contribute opposite to each other and cancel out completely. Therefore, by using the Ward identity at this particular order, the final results for all these diagrams sum up into a factorized form, where the cross section is written as the hard partonic cross section for the q(S_⊥)q → q(s_⊥)q subprocess multiplied by a Collins fragmentation function. The exchanged gluon in Fig. 2 is now attached to a gauge link from the fragmentation function definition. Similar calculations can be performed for the other two processes, SIDIS and e+e− annihilation, and the same Collins function is observed. This argument can also be extended to two-gluon exchange diagrams [15].

The key steps in the above derivation are the eikonal approximation and the Ward identity. The eikonal approximation is valid when we calculate the leading power contributions in the limit P_hT ≪ P_J. The Ward identity ensures that when we sum up the diagrams with all possible gluon attachments, we obtain the eikonal propagator from the gauge link in the definition of the fragmentation function. The most important point for applying the Ward identity in the above analysis is that the eikonal propagator does not contribute to the phase needed to generate a nonzero SSA.

This observation is very different from the SSAs associated with the parton distributions, where the eikonal propagators from the gauge link in the parton distribution definition
play a very important role [4,7,8,9,10,16]. It is the poles of these eikonal propagators that contribute the phase needed for a nonzero SSA associated with the naive-time-reversal-odd parton distributions, which also predicts a sign difference for the quark Sivers function between the SIDIS and Drell-Yan processes. More complicated results have been found for the SSAs in hadronic dijet-correlations [17,18], where the normal TMD factorization breaks down [19]. The reason is that the eikonal propagators from the initial and final state interactions in the dijet-correlation process do contribute poles to the cross section [18,19]. Because of this, the Ward identity is not applicable, and the standard TMD factorization breaks down, although a modified factorization may be valid if we modify the definition of the TMD parton distributions to take into account all the initial and final state interaction effects [17].

In particular, there is a sign change between the SSAs in the SIDIS and Drell-Yan processes [7,8],

$$\text{Sivers SSA}\big|_{DY} = -\,\text{Sivers SSA}\big|_{DIS}. \quad (2)$$

This nontrivial result of opposite signs between the above two processes still holds when gluon radiation contributions are taken into account, where the large transverse momentum Sivers function is generated from the twist-three quark-gluon correlation function [20,21]. It is crucial to test this nontrivial QCD prediction by comparing the SSAs in these two processes. The Sivers single spin asymmetry in the SIDIS process has been observed by the HERMES collaboration, and the planned Drell-Yan measurements at RHIC and other facilities will test this prediction.

Another interesting probe of the initial/final state interaction effects is the SSA in heavy quark and antiquark production in hadronic processes. Because the heavy quark and antiquark can be detected through their decay products, their SSAs can be measured separately. The heavy quarks and antiquarks produced in short-distance partonic processes will experience different final state interactions with the
nucleon spectator due to their different color charges, and therefore the SSAs for heavy quarks and antiquarks will be different. Detailed calculations show that the difference could be as large as a factor of 3 if the quark-antiquark channel contribution dominates [22].

In summary, the universality properties of the parton distribution and fragmentation functions are very different in the single transverse spin asymmetry. These properties are still under theoretical and experimental investigation. These important physics topics, together with other exciting features, show that transverse spin physics is playing a very important role in the strong interaction physics of hadronic spin physics. We will learn more about QCD dynamics and nucleon structure from these studies.

References
[1] K. Tanaka, these proceedings.
[2] G. Goldstein, these proceedings.
[3] M. Radici, these proceedings.
[4] P. Mulders, these proceedings.
[5] A. Prokudin, these proceedings.
[6] T. Teckentrup, these proceedings.
[7] S. J. Brodsky, D. S. Hwang and I. Schmidt, Phys. Lett. B530, 99 (2002); Nucl. Phys. B642, 344 (2002).
[8] J. C. Collins, Phys. Lett. B536, 43 (2002).
[9] X. Ji and F. Yuan, Phys. Lett. B543, 66 (2002); A. V. Belitsky, X. Ji and F. Yuan, Nucl. Phys. B656, 165 (2003).
[10] D. Boer, P. J. Mulders and F. Pijlman, Nucl. Phys. B667, 201 (2003).
[11] A. Metz, Phys. Lett. B549, 139 (2002).
[12] J. C. Collins and A. Metz, Phys. Rev. Lett. 93, 252001 (2004).
[13] L. P. Gamberg, A. Mukherjee and P. J. Mulders, Phys. Rev. D77, 114026 (2008).
[14] L. Gamberg, these proceedings.
[15] F. Yuan, Phys. Rev. Lett. 100, 032003 (2008); Phys. Rev. D77, 074019 (2008).
[16] C. Pisano, these proceedings.
[17] C. J. Bomhof, P. J. Mulders and F. Pijlman, Phys. Lett. B596, 277 (2004); Eur. Phys. J. C47, 147 (2006); JHEP 0702, 029 (2007); A. Bacchetta, C. J. Bomhof, P. J. Mulders and F. Pijlman, Phys. Rev. D72, 034030 (2005); C. J. Bomhof and P. J. Mulders, arXiv:0709.1390 [hep-ph].
[18] J. W. Qiu, W. Vogelsang and F. Yuan, Phys. Lett. B650, 373 (2007); Phys. Rev. D76, 074029 (2007); W. Vogelsang and F. Yuan, Phys. Rev. D76, 094013 (2007).
[19] J. Collins and
J. W. Qiu, Phys. Rev. D75, 114014 (2007); J. Collins, arXiv:0708.4410 [hep-ph].
[20] X. Ji, J. W. Qiu, W. Vogelsang and F. Yuan, Phys. Rev. Lett. 97, 082002 (2006); Phys. Rev. D73, 094017 (2006); Phys. Lett. B638, 178 (2006).
[21] A. Bacchetta, these proceedings.
[22] F. Yuan and J. Zhou, arXiv:0806.1932 [hep-ph].
Chinese-English Glossary of Econometric Terms
Controlled experiments
Conventional depth
Convolution
Corrected factor
Corrected mean
Correction coefficient
Correctness
Correlation coefficient
Correlation index
Correspondence
Counting
Counts
Covariance
Covariant
Cox Regression
Criteria for fitting
Criteria of least squares
Critical ratio
Critical region
Critical value
Asymmetric distribution
Asymptotic bias
Asymptotic efficiency
Asymptotic variance
Attributable risk
Attribute data
Attribution
Autocorrelation
Autocorrelation of residuals
Average
Average confidence interval length
Average growth rate

B

Bar chart
Bar graph
Base period
Bayes' theorem
Bell-shaped curve
Bernoulli distribution
Best-trim estimator
Bias
Binary logistic regression
Binomial distribution
Bisquare
Bivariate Correlate
Bivariate normal distribution
Bivariate normal population
Biweight interval
Biweight M-estimator
Block
BMDP (Biomedical computer programs)
Boxplots
Breakdown bound

C

Canonical correlation
Caption
Case-control study
Categorical variable
Catenary
Cauchy distribution
Cause-and-effect relationship
Cell
Censoring
Bosch Automotive SPC
4th Edition, 07.2005
3rd Edition dated 06.1994
2nd Edition dated 05.1990
1st Edition dated 09.1987

2005 Robert Bosch GmbH

Table of Contents

Introduction ... 5
1. Terms for Statistical Process Control ... 6
2. Planning ... 8
2.1 Selection of Product Characteristics ... 8
2.1.1 Test Variable ... 8
2.1.2 Controllability ... 9
2.2 Measuring Equipment ... 9
2.3 Machinery ... 9
2.4 Types of Characteristics and Quality Control Charts ... 10
2.5 Random Sample Size ... 11
2.6 Defining the Interval for Taking Random Samples ... 11
3.
Determining Statistical Process Parameters ... 12
3.1 Trial Run ... 12
3.2 Disturbances ... 12
3.3 General Comments on Statistical Calculation Methods ... 12
3.4 Process Average ... 13
3.5 Process Variation ... 14
4. Calculation of Control Limits ... 15
4.1 Process-Related Control Limits ... 15
4.1.1 Natural Control Limits for Stable Processes ... 16
4.1.1.1 Control Limits for Location Control Charts ... 16
4.1.1.2 Control Limits for Variation Control Charts ... 18
4.1.2 Calculating Control Limits for Processes with Systematic Changes in the Average ... 19
4.2 Acceptance Control Chart (Tolerance-Related Control Limits) ... 20
4.3 Selection of the Control Chart ... 21
4.4 Characteristics of the Different Types of Control Charts ... 22
5.
Preparation and Use of Control Charts ... 23
5.1 Reaction Plan (Action Catalog) ... 23
5.2 Preparation of the Control Chart ... 23
5.3 Use of the Control Chart ... 23
5.4 Evaluation and Control Criteria ... 24
5.5 Which Comparisons Can be Made? ... 25
6. Quality Control, Documentation ... 26
6.1 Evaluation ... 26
6.2 Documentation ... 26
7. SPC with Discrete Characteristics ... 27
7.1 General ... 27
7.2 Defect Tally Chart for 100% Testing ... 27
8. Tables ... 28
9.
Example of an Event Code for Mechanically Processed Parts ... 29
9.1 Causes ... 29
9.2 Action ... 29
9.3 Handling of the Parts/Goods ... 30
9.4 Action Catalog ... 30
10. Example of an x̄-s Control Chart ... 32
11. Literature ... 33
12. Symbols ... 34
Index ... 35

Introduction

Statistical Process Control (SPC) is a procedure for open or closed loop control of manufacturing processes, based on statistical methods.

Random samples of parts are taken from the manufacturing process according to process-specific sampling rules. Their characteristics are measured and entered in control charts. This can be done with computer support. Statistical indicators are calculated from the measurements and used to assess the current status of the process. If necessary, the process is corrected with suitable actions.

Statistical principles must be observed when taking random samples.

The control chart method was developed by Walter Andrew Shewhart (1891-1967) in the 1920s and described in detail in his book "Economic Control of Quality of Manufactured Product", published in 1931.

There are many publications and self-study programs on SPC. The procedures described in various publications sometimes differ significantly from RB procedures.

SPC is used at RB in a common manner in all divisions. The procedure is defined in QSP0402 [1] in agreement with all business divisions and can be presented to customers as the Bosch approach.

Current questions on use of SPC and related topics are discussed in the SPC work group.
Results that are helpful for daily work and of general interest can be summarized and published as QA Information sheets.

SPC is an application of inductive statistics. Not all parts have been measured, as would be the case for 100% testing. A small set of data, the random sample measurements, is used to estimate parameters of the entire population. In order to correctly interpret results, we have to know which mathematical model to use, where its limits are, and to what extent it can be used for practical reasons, even if it differs from the real situation.

We differentiate between discrete (countable) and continuous (measurable) characteristics. Control charts can be used for both types of characteristics.

Statistical process control is based on the concept that many inputs can influence a process. The "5 M's" – man, machine, material, milieu, method – are the primary groups of inputs. Each "M" can be subdivided, e.g. milieu into temperature, humidity, vibration, contamination, lighting, etc.

Despite careful process control, uncontrolled, random effects of several inputs cause deviation of actual characteristic values from their targets (usually the middle of the tolerance range). The random effects of several inputs ideally result in a normal distribution for the characteristic. Many situations in SPC can be described well with a normal distribution.

A normal distribution is characterized by two parameters, the mean µ and the standard deviation σ. The graph of the density function of a normal distribution is the typical bell curve, with inflection points at µ − σ and µ + σ. In SPC, the parameters µ and σ of the population are estimated from random sample measurements, and these estimates are used to assess the current status of the process.
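The estimation idea described in the introduction, inferring the population parameters µ and σ from a small set of random samples, can be sketched as follows. All numeric values (target 6.0, true σ = 0.2) are illustrative and not taken from the booklet:

```python
import random
import statistics

# Simulate a process characteristic that ideally follows a normal
# distribution (illustrative values: target 6.0, true sigma 0.2).
random.seed(42)
population = [random.gauss(6.0, 0.2) for _ in range(100_000)]

# Take m = 25 random samples of size n = 5, as suggested for a trial run.
m, n = 25, 5
samples = [random.sample(population, n) for _ in range(m)]

# Estimate the process parameters from the sample statistics.
sample_means = [statistics.mean(s) for s in samples]
mu_hat = statistics.mean(sample_means)            # estimate of the mean µ
pooled = [x for s in samples for x in s]          # the 125 measured values
sigma_hat = statistics.stdev(pooled)              # crude estimate of σ

print(round(mu_hat, 2), round(sigma_hat, 2))
```

With only 125 measured values the estimates scatter around the true parameters, which is exactly why the booklet warns that results from random samples must be interpreted with the underlying statistical model in mind.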
1. Terms for Statistical Process Control

Process
A process is a series of activities and/or procedures that transform raw materials or pre-processed parts/components into an output product. The definition from the standard [3] is: "Set of interrelated or interacting activities which transforms inputs into outputs." This booklet only refers to manufacturing or assembly processes.

Stable process
A stable process (process in a state of statistical control) is only subject to random influences (causes). In particular, the location and variation of the process characteristic are stable over time (refer to [4]).

Capable process
A process is capable when it is able to completely fulfill the specified requirements. Refer to [11] for determining capability indices.

Shewhart quality control chart
Quality control chart for monitoring a parameter of the probability distribution of a characteristic, in order to determine whether the parameter varies from a specified value.

SPC
SPC is a standard method for visualizing and controlling (open or closed loop) processes, based on measurements of random samples. The goal of SPC is to ensure that the planned process output is achieved and that corresponding customer requirements are fulfilled. SPC is always linked to (manual or software supported) use of a quality control chart (QCC). QCCs are filled out with the goal of achieving, maintaining and improving stable and capable processes.
This is done by recording process or product data, drawing conclusions from this data and reacting to undesirable data with appropriate actions. The following definitions are the same as or at least equivalent to those in [6].

Limiting value
Lower or upper limiting value.

Lower limiting value
Lowest permissible value of a characteristic (lower specification limit LSL).

Upper limiting value
Highest permissible value of a characteristic (upper specification limit USL).

Tolerance
Upper limiting value minus lower limiting value: T = USL − LSL

Tolerance range
Range of permissible characteristic values between the lower and upper limiting values.

Center point C of the tolerance range
The average of the lower and upper limiting values: C = (USL + LSL) / 2
Note: For characteristics with one-sided limits (only USL is specified), such as roughness (Rz), form and position (e.g. roundness, perpendicularity), it is not appropriate to assume LSL = 0 and thus to set C = USL / 2 (also refer to the first comment in Section 4.1.1.1).

Population
The total of all units taken into consideration.

Random sample
One or more units taken from the population or from a sub-population (part of a population).

Random sample size n
The number of units taken for the random sample.

Mean (arithmetic)
The sum of the measurements x_i divided by the number of measurements n:
x̄ = (x_1 + x_2 + … + x_n) / n

Median of a sample
For an odd number of samples put in order from the lowest to the highest value, the value of sample number (n+1)/2. For an even number of samples put in order from the lowest to the highest value, normally the average of the two samples numbered n/2 and (n/2)+1.
(Also refer to [13].) Example: For a sample of 5 parts put in order from the lowest to the highest value, the median is the middle (third) of the 5 values.

Variance of a sample
The sum of the squared deviations of the measurements from their arithmetic mean, divided by the number of samples minus 1:
s² = ((x_1 − x̄)² + (x_2 − x̄)² + … + (x_n − x̄)²) / (n − 1)

Standard deviation of a sample
The square root of the variance: s = √s²

Range
The largest individual value minus the smallest individual value: R = x_max − x_min

2. Planning

Planning takes place according to the current edition of QSP 0402 "SPC", which defines responsibilities. SPC control of a characteristic is one possibility for quality assurance during manufacturing and test engineering.

2.1 Selection of Product Characteristics

Specification of SPC characteristics and their processes should be done as early as possible (e.g. by the simultaneous engineering team). They can also, for example, be an output of the FMEA. This should take into account:
• Function,
• Reliability,
• Safety,
• Consequential costs of defects,
• The degree of difficulty of the process,
• Customer requests, and
• Customer connection interfaces, etc.

The 7 W-questions can be helpful in specifying SPC characteristics (refer to "data collection" in "Elementary Quality Assurance Tools" [8]):
Why? Which or what? Which number or how many? Where? Who? When? With what or how exactly?

Example of a simple procedure for inspection planning: Why do I need to know what, when, where and how exactly? How large is the risk if I don't know this?

Note: It may be necessary to add new SPC characteristics to a process already in operation. On the other hand, there can be reasons (e.g. change of a manufacturing method or introduction of 100% testing) for replacing existing SPC control with other actions. SPC characteristics can be product or process characteristics.

2.1.1 Test Variable

Definition of the "SPC characteristic", direct or indirect test variable.
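The sample statistics defined in Section 1 (mean, median, variance, standard deviation and range) can be sketched for a single random sample of n = 5 parts; the measurement values below are made up for illustration:

```python
import statistics

# One random sample of n = 5 measured values (illustrative numbers only).
sample = [6.2, 6.4, 6.1, 6.5, 6.3]
n = len(sample)

x_bar = sum(sample) / n                               # arithmetic mean
median = sorted(sample)[(n + 1) // 2 - 1]             # middle value, odd n
s2 = sum((x - x_bar) ** 2 for x in sample) / (n - 1)  # variance
s = s2 ** 0.5                                         # standard deviation
R = max(sample) - min(sample)                         # range

# Cross-check against the standard library implementations.
assert abs(x_bar - statistics.mean(sample)) < 1e-9
assert abs(s - statistics.stdev(sample)) < 1e-9

print(x_bar, median, round(s, 4), R)
```

For this sample the mean and median coincide at 6.3; for skewed data the two location measures would differ, which is why the booklet later distinguishes between x̄ charts and median charts.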
Note: If a characteristic cannot be measured directly, then a substitute characteristic must be found that has a known relationship to it.

2.1.2 Controllability

The process must be able to be influenced (controlled) with respect to the test variable. Normally manufacturing equipment can be directly controlled in a manner that changes the test variable in the desired way (small control loop). According to Section 1, "control" in the broadest sense can also be a change of tooling, machine repair or a quality meeting with a supplier to discuss quality assurance activities (large control loop).

2.2 Measuring Equipment

Definition and procurement or check of the measuring equipment for the test variable. Pay attention to:
• Capability of measuring and test processes,
• Objectiveness,
• Display system (digital),
• Handling.

The suitability of a measurement process for the tested characteristic must be proven with a capability study per [12]. In special cases, a measurement process with known uncertainty can be used (pay attention to [10] and [12]). Note: The units and reference value must correspond to the variables selected for the measurement process.

2.3 Machinery

Before new or modified machinery is used, a machine capability study must be performed (refer to QSP0402 [1] and [11]). This also applies after major repairs. Short-term studies (e.g. machine capability studies) register and evaluate characteristics of products that were manufactured in one continuous production run. Long-term studies use product measurements from a longer period of time, representative of mass production. Note: The general definition of SPC (Section 1) does not presume capable machines. However, if the machines are not capable, then additional actions are necessary to ensure that the quality requirements for manufactured products are fulfilled.

2.4 Types of Characteristics and Control Charts

This booklet only deals with continuous and discrete characteristics.
Refer to [6] for these and other types of characteristics. In measurement technology, physical variables are defined as continuous characteristics. Counted characteristics are special discrete characteristics; the value of the characteristic is called a "counted value". For example, the number of "bad" parts (defective parts) resulting from testing with a limit gage is a counted value (e.g. the number 17, if 17 defective parts were found).

SPC is performed with manually filled out form sheets (quality control charts) or on a computer. A control chart consists of a chart-like grid for entering numerical data from measured samples and a diagram to visualize the statistical indices for the process location and variation calculated from the data. If a characteristic can be measured, then a control chart for continuous characteristics must be used. Normally the x̄-s chart with sample size n = 5 is used.

2.5 Random Sample Size

The appropriate random sample size is a compromise between process performance, desired accuracy of the selected control chart (type I and type II errors, operation characteristic) and the need for an acceptable amount of testing. Normally n = 5 is selected. Smaller random samples should only be selected if absolutely necessary.

2.6 Defining the Interval for Taking Random Samples

When a control chart triggers action, i.e. when the control limits are exceeded, the root cause must be determined as described in Section 5.4, reaction to the disturbance initiated with suitable actions (refer to the action catalog), and a decision made on what to do with the parts produced since the last random sample was taken.
In order to limit the financial "damage" caused by potentially necessary sorting or rework, the random sample interval – the time between taking two random samples – should not be too long. The sampling interval must be individually determined for each process and must be modified if the process performance has permanently changed.

It is not possible to derive or justify the sampling interval from the percentage of defects. A defect level well below 1% cannot be detected on a practical basis with random samples. A 100% test would be necessary, but this is not the goal of SPC. SPC is used to detect process changes. The following text lists a few examples of SPC criteria to be followed.

1. After setup, elimination of disturbances or after tooling changes or readjustment, measure continuously (100% or with random samples) until the process is correctly centered (the average of several measurements/medians!). The last measurements can be used as the first random sample for further process monitoring (and entered in the control chart).

2. Random sample intervals for ongoing process control can be defined in the following manner, selecting the shortest interval appropriate for the process.
– Definition corresponding to the expected average frequency of disturbances (as determined in the trial run or as is known from previous process experience): approximately 10 random samples within this time period.
– Definition depending on specified preventive tooling changes or readjustment intervals: approximately 3 random samples within this time period.
– Specification of tooling changes or readjustment depending on SPC random samples: approximately 5 random samples within the average tooling life or readjustment interval.
But at least once for the production quantity that can still be contained (e.g. delivery lot, transfer to the next process, defined lots for connected production lines)!
3. Take a final random sample at the end of a series, before switching to a different product type, in order to confirm process capability until the end of the series.

Note: The test interval is defined based on quantities (or time periods) in a manner that detects process changes before defects are produced. More frequent testing is necessary for unstable processes.

3. Determining Statistical Process Parameters

3.1 Trial Run

Definition of control limits requires knowledge or estimation of process parameters. This is determined with a trial run with sampling size and interval as specified in Sections 2.5 and 2.6. For an adequate number of parts for initial calculations, take a representative number of unsorted parts, at least m = 25 samples (with n = 5, for example), yielding no fewer than 125 measured values. It is important to assess the graphs of the measured values themselves, the means and the standard deviations. Their curves can often deliver information on process performance characteristics (e.g. trends, cyclical variations).

3.2 Disturbances

If non-random influences (disturbances) occur frequently during the trial run, then the process is not stable (not in control). The causes of the disturbances must be determined and eliminated before process control is implemented (repeat the trial run).

3.3 General Comments on Statistical Calculation Methods

Complicated mathematical procedures are no longer a problem due to currently available statistics software, and use of these programs is of course allowed and widespread (also refer to QSP0402 [1]). The following procedures were originally developed for use with pocket calculators. They are typically included in statistics programs. Note: Currently available software programs allow use of methods for preparing, using and evaluating control charts that are better adapted to process-specific circumstances (e.g. process models) than is possible with manual calculation methods.
However, this unavoidably requires better knowledge of statistical methods and use of statistics software. Personnel and training requirements must take this into account. Each business division and each plant should have a comprehensively trained SPC specialist as a contact person.

3.4 Estimating the Process Mean

Parameter µ is estimated from the sample means (the total of the x̄ values divided by the number of samples m):
µ̂ = x̄̄ = (x̄_1 + x̄_2 + … + x̄_m) / m
In the example of Section 10: µ̂ = x̄̄ = 6.3.

or from the sample medians (the total of the medians divided by the number of samples):
µ̂ = x̃̄ = (x̃_1 + x̃_2 + … + x̃_m) / m
In the example of Section 10: µ̂ = x̃̄ = 6.4.

If µ̂ deviates significantly from the center point C for a characteristic with two-sided limits, then this deviation should be corrected by adjusting the machine.

3.5 Estimating the Process Standard Deviation

Parameter σ is estimated by one of the following methods:

a) σ̂ = √( (s_1² + s_2² + … + s_m²) / m )
(the total of the variances divided by the number of samples, then the square root)
In the example of Section 10: σ̂ = 1.41.
Note: σ̂ = s can also be calculated directly from the individual measurements of the sequential random samples (pocket calculator).

b) σ̂ = s̄ / a_n, where
s̄ = (s_1 + s_2 + … + s_m) / m
(the total of the standard deviations divided by the number of samples)
In the example of Section 10: s̄ = 1.27 and, with a_5 = 0.94, σ̂ = 1.27 / 0.94 = 1.35.

n:   3     5     7
a_n: 0.89  0.94  0.96
(See Section 8, Table 1 for additional values.)

c) σ̂ = R̄ / d_n, with
R̄ = (R_1 + R_2 + … + R_m) / m
(the total of the ranges divided by the number of samples)
In the example of Section 10: R̄ = 2.96 and, with d_5 = 2.33, σ̂ = 2.96 / 2.33 = 1.27.

n:   3     5     7
d_n: 1.69  2.33  2.70
(See Section 8, Table 1 for additional values.)

Note: The use of the table values a_n and d_n presupposes a normal distribution!

Some of these calculation methods were originally developed to enable manual calculation using a pocket calculator. Formula a) is normally used in currently available statistics software.
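The three estimation formulas a), b) and c) can be sketched as follows. The 25 sample standard deviations and ranges are simulated here rather than taken from the booklet's Section 10 example, and the factors a_5 = 0.94 and d_5 = 2.33 are the table values for n = 5:

```python
import random
import statistics

random.seed(7)

# Simulate m = 25 random samples of size n = 5 from a process with
# true standard deviation sigma = 1.3 (illustrative value).
m, n, mu, sigma = 25, 5, 6.3, 1.3
samples = [[random.gauss(mu, sigma) for _ in range(n)] for _ in range(m)]

s_j = [statistics.stdev(s) for s in samples]   # sample standard deviations
R_j = [max(s) - min(s) for s in samples]       # sample ranges

# a) root mean square of the sample standard deviations
sigma_a = (sum(s ** 2 for s in s_j) / m) ** 0.5

# b) mean standard deviation divided by the table factor a_n (n = 5)
a_5 = 0.94
sigma_b = (sum(s_j) / m) / a_5

# c) mean range divided by the table factor d_n (n = 5)
d_5 = 2.33
sigma_c = (sum(R_j) / m) / d_5

print(round(sigma_a, 2), round(sigma_b, 2), round(sigma_c, 2))
```

All three estimators scatter around the true σ; note that formula a) (a root mean square) always yields a value at least as large as the mean s̄ used in formula b).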
4. Calculation of Control Limits

4.1 Process-Related Control Limits

The control limits (lower control limit LCL and upper control limit UCL) are set such that 99% of all the values lie within the control limits in the case of a process which is only affected by random influences (random causes). If the control limits are exceeded, it must therefore be assumed that systematic, non-random influences (non-random causes) are affecting the process. These effects must be corrected or eliminated by taking suitable action (e.g. adjustment).

Relation between the standard deviation σ_x of the single values (original values, individuals) and the standard deviation σ_x̄ of the mean values:
σ_x̄ = σ_x / √n

4.1.1 Natural Control Limits for Stable Processes

4.1.1.1 Control Limits for Location Control Charts (Shewhart Charts)

For two-sided tolerances, the limits for controlling the mean must always be based on the center point C. Note: C is replaced by the process mean µ̂ = x̄̄ for processes where the center point C cannot be achieved or for characteristics with one-sided limits. The estimated values µ̂ and σ̂ are calculated per Sections 3.4 and 3.5.

Note: Use of the median-R chart is only appropriate when charts are manually filled out, without computer support.

The following factors apply for n = 3, 5 and 7 (the first factor, A*_E, must not be used for moving calculation of indices; refer to Section 8, Table 2 for the meaning of the remaining factors and for additional values):

n = 3:  A*_E = 1.68;  further factors: 1.02, 1.16, 2.93, 1.73
n = 5:  A*_E = 1.23;  further factors: 0.59, 1.20, 3.09, 1.33
n = 7:  A*_E = 1.02;  further factors: 0.44, 1.21, 3.19, 1.18

Comments on the average chart: For characteristics with one-sided limits (or in general for skewed distributions) and small n, the random sample averages are not necessarily normally distributed. It could be appropriate to use a Pearson chart in this case. This chart has the advantage compared to the Shewhart chart that the control limits are somewhat wider apart.
However, it has the disadvantage that calculation of the control limits is more complicated, and in actual practice only possible on the computer.

Control charts with moving averages

The x̄ chart with a moving average is a special case of the x̄ chart. For this chart, only single random samples are taken. n sample measurements are formally grouped as a random sample and the average of these n measurements is calculated as the mean. For each new measurement from a single random sample that is added to the group, the first measurement of the last group is deleted, yielding a new group of size n, for which the new average is calculated.

Of course, moving averages calculated in this manner are not mutually independent. That is why this chart has a delayed reaction to sudden process changes. The control limits correspond to those for "normal" average charts:
LCL = C − 2.58 · σ̂ / √n
UCL = C + 2.58 · σ̂ / √n
Calculate σ̂ according to Section 3.5 a).

Control limits for n = 1 (3), i.e. single measurements grouped in threes:
LCL = C − 1.5 · σ̂
UCL = C + 1.5 · σ̂

Example for n = 1 (3), with the single measurements 3, 7, 4, 9, 2, 8:
x̄_1 = (3 + 7 + 4) / 3 = 4.7
x̄_2 = (7 + 4 + 9) / 3 = 6.7
x̄_3 = (4 + 9 + 2) / 3 = 5.0
x̄_4 = (9 + 2 + 8) / 3 = 6.3

This approach for moving sample measurements can also be applied to the variation, so that an x̄-s chart with a moving average and moving standard deviation can be used. After intervention in the process or process changes, previously obtained measurements may no longer be used to calculate moving indices.

4.1.1.2 Control Limits for Variation Control Charts

The control limits to monitor the variation (depending on n) relate to σ̂ and s̄ and likewise R̄ (= "central line").

s chart
a) generally applicable formula (also for the moving x̄-s chart):
UCL = B'_Eob · σ̂
LCL = B'_Eun · σ̂
In the example of Section 10 (n = 5, σ̂ = 1.35): UCL = 1.93 · 1.35 = 2.6 and LCL = 0.23 · 1.35 = 0.3.

b) for the standard x̄-s chart:
Note: Formula a) must be used in the case of moving s̄ calculation.
Calculation of σ̂ per Section 3.5 a).
UCL = B*_Eob · s̄
LCL = B*_Eun · s̄
In the example of Section 10 (n = 5, s̄ = 1.27): UCL = 2.05 · 1.27 = 2.6 and LCL = 0.24 · 1.27 = 0.3.

R chart
UCL = D_Eob · R̄
LCL = D_Eun · R̄
In the example of Section 10 (n = 5, R̄ = 2.96): UCL = 2.10 · 2.96 = 6.2 and LCL = 0.24 · 2.96 = 0.7.

Table of factors:

n    B'_Eun  B'_Eob  B*_Eun  B*_Eob  D_Eun  D_Eob
3    0.07    2.30    0.08    2.60    0.08   2.61
5    0.23    1.93    0.24    2.05    0.24   2.10
7    0.34    1.76    0.35    1.88    0.34   1.91

See Section 8, Table 2 for further values.

4.1.2 Calculating Control Limits for Processes with Systematic Changes in the Average

If changes of the mean need to be considered as a process-specific feature (trend, lot steps, etc.) and it is not economical to prevent such changes of the mean, then it is necessary to extend the "natural control limits". The procedure for calculating an average chart with extended control limits is shown below. The overall variation consists of both the "inner" variation (refer to Section 3.5) of the random samples and the "outer" variation between the random samples.

Calculation procedure: control limits for the mean
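The moving-average grouping and the variation-chart limits from Section 4.1.1 can be sketched as follows; C, σ̂, s̄ and R̄ use the worked numbers quoted above, and the factors are the n = 5 table values:

```python
# Moving averages of group size n = 3 over single measurements,
# reproducing the worked example above (3, 7, 4, 9, 2, 8).
def moving_averages(values, n=3):
    return [round(sum(values[i - n + 1:i + 1]) / n, 1)
            for i in range(n - 1, len(values))]

assert moving_averages([3, 7, 4, 9, 2, 8]) == [4.7, 6.7, 5.0, 6.3]

# Control limits for the variation charts (n = 5 factors from the table).
sigma_hat, s_bar, R_bar = 1.35, 1.27, 2.96

s_chart_a = (0.23 * sigma_hat, 1.93 * sigma_hat)   # (LCL, UCL), formula a)
s_chart_b = (0.24 * s_bar, 2.05 * s_bar)           # (LCL, UCL), formula b)
r_chart = (0.24 * R_bar, 2.10 * R_bar)             # (LCL, UCL), R chart

print([round(x, 1) for pair in (s_chart_a, s_chart_b, r_chart) for x in pair])
```

Formulas a) and b) give practically identical limits here (2.6 and 0.3 after rounding), which illustrates why the choice between them is mainly a question of which statistic, σ̂ or s̄, is being tracked on the chart.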
Force distribution method and apparatus for neonates at risk of cranial molding
Patent title: Force distribution method and apparatus for neonates at risk of cranial molding
Inventors: Christopher Loring Gilmer, Thomas Craig Roberts, Samuel Alexander, George Martin Hutchinson, Daniel V. Mendez
Application number: US14504404
Filing date: 2014-10-01
Publication number: US09173763B2
Publication date: 2015-11-03
Abstract: A force distribution apparatus and method are presented. Various embodiments of the disclosed apparatus include a plurality of layers configured and oriented to be deployed on a subject in a manner that disperses forces and lowers peak pressures experienced by the subject when resting on a surface, which tends to minimize risks of deformation and local ischemia. An innovative combination of novel construction methods and material selections produces an apparatus that possesses an inherent three-dimensional shape despite being built from essentially flat components, while also retaining an ability to effectively distribute forces and reduce pressures.
Applicant: Invictus Medical, Inc.
Address: San Antonio, TX, US
Nationality: US
Agent: Reinhart Boerner Van Deuren s.c.
An air target identification friend or foe method based on three-way decisions
Journal of Detection & Control, Vol. 42, No. 1, Feb. 2020

Air Target Identification Method Based on Three-way Decisions

PANG Mengyang¹, SUO Zhongying¹, ZHENG Wanze², HUANG Lin¹, BAO Zhuangzhuang¹
(1. Department of Basic Sciences, Air Force Engineering University, Xi'an 710051, China; 2. Academic Research Office, Air Force Engineering University, Xi'an 710051, China)

Abstract: Aiming at the problems of misclassification, misjudgment and selective judgment in air target identification friend or foe (IFF) caused by the uncertainty and incompleteness of the acquired information, an air target IFF method based on three-way decisions is proposed. The method adopts multi-sensor information fusion technology, introduces three-way decision theory at the decision-making level and, following the Bayesian minimum-risk criterion, constructs an IFF model based on three-way decision theory. An application example shows that the method can reduce and avoid the risk and loss caused by wrong decisions, improve the accuracy and timeliness of the final decision, and meet the time-complexity requirements of the complex situations of the modern battlefield, making it better suited to battlefield needs.

Key words: air target identification; sensor fusion; Bayes rules; three-way decisions

CLC number: TN911    Document code: A    Article ID: 1008-1194(2020)01-0115-06

0 Introduction

Identification friend or foe (IFF) is the process of acquiring information about an unknown target through various techniques and means and, in combination with dedicated equipment and systems, determining the friend-or-foe attribute of the target within the given operational space and time.
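The decision rule the abstract describes, splitting the outcome space into acceptance, rejection and a deferment (boundary) region under the Bayesian minimum-risk criterion, can be sketched as follows. The loss values are illustrative and are not taken from the paper:

```python
# Three-way decision rule under the Bayesian minimum-risk criterion:
# given the fused probability p that the target is a friend, choose
# acceptance (friend), rejection (foe) or deferment (boundary region).

def thresholds(lPP, lPN, lBP, lBN, lNP, lNN):
    # Standard three-way decision thresholds derived from pairwise
    # comparison of the expected losses of the three actions.
    alpha = (lPN - lBN) / ((lPN - lBN) + (lBP - lPP))
    beta = (lBN - lNN) / ((lBN - lNN) + (lNP - lBP))
    return alpha, beta

def decide(p, alpha, beta):
    if p >= alpha:
        return "accept (friend)"
    if p <= beta:
        return "reject (foe)"
    return "defer (gather more sensor data)"

# Illustrative losses: correct decisions cost 0, wrongly accepting a foe
# costs 6, wrongly rejecting a friend costs 8, deferment costs 1.
alpha, beta = thresholds(lPP=0, lPN=6, lBP=1, lBN=1, lNP=8, lNN=0)
print(round(alpha, 3), round(beta, 3))
print(decide(0.9, alpha, beta), decide(0.5, alpha, beta),
      decide(0.05, alpha, beta))
```

Targets whose fused probability falls between β and α are deliberately left undecided, which is exactly how the three-way model avoids forcing a risky binary friend/foe call on incomplete sensor information.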
Finite element analysis of the bail of the XSL675 spinning swivel

Finite Element Analysis on XSL675 Swivel Bail of Oil Drilling Rig
FENG Qingdong¹, ZHANG Linhai², HOU Yu¹, FENG Yaodong¹, REN Yilei¹
(1. RG Petro-Machinery (Group) Co., Ltd., Nanyang 473006, China; 2. School of Mechanical and Automotive Engineering, Nanyang Institute of Technology, Nanyang 473004, China)

The constitutive relation and the loading mode were determined according to the requirements of API Spec 8C and the tensile test data of the material. Using the ABAQUS finite element analysis software, the stress state of the bail under rated load and overload conditions was calculated, and the stress distribution pattern of the bail was analyzed. In accordance with the requirements of API Spec 8C, a lower danger limit and an upper safety limit were set, showing the dangerous and safe regions of the bail.

… bearing and the lower centralizing bearing, etc.; the sealing section consists of the wash pipe assembly and the upper and lower oil seals; the spinning section consists of an air motor, gears, a one-way pneumatically controlled friction clutch, etc. [3]

The bail of a spinning swivel is generally U-shaped; its upper end (the bottom of the U) hangs on the hook, and its lower ends (the pin holes) are connected to the swivel housing by pins. The swivel bail is the main load-bearing component in the traveling and hoisting system, carrying the entire weight of the swivel, kelly, drill pipe, etc., so its mechanical performance directly affects the operational safety of the swivel. The swivel …
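The safe/danger region classification described above can be sketched generically as a post-processing step over element stresses. The yield strength and the two limit factors below are illustrative placeholders only and are not taken from API Spec 8C or from the paper:

```python
# Generic sketch: flag finite-element stress results against a lower
# danger limit and an upper safety limit. All numbers are hypothetical.
yield_strength = 760.0                 # MPa, illustrative material value
safety_upper = 0.6 * yield_strength    # stresses at or below this: "safe"
danger_lower = 0.9 * yield_strength    # stresses at or above this: "danger"

def classify(stress_mpa):
    if stress_mpa >= danger_lower:
        return "danger"
    if stress_mpa <= safety_upper:
        return "safe"
    return "intermediate"

element_stresses = [120.0, 410.0, 530.0, 705.0]   # made-up values, MPa
print([classify(s) for s in element_stresses])
```

In an actual ABAQUS post-processing workflow the same thresholding would be applied to the computed von Mises (or other design) stress field to color the dangerous and safe regions of the bail.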
Force Dimension omega.x haptic device user manual
USER MANUAL
omega.x haptic device, version 1.9
Force Dimension, Switzerland

summary
The purpose of this document is
> to describe the setup of the omega.x haptic device
> to describe the installation of the software drivers and the Force Dimension SDK
> to describe the operation modes of the omega.x haptic device

glossary
SDK refers to the Software Development Kit (SDK) for all Force Dimension products.
omega.x refers to the base haptic device shared by the omega.3, omega.6 and omega.7 haptic devices. Unless specified, all instructions in this manual apply to all three device types.

table of contents
1. system overview (7)
2. important safety instructions (8)
3. setting up the omega.x haptic device (9)
3.1 unpacking the device (9)
3.2 installing the power supply (10)
4. configuring the omega.x under Windows (11)
4.1 installation description (11)
4.2 installing the drivers (11)
5. configuring the omega.x under Linux (12)
5.1 installing the software (12)
5.2 installation description (12)
5.3 installing the drivers (12)
6. configuring the omega.x under macOS (13)
6.1 installing the software (13)
6.2 installation description (13)
6.3 installing the drivers (13)
7. operating the omega.x (14)
7.1 coordinate system (14)
7.2 operating modes (16)
7.3 running the Haptic Desk program (18)
7.4 running the demonstration programs (19)
8. technical information - omega.3 (21)
9. technical information - omega.6 (23)
10. technical information - omega.7 (25)

1. system overview

figure 1 – overview of the omega.3 haptic device
1. base plate
2. control unit
3. front arms
4. end-effector
5. force button
6. calibration pole
7. calibration pit
8. status LED
9. force LED
10. user button
11. power switch
12. power connector
13. USB connector

2. important safety instructions

IMPORTANT: WHEN USING THIS HAPTIC DEVICE, BASIC SAFETY PRECAUTIONS SHOULD ALWAYS BE FOLLOWED TO REDUCE THE RISK OF FIRE, ELECTRICAL SHOCK, OR PERSONAL INJURY.
1. read and understand all instructions
2. follow all warnings and instructions marked on your haptic device
3. do not use or place your haptic device near water
4. place your haptic device securely on a stable surface
5. make sure that the workspace of your haptic device is free of objects
6. do not overload wall outlets and extension cords as this can result in a risk of fire or electrical shock
7. switch off your haptic device when it is not in use
8. to reduce the risk of electrical shock, do not disassemble your haptic device

3. setting up the omega.x haptic device

This section describes the different steps to follow to safely set up your omega.x haptic device before use.

IMPORTANT: PLEASE KEEP THE ORIGINAL PACKAGING. ONLY USE THE ORIGINAL PACKAGING DURING STORING OR SHIPPING.

3.1 unpacking the device

Before unpacking the omega.x haptic device, remove the haptic device foam stabilizer and the accessories foam section located inside the shipping box.

figure 2 – view when opening the shipping box

Carefully remove the haptic device and the foam stabilizer from the box, then remove the foam stabilizer from the haptic device.

figure 3 – view of the shipping box after removal of the foam stabilizer and accessories

The accessories compartment contains the power supply, power and USB cables, as well as the USB flash drive.

figure 4 – omega.x accessories

3.2 installing the power supply

Plug the power supply into the power connector. For safety purposes you should only operate your omega.x haptic device using the original Force Dimension power supply that came with your haptic device controller.
Replacement power supplies can be ordered directly from Force Dimension.

4. configuring the omega.x under Windows

The USB driver must first be installed onto your system prior to connecting the omega.x to the computer. To do this, perform the following steps:
1. plug the Force Dimension USB flash drive into your Windows computer
2. open the \Windows folder on the USB flash drive and select the appropriate \32-bit or \64-bit subfolder according to the operating system version on your computer
3. run the installation program and follow its instructions

4.1 installation description

The installation program creates the following subfolders in C:\Program Files\Force Dimension\sdk-<version>:

\bin subfolder
This directory contains the demonstration executables and the DLL files required to run the omega.x software. The required DLL files are also copied to the Windows system folder during the installation.

\drivers subfolder
This directory contains the USB drivers required to operate your haptic device.

\examples subfolder
This directory contains the demonstration programs. Example applications are described in section 7.4 and come with their full source code.

\doc subfolder
All documentation files and notices are located in that directory.

\manuals subfolder
All hardware user manuals are located in that directory.

\lib, \include subfolders
These directories contain the files required to compile your application with the Force Dimension SDK. Please refer to the on-line programming manual for more information.

4.2 installing the drivers

USB drivers
The omega.x requires the Force Dimension USB driver. These drivers are installed automatically, and no additional step is required.

5. configuring the omega.x under Linux

5.1 installing the software

The Force Dimension development folder must be installed onto your system before the omega.x can be used.
To do this, perform the following steps:
1. plug the Force Dimension USB flash drive into your Linux computer
2. extract the sdk-<version>.tar.gz archive for your system architecture from the \Linux subfolder to the desired location (typically your home folder) by running the following command within the target folder:
tar -zxvf sdk-<version>.tar.gz
3. this will create a sdk-<version> development folder in the target location

5.2 installation description

The development folder contains the following directories:

\bin subfolder
This directory contains the demonstration executables and the binary files required to run the omega.x software.

\examples subfolder
This directory contains the demonstration programs. Example applications are described in section 7.4 and come with their full source code.

\doc subfolder
All documentation files and notices are located in this subfolder.

\manuals subfolder
All hardware user manuals are located in that directory.

\lib, \include subfolders
These directories contain the files required to compile your application with the Force Dimension SDK. Please refer to the on-line programming manual for more information.

5.3 installing the drivers

The Linux version of the Force Dimension SDK requires the development packages for libusb-1.0 to be installed on your Linux distribution.

IMPORTANT: PLEASE NOTE THAT USB ACCESS TO THE HAPTIC DEVICE REQUIRES SUPERUSER PRIVILEGES ON MOST LINUX DISTRIBUTIONS
To do this, perform the following steps:1.plug the Force Dimension USB flash drive into your Apple computer2.open the sdk-<version>.dmg file for your version of macOS from the \macOS folder andextract the sdk-<version> folder to the desired location (typically your home folder) 3.this will create a sdk-<version> development folder in the target location6.2installation descriptionThe development folder contains the following directories:\bin subfolderThis directory contains the demonstration executables and the binary files required to run the omega.x software.\examples subfolderThis directory contains the demonstration programs. Example applications described in section 7.4 and come with their full source code.\doc subfolderAll documentation files and notices are located in this subfolder.\manuals subfolderAll hardware user manuals are located in that directory.\lib,\include subfoldersThese directories contain the files required to compile your application with the Force Dimension SDK. Please refer to the online programming manual for more information.6.3installing the driversThe macOS version of the Force Dimension SDK uses Apple’s native USB drivers. No further instal-lation is required.7.operating the omega.x7.1coordinate systembase translationThe position of the center of the end-effector (handle) is expressed in Cartesian coordinate and in IUS (metric) unit. Figure 5 illustrates the coordinate system.The actual origin of the coordinate system (0,0,0) is located on a virtual point situated at the cen-ter of the physical workspace of the haptic device.Z-axisY-axisX-axisfigure 5 – Cartesian coordinate system of the omega.x haptic devicewrist orientationThe omega.6 and omega.7 haptic devices incorporate a rotational wrist. The orientation of the wrist is expressed by a reference frame R wrist which is numerically represented using a 3x3 rotation matrix. 
This reference frame is expressed in relation to the world coordinate system described in figure 5 and is computed from the angle values returned by the joint sensors mounted on each revolute axis of the wrist.

figure 6 – reference frame of the wrist (omega.6 and omega.7 haptic devices)

gripper angle
The angular position of the force gripper is returned in either degrees or radians. A positive angle value is returned for right-hand omega.7 haptic devices; a negative angle value is returned for left-hand haptic devices. Angular values closer to zero correspond to configurations where the force gripper is closed. Opening the force gripper increases the magnitude of the angle.

7.2 operating modes

status indicators
The status LED displays the status of the system:

> LED OFF: the system is off
> LED ON: the system is ready
> LED FLASHING (fast): the system requires calibration
> LED FLASHING (slow): the wrist requires manual calibration (omega.6 or omega.7)

While the status LED is ON, it is possible to read the position of the end-effector, but no forces can be applied. Forces must be enabled by pressing the force button. When the forces are enabled, the force LED is turned ON. Forces can be disabled by pressing the force button again.

features

calibration
Calibration is necessary to obtain accurate, reproducible localization of the end-effector within the workspace of the device. The omega.x is designed in such a way that there can be no drift of the calibration over time, so the procedure only needs to be performed once, when the device is powered ON.

The calibration procedure consists of placing the calibration pole in the dedicated calibration pit. The device detects when the calibration position is reached and the status LED stops flashing. Figure 7 illustrates the calibration procedure.
After the initial calibration described above, the LED will stop flashing (omega.3).

figure 7 – calibration procedure

On the omega.6 and omega.7, the status LED will blink at a slower frequency, indicating that the wrist is usable but not fully calibrated. To fully calibrate the omega.6 and omega.7 wrists, each of the three rotation axes of the wrist (and the grasping axis of the omega.7) must be moved by hand to its respective end-stop position. When the device has reached all end-stops, the LED stops flashing and the device is fully calibrated.

Alternatively, an automatic calibration procedure of the omega.x active axes can be performed in software using the Force Dimension SDK, for example by launching the HapticInit application, which automatically drives the device throughout its workspace. Please do not touch the device during this automatic calibration procedure. After calibration, the device is ready for normal operation.

gravity compensation
To prevent user fatigue and to improve dexterity during manipulation, the omega.x features gravity compensation. When gravity compensation is enabled, the weights of the arms and of the end-effector are taken into account and a vertical force is dynamically applied to the end-effector in addition to the desired user force command. Please note that gravity compensation is computed on the host computer, and therefore only gets updated every time a new force command is sent to the haptic device by the application. Gravity compensation is enabled by default and can be disabled through the Force Dimension SDK.

forces
By default, when an application opens a connection to the device, the forces are disabled. Forces can be enabled or disabled at any time by pressing the force button.

brakes
The device features electromagnetic brakes that can be enabled through the Force Dimension SDK. These brakes are enabled by default every time the forces are disabled.
When the brakes are engaged, a viscous force is created that prevents rapid movement of the end-effector.

safety features
The omega.x includes several safety features designed to prevent the uncontrolled application of forces and possible damage to the device. These safety features can be adjusted or disabled via a protected command in the Force Dimension SDK.

IMPORTANT
PLEASE NOTE THAT THE WARRANTY MAY NOT APPLY IF THE SAFETY FEATURES HAVE BEEN OVERRIDDEN.

When a connection to the omega.x haptic device is made from the computer, the forces are automatically disabled to avoid unexpected behaviors. The user must press the force button to enable the forces. This feature can be bypassed through the Force Dimension SDK.

If the control unit detects that the velocity of the end-effector is higher than the programmed security limit, the forces are automatically disabled and the device brakes are engaged to prevent a possibly dangerous acceleration of the device. This velocity threshold can be adjusted or removed through the Force Dimension SDK.

Please refer to the on-line programming manual for more information.

7.3 running the Haptic Desk program

The Haptic Desk application is available as a test and diagnostic program and offers the following capabilities:

> list all Force Dimension haptic devices connected to the system
> test the position reading of the haptic device in Cartesian coordinates
> test all force and torque capabilities of the haptic device
> run the auto-calibration procedure
> read the haptic device status
> read the haptic device encoder sensors individually
> read the haptic device user button (if available)

figure 8 – Haptic Desk test and diagnostic program

7.4 running the demonstration programs

Two demonstration programs can also be used to diagnose the device.
The source code and an executable file for each of these demonstration programs are provided in two separate directories named \gravity and \torus.

Once the system is set up, we suggest running the gravity application to check that everything is working properly and to evaluate your system's performance independently of the graphics rendering performance. The torus application will allow you to test the combined performance of haptics and graphics rendering.

gravity example
This example program runs a best-effort haptic loop to compensate for gravity. The appropriate forces are applied at any point in space to balance the device end-effector so that it is safe to let go of it. The refresh rate of the haptic loop is displayed in the console every second.

figure 9 – gravity example

torus example
The torus example displays an OpenGL scene with haptic feedback.

figure 10 – torus example

note – OpenGL must be installed for your compiler and development environment to compile this example. Please refer to your compiler documentation for more information.
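The idea behind the gravity example can be sketched in a few lines. The function and constant names below are hypothetical stand-ins, NOT the Force Dimension SDK API (the real demo uses the SDK's C interface), and the end-effector mass is an assumed placeholder value:

```python
# Illustrative sketch of a gravity-compensation step, as described in
# section 7: each loop iteration sends a vertical force that cancels
# the end-effector weight, on top of any user force command.
EFFECTOR_MASS = 0.19   # kg -- hypothetical placeholder, not a device spec
G = 9.81               # m/s^2

def gravity_compensation_force():
    # The compensating force is independent of position: it simply
    # cancels the end-effector weight along the vertical (Z) axis.
    return (0.0, 0.0, EFFECTOR_MASS * G)

def haptic_loop_step(set_force, user_force=(0.0, 0.0, 0.0)):
    # set_force is a stand-in for the SDK call that commands a force.
    gx, gy, gz = gravity_compensation_force()
    ux, uy, uz = user_force
    set_force(ux + gx, uy + gy, uz + gz)

# With no user force, the commanded vertical force equals m*g:
sent = []
haptic_loop_step(lambda fx, fy, fz: sent.append((fx, fy, fz)))
assert sent[0] == (0.0, 0.0, EFFECTOR_MASS * G)
```

Because this computation runs on the host, the compensation is only as fresh as the last force command sent to the device, which is exactly the caveat noted in the gravity compensation section above.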
8. technical information - omega.3

workspace     translation ∅ 160 mm x L 110 mm
forces        continuous 12.0 N
resolution    linear < 0.01 mm
dimensions    height 270 mm, width 300 mm, depth 350 mm

electronics
interface     standard USB 2.0
rate          up to 4.0 kHz
power         universal 100V - 240V

software
platforms     Microsoft Windows; Linux (all distributions); Apple macOS; Blackberry QNX; WindRiver VxWorks
libraries     Haptics SDK; Robotics SDK

features
ergonomics    the device can be used with both left and right hands
structure     delta-based parallel kinematics; active gravity compensation
calibration   automatic; driftless
user input    1 user button
safety        velocity monitoring; electromagnetic damping

9. technical information - omega.6

workspace     translation ∅ 160 mm x L 110 mm; rotation 240 x 140 x 320 deg
forces        continuous 12.0 N
resolution    linear < 0.01 mm; angular 0.09 deg
dimensions    height 270 mm, width 300 mm, depth 350 mm

electronics
interface     standard USB 2.0
rate          up to 4.0 kHz
power         universal 100V - 240V

software
platforms     Microsoft Windows; Linux (all distributions); Apple macOS; Blackberry QNX; WindRiver VxWorks
libraries     Haptics SDK; Robotics SDK

features
ergonomics    available in left- and right-hand configurations
structure     delta-based parallel kinematics; hand-centered rotation movements; decoupling between translation and rotation movements; active gravity compensation
calibration   automatic; driftless
user input    1 user button
safety        velocity monitoring; electromagnetic damping

10. technical information - omega.7

workspace     translation ∅ 160 mm x L 110 mm; rotation 240 x 140 x 180 deg; gripper 25 mm
forces        continuous 12.0 N; grasping ± 8 N
resolution    linear < 0.01 mm; angular 0.09 deg; gripper linear 0.006 mm
dimensions    height 270 mm, width 300 mm, depth 350 mm

electronics
interface     standard USB 2.0
rate          up to 4.0 kHz
power         universal 100V - 240V

software
platforms     Microsoft Windows; Linux (all distributions); Apple macOS; Blackberry QNX; WindRiver VxWorks
libraries     Haptics SDK; Robotics SDK

features
ergonomics    available in left- and right-hand configurations
structure     delta-based parallel kinematics; hand-centered rotation movements; decoupling between translation and rotation movements; active gravity compensation
calibration   automatic; driftless
user input    1 simulated button using the force gripper
safety        velocity monitoring; electromagnetic damping

notice

The information in this document is provided for reference only. Force Dimension does not assume any liability arising out of the application or use of the information or product described herein. This document may contain or reference information and products protected by copyrights or patents and does not convey any license under the patent rights of Force Dimension, nor the rights of others. All trademarks or trade names are properties of their respective owners.

© Copyright 2020 – Force Dimension. All rights reserved.
NEC Express5800 Server Series i Model i120Ra-e1 Energy-Saving Server Datasheet
NEC Express5800 Server Series i Model i120Ra-e1
NEC Express5800/100 Series i120Ra-e1

Note: For Linux support, go to the NEC website at /express, or contact your local NEC sales office.

Product Specifications

Processors
Intel® Xeon® Processor L5240 (3GHz/1333MHz FSB/6MB L2 Cache)
Intel® Xeon® Processor L5215 (1.86GHz/1066MHz/6MB L2 Cache)
Intel® Xeon® Processor L5410 (2.33GHz/1333MHz/2x6MB L2 Cache)

Memory
Up to 24GB (6 x 4GB) of DDR2-667 SDRAM DIMM with ECC, x4 SDDC, and online sparing

Maximum storage
3.5-inch SATA: 3TB (3 x 1TB)
3.5-inch SAS: 1.35TB (3 x 450GB)*
* Requires an optional RAID Controller

Removable media
Option

Expansion slots
Total: 2 slots*
1 x PCI Express x8
1 x PCI Express x4
* By replacing the standard riser card with an optional Riser Card [N8116-19], a low-profile PCI-Express x8 slot and a full-height 64bit/100MHz PCI-X slot become available.

Video
Integrated in Server Management Controller (8MB)

Network
2 x 1000BASE-T/100BASE-TX/10BASE-T

Power consumption
341VA/335W

Redundant power supply
–

Redundant cooling fan
–

External interfaces
2 x VGA, 1 x keyboard, 1 x mouse, 4 x USB 2.0, 1 x serial, 2 x LAN, 1 x management LAN, 2 x video

Operating systems
Microsoft® Windows Server® 2003, Standard/Enterprise Editions (SP1 or later)
Microsoft® Windows Server® 2003 R2, Standard/Enterprise Editions (x64)
Microsoft® Windows Server® 2008, Standard/Enterprise (x64)
Red Hat® Linux™

Dimensions and weight
428W x 579D x 43H (1U)*, Std: 10.8kg, Max: 14.4kg
* 481W x 615D x 43H (including a stabilizer and protruding objects)

Cat. No. E08BSG
NEC EXPRESS5800
For further information, please contact:

• Microsoft and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
• Intel and Xeon are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
• Linux is a trademark of Linus Torvalds.
• Red Hat is a registered trademark of Red Hat, Inc. in the U.S.
• All other products, brands, or trade names used in this document are trademarks or registered trademarks of their respective holders.
• Specifications are subject to change without notice.
port_tutorial_CPW
A microstrip port at left has sufficient surface area for fringing field behavior, while the one at right forces field attachment to the port side walls, even if the surrounding area was designated as a radiation boundary.
Coplanar Waveguide: Dimensioning
HFSS v8 Training
CPW Dimensions
CPW is generally defined by center strip width w, gap width g, substrate height h, and substrate dielectric material. Metal thickness is also important, especially when the thickness t ≥ 0.1w or t ≥ 0.1g. For FG-CPW, the width of the side grounds, S, must also be considered in port design.
H fields circulate around the center trace. In GCPW, field lines may also extend from the bottom ground to the center trace (the field is distributed from the ground toward the center conductor). Fields should not extend strongly to the port periphery, except toward the bottom ground if appropriate: in GCPW, one boundary of the wave port is the ground, and field lines run from it to the center trace.

Characteristic impedance: CPW modes are quasi-TEM, so the three available definitions of characteristic impedance (Zpi, Zpv, and Zvi) should be nearly the same. CPW characteristic impedance should remain relatively flat with respect to frequency.

[figure: ungrounded CPW mode and grounded CPW mode]

CPW Wave Ports
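As a quick cross-check of the characteristic impedance reported by the solver, the standard quasi-static conformal-mapping formula for ungrounded CPW can be evaluated directly. The sketch below assumes zero metal thickness, an electrically thick substrate, and infinitely wide side grounds, so it will drift from the full-wave HFSS result when those assumptions break down:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean, used to evaluate the complete
    elliptic integral K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellip_k(k):
    # Complete elliptic integral of the first kind, modulus k.
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def cpw_z0(w, g, eps_r):
    """Quasi-static Z0 of ungrounded CPW via conformal mapping.
    Assumes t = 0, thick substrate, infinite side grounds."""
    k = w / (w + 2 * g)              # modulus from strip width and gaps
    eps_eff = (eps_r + 1) / 2        # fields split between air and substrate
    kp = math.sqrt(1 - k * k)        # complementary modulus
    return 30 * math.pi / math.sqrt(eps_eff) * ellip_k(kp) / ellip_k(k)

# Example dimensions (hypothetical): 1 mm strip, 0.5 mm gaps on FR4-like
# material -- the result lands in the low-70-ohm range.
z0 = cpw_z0(w=1.0e-3, g=0.5e-3, eps_r=4.4)
print(f"Z0 = {z0:.1f} ohm")
```

Because the three port impedance definitions (Zpi, Zpv, Zvi) should nearly coincide for a quasi-TEM CPW mode, a large disagreement between any of them and this closed-form estimate usually points to a port sizing problem rather than a material one.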
arXiv:cond-mat/0206062v1 [cond-mat.soft] 5 Jun 2002

Force Distributions in three dimensional compressible granular packs

J. Michael Erikson, Nathan W. Mueggenburg, Heinrich M. Jaeger, Sidney R. Nagel
The James Franck Institute and Department of Physics
The University of Chicago
5640 S. Ellis Ave., Chicago, IL 60637
(February 1, 2008)

We present an experimental investigation of the probability distribution of normal contact forces, P(F), at the bottom boundary of static three dimensional packings of compressible granular materials. We find that the degree of deformation of individual grains plays a large role in determining the form of this distribution. For small amounts of deformation we find a small peak in P(F) below the mean force with an exponential tail for forces larger than the mean force. As the degree of deformation is increased the peak at the mean force grows in height and the slope of the exponential tail increases.

PACS numbers: 81.05.Rm, 83.80.Fg

It is known that forces within a granular material are distributed in a highly inhomogeneous manner [1]. The largest interparticle forces are arranged in a network of force chains while other particles are shielded from the external force [2-6]. One quantitative way of analyzing the inhomogeneities of these force networks is to measure the probability distribution, P(F), of normal forces, F, between neighboring particles. Experiments have shown that under a wide range of parameters, P(F) at the boundaries of granular packs decays exponentially for forces larger than the mean force, [8].

An appropriate number was added to the lowest bin to account for beads with forces too small to leave a resolvable mark. Since the bottom layer was crystalline, the total number of contacts was known and agreed well with the number of observed contacts at high applied forces. All forces for a given experimental run were normalized to the average force for that run and the resulting probability distribution, P(f), of normalized
forces, f = F/⟨F⟩, and average deformation are indicated. Error bars represent statistical deviations from multiple experimental realizations. The solid line is a fit to an exponential over the large force region resulting in slopes listed in table I.

TABLE I. Exponential decay constants for P(f) at large forces and peak size as explained in the text for various types of beads at various levels of forcing and deformation.

Bead Type     Force (N)   Deformation (%)   Decay constant   Peak size
Soft Rubber   1.6         25                -2.4             1.8
Soft Rubber   2.0         30                -2.6             2.0
Soft Rubber   3.0         37                -2.8             6.0
Soft Rubber   4.4         45                -3.8             29

peak height occur when the average deformation exceeds roughly 30%. To check this trend more directly, we performed the same experiments for a single type of bead (soft rubber) with varying amounts of pressure, as shown in figure 2. As before, the slope of P(f) remains essentially unchanged (the peak height does not exceed a value of 2) until the average degree of deformation exceeds roughly 30%. Beyond this amount of deformation, the peak size increases sharply and P(f) evolves into a much more symmetric form (figure 2c,d). Remarkably, this evolution in the shape of P(f) does not seem to be connected with a change to Gaussian behavior. Near the peak, the distribution is well fit to an exponential decay (see fitted lines in figures 1 and 2); at larger forces (f > 2) the decay is actually slower than exponential and shows the opposite trend to what would be expected if the distribution were to revert to a Gaussian profile at large deformations.
FIG. 2. P(f) with varying applied force. Probability distributions of normal forces at the bottom boundary of amorphous packings of soft rubber beads (panels a-d: F = 1.6 N, 2.0 N, 3.0 N, and 4.4 N at 25%, 30%, 37%, and 45% deformation). Each plot represents an average over 7 to 9 experimental runs. Part c is equivalent to figure 1d. The error bars represent statistical variations among experimental runs. The solid lines are fit to exponentials over the large force region. The inset of d compares the data to the Gaussian form obtained from a fit to P(f) of a control experiment taken with a block of rubber on top of a single layer of glass beads.

An intriguing question is to what extent the force distribution of the highly compressed rubber packings resembles that of a homogeneous block of rubber. The inset of figure 2d compares data from the main panel to a fit to data from a control experiment performed on a block of rubber on top of a single layer of glass beads. The width of this fitted distribution is due to the resolution of the carbon paper technique. Note that at large deformations the force distribution from a pack of rubber beads does not resemble that of a rubber block; in particular, the distribution of forces from a pack of rubber beads is significantly broader than that of a rubber block. Because of the limitations of the carbon paper technique, we are unable to rule out any residual influence of the single layer of glass beads on the final shape of P(f). However, regardless of the effect on the exact form of the probability distribution, any observed changes in this distribution as the type of rubber bead is changed or as the amount of deformation is increased must be connected to properties of the rubber packing itself.

We find that the degree of deformation of individual particles does play a large role in determining the form of the probability distribution of forces within a granular pack. When the degree of deformation is small, either with hard particles or with soft particles under a small force, we find that P(f) has an exponential decay for forces larger than the mean force and a small peak near the mean force, consistent with previous experimental investigations [6-11]. As the amount of deformation increases beyond approximately 30%, the peak near the mean force grows more pronounced. This peaking behavior is in agreement with simulations, although at higher deformations than would have been expected [11-15]. For forces larger than the mean force, we do not see a Gaussian decay. The distribution continues to decay exponentially (or possibly even slower) at large forces. If the single layer of glass beads at the bottom surface does not alter the shape of the distribution, then this dependence is contrary to available simulation results and is as yet unexplained.

We thank A. Bushmaker, E. Corwin, A. Marshall, M. Möbius, and D. Mueth for their assistance with this project. This work was supported by NSF under Grant No. CTS-9710991, by the MRSEC Program of the NSF under Grant No. DMR-9808595, and by the MRSEC REU program at The University of Chicago.