First Mapping Observations of Two Possible Cloud Collision Candidates IRAS 02459+6029 and 05363
Stata MI Command Guide (Stata MI命令详解说明书)
Multiple imputation

Missing data occur frequently in practice. MI is one of the most flexible ways of handling missing data. Its three stages are multiply imputing missing values, estimating model parameters from each imputed dataset, and combining the multiple estimation results in one final inference. In Stata, you can use the mi command to perform these three stages in two simple steps.

• Support for all three stages of MI: impute missing values, estimate model parameters, and combine estimation results
• Imputation
  – Nine univariate methods
  – Multivariate methods: MICE (FCS) and MVN
  – Monotone and arbitrary missing-value patterns
  – Add your own methods
• Estimation: estimate and combine in one easy step
• Inference: linear and nonlinear combinations, hypothesis testing, predictions
• MI data: efficient storage, verification, import, full data management
• Control Panel to guide you through your MI analysis

Impute missing data

Impute missing values using mi impute. Use predictive mean matching, linear, logistic, Poisson, and other regressions to impute variables of different types. Use multiple imputation using chained equations (MICE), multivariate normal imputation (MVN), and monotone imputation to impute multiple variables. Add your own imputation methods. With MICE, build flexible imputation models: use any of the nine univariate methods, customize prediction equations, include functions of imputed variables, perform conditional imputation, and more.

Already have imputed data? Simply import them into Stata for further MI analysis. For example, to import imputed datasets imp1, imp2, ..., imp5 from NHANES, use

. mi import nhanes1 mymidata, using(imp{1-5}) id(obs)

Manage imputed data

At any stage of your analysis, perform data management as if you were working with one dataset, and mi will replicate the changes correctly across the imputed datasets. Stata offers full data management of MI data: create or drop variables and observations, change values, merge or append files, add imputations, and more.

Verify imputed data

Accidentally dropped an observation from one of the imputed datasets, changed a value of a variable, dropped a variable, or ...? Stata verifies the integrity of your MI data each time the mi command is run. (You can also do this manually by using mi update.) For example, Stata checks that complete variables contain the same values in the imputed data as in the original data, that incomplete variables contain the same nonmissing values in the imputed data as in the original, and more. If an inconsistency is detected, Stata tries to fix the problem and notifies you about the result.

Estimate and combine: One easy step

Estimate model parameters from each imputation, and combine the results in one easy step using mi estimate. Choose from many supported estimation commands, and simply prefix them with mi estimate. Select how many imputations to use during estimation, request a detailed MI summary, and more.

Inference

After estimation, perform hypothesis testing, estimate linear and nonlinear combinations and transformations of coefficients, compute predictions, and more.

In addition...

Stata's mi command uniquely provides full data management support, verification of the integrity of MI data at any step of the analysis, and multiple formats for storing MI data efficiently.
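To make the "combine" stage concrete, the following is a minimal Python/numpy sketch of Rubin's rules, the standard pooling procedure that mi estimate applies automatically. It is an illustration only, not Stata code and not StataCorp's implementation, and the example data values are invented.

    import numpy as np

    def pool_rubin(estimates, variances):
        """Pool one coefficient estimated on m imputed datasets (Rubin's rules)."""
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = estimates.size
        q_bar = estimates.mean()            # pooled point estimate
        w_bar = variances.mean()            # average within-imputation variance
        b = estimates.var(ddof=1)           # between-imputation variance
        t = w_bar + (1.0 + 1.0 / m) * b     # total variance of the pooled estimate
        return q_bar, np.sqrt(t)

    # Example: a regression coefficient estimated on m = 5 imputed datasets.
    coefs = [1.02, 0.97, 1.10, 1.05, 0.99]
    std_errs = [0.20, 0.21, 0.19, 0.22, 0.20]
    est, se = pool_rubin(coefs, [s**2 for s in std_errs])
    print(f"pooled estimate {est:.3f}, pooled std. error {se:.3f}")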
Control Panel

Use an intuitive MI Control Panel to guide you through all the stages of your MI analysis, from examining missing values and their patterns to performing MI inference. The corresponding Stata commands are produced with every step for reproducibility and, if desired, later interactive use.

Multiple storage formats

Stata offers several styles for storing your MI data: you can store imputations in one file or in separate files, in one variable or in multiple variables. Some styles are more memory efficient, and others are more computationally efficient. Also, some tasks are easier in specific styles. You can start with one style at the beginning of your MI analysis, for example, "full long" (flong), in which imputations are saved as extra observations:

. mi set flong

If needed, switch to another style during your mi session, for example, to the wide style, in which imputations are saved as extra variables:

. mi convert wide

Add your own imputation methods

And you can even add your own imputation methods! Can't find an imputation method you need? With little effort, you can program your own. Write a program to impute your variables once, and then simply use it with mi impute to obtain multiple imputations.

program mi_impute_cmd_mymethod
    ... program imputing missing values once ...
end

. mi impute mymethod ..., add(5) ...
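For readers curious what a single "impute once" pass of a univariate method might look like outside Stata, here is a rough Python sketch of simplified predictive mean matching, one of the univariate methods mentioned above. It is a hedged illustration with made-up data, not Stata's mi impute pmm algorithm, which additionally draws the regression parameters from their posterior before matching.

    import numpy as np

    def pmm_impute_once(y, X, k=3, seed=None):
        """One simplified predictive-mean-matching pass over a single variable y."""
        rng = np.random.default_rng(seed)
        y = np.asarray(y, dtype=float)
        miss = np.isnan(y)
        Xd = np.column_stack([np.ones(len(y)), X])      # design matrix with intercept

        # Fit a linear regression of y on X using the observed rows only.
        beta, *_ = np.linalg.lstsq(Xd[~miss], y[~miss], rcond=None)
        pred = Xd @ beta                                # predicted means for all rows

        y_filled = y.copy()
        obs_idx = np.flatnonzero(~miss)
        for i in np.flatnonzero(miss):
            # Donors: the k observed rows whose predicted mean is closest to row i's.
            donors = obs_idx[np.argsort(np.abs(pred[obs_idx] - pred[i]))[:k]]
            y_filled[i] = y[rng.choice(donors)]         # copy one donor's observed value
        return y_filled

    # Tiny example with invented data and two missing values.
    rng = np.random.default_rng(0)
    x = rng.normal(size=20)
    y = 2.0 * x + rng.normal(scale=0.5, size=20)
    y[[3, 11]] = np.nan
    print(pmm_impute_once(y, x.reshape(-1, 1), k=3, seed=1))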
SPSS English–Chinese Dictionary of Statistical Terms (SPSS中英文对照词典)
SPSS中英文对照词典AAbsolute deviation, 绝对离差Absolute number, 绝对数Absolute residuals, 绝对残差Acceleration array, 加速度立体阵Acceleration in an arbitrary direction, 任意方向上的加速度Acceleration normal, 法向加速度Acceleration space dimension, 加速度空间的维数Acceleration tangential, 切向加速度Acceleration vector, 加速度向量Acceptable hypothesis, 可接受假设Accumulation, 累积Accuracy, 准确度Actual frequency, 实际频数Adaptive estimator, 自适应估计量Addition, 相加Addition theorem, 加法定理Additivity, 可加性Adjusted rate, 调整率Adjusted value, 校正值Admissible error, 容许误差Aggregation, 聚集性Alternative hypothesis, 备择假设Among groups, 组间Amounts, 总量Analysis of correlation, 相关分析Analysis of covariance, 协方差分析Analysis of regression, 回归分析Analysis of time series, 时间序列分析Analysis of variance, 方差分析Angular transformation, 角转换ANOVA (analysis of variance), 方差分析ANOVA Models, 方差分析模型Arcing, 弧/弧旋Arcsine transformation, 反正弦变换Area under the curve, 曲线面积AREG , 评估从一个时间点到下一个时间点回归相关时的误差ARIMA, 季节和非季节性单变量模型的极大似然估计Arithmetic grid paper, 算术格纸Arithmetic mean, 算术平均数Arrhenius relation, 艾恩尼斯关系Assessing fit, 拟合的评估Associative laws, 结合律Asymmetric distribution, 非对称分布Asymptotic bias, 渐近偏倚Asymptotic efficiency, 渐近效率Asymptotic variance, 渐近方差Attributable risk, 归因危险度Attribute data, 属性资料Attribution, 属性Autocorrelation, 自相关Autocorrelation of residuals, 残差的自相关Average, 平均数Average confidence interval length, 平均置信区间长度Average growth rate, 平均增长率BBar chart, 条形图Bar graph, 条形图Base period, 基期Bayes' theorem , Bayes定理Bell-shaped curve, 钟形曲线Bernoulli distribution, 伯努力分布Best-trim estimator, 最好切尾估计量Bias, 偏性Binary logistic regression, 二元逻辑斯蒂回归Binomial distribution, 二项分布Bisquare, 双平方Bivariate Correlate, 二变量相关Bivariate normal distribution, 双变量正态分布Bivariate normal population, 双变量正态总体Biweight interval, 双权区间Biweight M-estimator, 双权M估计量Block, 区组/配伍组BMDP(Biomedical computer programs), BMDP统计软件包Boxplots, 箱线图/箱尾图Breakdown bound, 崩溃界/崩溃点CCanonical correlation, 典型相关Caption, 纵标目Case-control study, 病例对照研究Categorical variable, 分类变量Catenary, 悬链线Cauchy distribution, 柯西分布Cause-and-effect relationship, 因果关系Cell, 单元Censoring, 终检Center of symmetry, 对称中心Centering and scaling, 中心化和定标Central tendency, 集中趋势Central value, 中心值CHAID -χ2 Automatic Interaction Detector, 卡方自动交互检测Chance, 机遇Chance error, 随机误差Chance variable, 随机变量Characteristic equation, 特征方程Characteristic root, 特征根Characteristic vector, 特征向量Chebshev criterion of fit, 拟合的切比雪夫准则Chernoff faces, 切尔诺夫脸谱图Chi-square test, 卡方检验/χ2检验Choleskey decomposition, 乔洛斯基分解Circle chart, 圆图Class interval, 组距Class mid-value, 组中值Class upper limit, 组上限Classified variable, 分类变量Cluster analysis, 聚类分析Cluster sampling, 整群抽样Code, 代码Coded data, 编码数据Coding, 编码Coefficient of contingency, 列联系数Coefficient of determination, 决定系数Coefficient of multiple correlation, 多重相关系数Coefficient of partial correlation, 偏相关系数Coefficient of production-moment correlation, 积差相关系数Coefficient of rank correlation, 等级相关系数Coefficient of regression, 回归系数Coefficient of skewness, 偏度系数Coefficient of variation, 变异系数Cohort study, 队列研究Column, 列Column effect, 列效应Column factor, 列因素Combination pool, 合并Combinative table, 组合表Common factor, 共性因子Common regression coefficient, 公共回归系数Common value, 共同值Common variance, 公共方差Common variation, 公共变异Communality variance, 共性方差Comparability, 可比性Comparison of bathes, 批比较Comparison value, 比较值Compartment model, 分部模型Compassion, 伸缩Complement of an event, 补事件Complete association, 完全正相关Complete dissociation, 完全不相关Complete statistics, 完备统计量Completely randomized design, 完全随机化设计Composite event, 联合事件Composite events, 复合事件Concavity, 凹性Conditional expectation, 条件期望Conditional likelihood, 条件似然Conditional probability, 条件概率Conditionally linear, 
依条件线性Confidence interval, 置信区间Confidence limit, 置信限Confidence lower limit, 置信下限Confidence upper limit, 置信上限Confirmatory Factor Analysis , 验证性因子分析Confirmatory research, 证实性实验研究Confounding factor, 混杂因素Conjoint, 联合分析Consistency, 相合性Consistency check, 一致性检验Consistent asymptotically normal estimate, 相合渐近正态估计Consistent estimate, 相合估计Constrained nonlinear regression, 受约束非线性回归Constraint, 约束Contaminated distribution, 污染分布Contaminated Gausssian, 污染高斯分布Contaminated normal distribution, 污染正态分布Contamination, 污染Contamination model, 污染模型Contingency table, 列联表Contour, 边界线Contribution rate, 贡献率Control, 对照Controlled experiments, 对照实验Conventional depth, 常规深度Convolution, 卷积Corrected factor, 校正因子Corrected mean, 校正均值Correction coefficient, 校正系数Correctness, 正确性Correlation coefficient, 相关系数Correlation index, 相关指数Correspondence, 对应Counting, 计数Counts, 计数/频数Covariance, 协方差Covariant, 共变Cox Regression, Cox回归Criteria for fitting, 拟合准则Criteria of least squares, 最小二乘准则Critical ratio, 临界比Critical region, 拒绝域Critical value, 临界值Cross-over design, 交叉设计Cross-section analysis, 横断面分析Cross-section survey, 横断面调查Crosstabs , 交叉表Cross-tabulation table, 复合表Cube root, 立方根Cumulative distribution function, 分布函数Cumulative probability, 累计概率Curvature, 曲率/弯曲Curvature, 曲率Curve fit , 曲线拟和Curve fitting, 曲线拟合Curvilinear regression, 曲线回归Curvilinear relation, 曲线关系Cut-and-try method, 尝试法Cycle, 周期Cyclist, 周期性DD test, D检验Data acquisition, 资料收集Data bank, 数据库Data capacity, 数据容量Data deficiencies, 数据缺乏Data handling, 数据处理Data manipulation, 数据处理Data processing, 数据处理Data reduction, 数据缩减Data set, 数据集Data sources, 数据来源Data transformation, 数据变换Data validity, 数据有效性Data-in, 数据输入Data-out, 数据输出Dead time, 停滞期Degree of freedom, 自由度Degree of precision, 精密度Degree of reliability, 可靠性程度Degression, 递减Density function, 密度函数Density of data points, 数据点的密度Dependent variable, 应变量/依变量/因变量Dependent variable, 因变量Depth, 深度Derivative matrix, 导数矩阵Derivative-free methods, 无导数方法Design, 设计Determinacy, 确定性Determinant, 行列式Determinant, 决定因素Deviation, 离差Deviation from average, 离均差Diagnostic plot, 诊断图Dichotomous variable, 二分变量Differential equation, 微分方程Direct standardization, 直接标准化法Discrete variable, 离散型变量DISCRIMINANT, 判断Discriminant analysis, 判别分析Discriminant coefficient, 判别系数Discriminant function, 判别值Dispersion, 散布/分散度Disproportional, 不成比例的Disproportionate sub-class numbers, 不成比例次级组含量Distribution free, 分布无关性/免分布Distribution shape, 分布形状Distribution-free method, 任意分布法Distributive laws, 分配律Disturbance, 随机扰动项Dose response curve, 剂量反应曲线Double blind method, 双盲法Double blind trial, 双盲试验Double exponential distribution, 双指数分布Double logarithmic, 双对数Downward rank, 降秩Dual-space plot, 对偶空间图DUD, 无导数方法Duncan's new multiple range method, 新复极差法/Duncan新法EEffect, 实验效应Eigenvalue, 特征值Eigenvector, 特征向量Ellipse, 椭圆Empirical distribution, 经验分布Empirical probability, 经验概率单位Enumeration data, 计数资料Equal sun-class number, 相等次级组含量Equally likely, 等可能Equivariance, 同变性Error, 误差/错误Error of estimate, 估计误差Error type I, 第一类错误Error type II, 第二类错误Estimand, 被估量Estimated error mean squares, 估计误差均方Estimated error sum of squares, 估计误差平方和Euclidean distance, 欧式距离Event, 事件Event, 事件Exceptional data point, 异常数据点Expectation plane, 期望平面Expectation surface, 期望曲面Expected values, 期望值Experiment, 实验Experimental sampling, 试验抽样Experimental unit, 试验单位Explanatory variable, 说明变量Exploratory data analysis, 探索性数据分析Explore Summarize, 探索-摘要Exponential curve, 指数曲线Exponential growth, 指数式增长EXSMOOTH, 指数平滑方法Extended fit, 扩充拟合Extra parameter, 附加参数Extrapolation, 外推法Extreme observation, 末端观测值Extremes, 极端值/极值FF distribution, F分布F test, F检验Factor, 因素/因子Factor 
analysis, 因子分析Factor Analysis, 因子分析Factor score, 因子得分Factorial, 阶乘Factorial design, 析因试验设计False negative, 假阴性False negative error, 假阴性错误Family of distributions, 分布族Family of estimators, 估计量族Fanning, 扇面Fatality rate, 病死率Field investigation, 现场调查Field survey, 现场调查Finite population, 有限总体Finite-sample, 有限样本First derivative, 一阶导数First principal component, 第一主成分First quartile, 第一四分位数Fisher information, 费雪信息量Fitted value, 拟合值Fitting a curve, 曲线拟合Fixed base, 定基Fluctuation, 随机起伏Forecast, 预测Four fold table, 四格表Fourth, 四分点Fraction blow, 左侧比率Fractional error, 相对误差Frequency, 频率Frequency polygon, 频数多边图Frontier point, 界限点Function relationship, 泛函关系GGamma distribution, 伽玛分布Gauss increment, 高斯增量Gaussian distribution, 高斯分布/正态分布Gauss-Newton increment, 高斯-牛顿增量General census, 全面普查GENLOG (Generalized liner models), 广义线性模型Geometric mean, 几何平均数Gini's mean difference, 基尼均差GLM (General liner models), 一般线性模型Goodness of fit, 拟和优度/配合度Gradient of determinant, 行列式的梯度Graeco-Latin square, 希腊拉丁方Grand mean, 总均值Gross errors, 重大错误Gross-error sensitivity, 大错敏感度Group averages, 分组平均Grouped data, 分组资料Guessed mean, 假定平均数HHalf-life, 半衰期Hampel M-estimators, 汉佩尔M估计量Happenstance, 偶然事件Harmonic mean, 调和均数Hazard function, 风险均数Hazard rate, 风险率Heading, 标目Heavy-tailed distribution, 重尾分布Hessian array, 海森立体阵Heterogeneity, 不同质Heterogeneity of variance, 方差不齐Hierarchical classification, 组内分组Hierarchical clustering method, 系统聚类法High-leverage point, 高杠杆率点HILOGLINEAR, 多维列联表的层次对数线性模型Hinge, 折叶点Histogram, 直方图Historical cohort study, 历史性队列研究Holes, 空洞HOMALS, 多重响应分析Homogeneity of variance, 方差齐性Homogeneity test, 齐性检验Huber M-estimators, 休伯M估计量Hyperbola, 双曲线Hypothesis testing, 假设检验Hypothetical universe, 假设总体IImpossible event, 不可能事件Independence, 独立性Independent variable, 自变量Index, 指标/指数Indirect standardization, 间接标准化法Individual, 个体Inference band, 推断带Infinite population, 无限总体Infinitely great, 无穷大Infinitely small, 无穷小Influence curve, 影响曲线Information capacity, 信息容量Initial condition, 初始条件Initial estimate, 初始估计值Initial level, 最初水平Interaction, 交互作用Interaction terms, 交互作用项Intercept, 截距Interpolation, 内插法Interquartile range, 四分位距Interval estimation, 区间估计Intervals of equal probability, 等概率区间Intrinsic curvature, 固有曲率Invariance, 不变性Inverse matrix, 逆矩阵Inverse probability, 逆概率Inverse sine transformation, 反正弦变换Iteration, 迭代JJacobian determinant, 雅可比行列式Joint distribution function, 分布函数Joint probability, 联合概率Joint probability distribution, 联合概率分布KK means method, 逐步聚类法Kaplan-Meier, 评估事件的时间长度Kaplan-Merier chart, Kaplan-Merier图Kendall's rank correlation, Kendall等级相关Kinetic, 动力学Kolmogorov-Smirnove test, 柯尔莫哥洛夫-斯米尔诺夫检验Kruskal and Wallis test, Kruskal及Wallis检验/多样本的秩和检验/H检验Kurtosis, 峰度LLack of fit, 失拟Ladder of powers, 幂阶梯Lag, 滞后Large sample, 大样本Large sample test, 大样本检验Latin square, 拉丁方Latin square design, 拉丁方设计Leakage, 泄漏Least favorable configuration, 最不利构形Least favorable distribution, 最不利分布Least significant difference, 最小显著差法Least square method, 最小二乘法Least-absolute-residuals estimates, 最小绝对残差估计Least-absolute-residuals fit, 最小绝对残差拟合Least-absolute-residuals line, 最小绝对残差线Legend, 图例L-estimator, L估计量L-estimator of location, 位置L估计量L-estimator of scale, 尺度L估计量Level, 水平Life expectance, 预期期望寿命Life table, 寿命表Life table method, 生命表法Light-tailed distribution, 轻尾分布Likelihood function, 似然函数Likelihood ratio, 似然比line graph, 线图Linear correlation, 直线相关Linear equation, 线性方程Linear programming, 线性规划Linear regression, 直线回归Linear Regression, 线性回归Linear trend, 线性趋势Loading, 载荷Location and scale equivariance, 位置尺度同变性Location equivariance, 位置同变性Location invariance, 位置不变性Location scale family, 位置尺度族Log rank 
test, 时序检验Logarithmic curve, 对数曲线Logarithmic normal distribution, 对数正态分布Logarithmic scale, 对数尺度Logarithmic transformation, 对数变换Logic check, 逻辑检查Logistic distribution, 逻辑斯特分布Logit transformation, Logit转换LOGLINEAR, 多维列联表通用模型Lognormal distribution, 对数正态分布Lost function, 损失函数Low correlation, 低度相关Lower limit, 下限Lowest-attained variance, 最小可达方差LSD, 最小显著差法的简称Lurking variable, 潜在变量MMain effect, 主效应Major heading, 主辞标目Marginal density function, 边缘密度函数Marginal probability, 边缘概率Marginal probability distribution, 边缘概率分布Matched data, 配对资料Matched distribution, 匹配过分布Matching of distribution, 分布的匹配Matching of transformation, 变换的匹配Mathematical expectation, 数学期望Mathematical model, 数学模型Maximum L-estimator, 极大极小L 估计量Maximum likelihood method, 最大似然法Mean, 均数Mean squares between groups, 组间均方Mean squares within group, 组内均方Means (Compare means), 均值-均值比较Median, 中位数Median effective dose, 半数效量Median lethal dose, 半数致死量Median polish, 中位数平滑Median test, 中位数检验Minimal sufficient statistic, 最小充分统计量Minimum distance estimation, 最小距离估计Minimum effective dose, 最小有效量Minimum lethal dose, 最小致死量Minimum variance estimator, 最小方差估计量MINITAB, 统计软件包Minor heading, 宾词标目Missing data, 缺失值Model specification, 模型的确定Modeling Statistics , 模型统计Models for outliers, 离群值模型Modifying the model, 模型的修正Modulus of continuity, 连续性模Morbidity, 发病率Most favorable configuration, 最有利构形Multidimensional Scaling (ASCAL), 多维尺度/多维标度Multinomial Logistic Regression , 多项逻辑斯蒂回归Multiple comparison, 多重比较Multiple correlation , 复相关Multiple covariance, 多元协方差Multiple linear regression, 多元线性回归Multiple response , 多重选项Multiple solutions, 多解Multiplication theorem, 乘法定理Multiresponse, 多元响应Multi-stage sampling, 多阶段抽样Multivariate T distribution, 多元T分布Mutual exclusive, 互不相容Mutual independence, 互相独立NNatural boundary, 自然边界Natural dead, 自然死亡Natural zero, 自然零Negative correlation, 负相关Negative linear correlation, 负线性相关Negatively skewed, 负偏Newman-Keuls method, q检验NK method, q检验No statistical significance, 无统计意义Nominal variable, 名义变量Nonconstancy of variability, 变异的非定常性Nonlinear regression, 非线性相关Nonparametric statistics, 非参数统计Nonparametric test, 非参数检验Nonparametric tests, 非参数检验Normal deviate, 正态离差Normal distribution, 正态分布Normal equation, 正规方程组Normal ranges, 正常范围Normal value, 正常值Nuisance parameter, 多余参数/讨厌参数Null hypothesis, 无效假设Numerical variable, 数值变量OObjective function, 目标函数Observation unit, 观察单位Observed value, 观察值One sided test, 单侧检验One-way analysis of variance, 单因素方差分析Oneway ANOVA , 单因素方差分析Open sequential trial, 开放型序贯设计Optrim, 优切尾Optrim efficiency, 优切尾效率Order statistics, 顺序统计量Ordered categories, 有序分类Ordinal logistic regression , 序数逻辑斯蒂回归Ordinal variable, 有序变量Orthogonal basis, 正交基Orthogonal design, 正交试验设计Orthogonality conditions, 正交条件ORTHOPLAN, 正交设计Outlier cutoffs, 离群值截断点Outliers, 极端值OVERALS , 多组变量的非线性正规相关Overshoot, 迭代过度PPaired design, 配对设计Paired sample, 配对样本Pairwise slopes, 成对斜率Parabola, 抛物线Parallel tests, 平行试验Parameter, 参数Parametric statistics, 参数统计Parametric test, 参数检验Partial correlation, 偏相关Partial regression, 偏回归Partial sorting, 偏排序Partials residuals, 偏残差Pattern, 模式Pearson curves, 皮尔逊曲线Peeling, 退层Percent bar graph, 百分条形图Percentage, 百分比Percentile, 百分位数Percentile curves, 百分位曲线Periodicity, 周期性Permutation, 排列P-estimator, P估计量Pie graph, 饼图Pitman estimator, 皮特曼估计量Pivot, 枢轴量Planar, 平坦Planar assumption, 平面的假设PLANCARDS, 生成试验的计划卡Point estimation, 点估计Poisson distribution, 泊松分布Polishing, 平滑Polled standard deviation, 合并标准差Polled variance, 合并方差Polygon, 多边图Polynomial, 多项式Polynomial curve, 多项式曲线Population, 总体Population attributable risk, 人群归因危险度Positive correlation, 正相关Positively skewed, 正偏Posterior 
distribution, 后验分布Power of a test, 检验效能Precision, 精密度Predicted value, 预测值Preliminary analysis, 预备性分析Principal component analysis, 主成分分析Prior distribution, 先验分布Prior probability, 先验概率Probabilistic model, 概率模型probability, 概率Probability density, 概率密度Product moment, 乘积矩/协方差Profile trace, 截面迹图Proportion, 比/构成比Proportion allocation in stratified random sampling, 按比例分层随机抽样Proportionate, 成比例Proportionate sub-class numbers, 成比例次级组含量Prospective study, 前瞻性调查Proximities, 亲近性Pseudo F test, 近似F检验Pseudo model, 近似模型Pseudosigma, 伪标准差Purposive sampling, 有目的抽样QQR decomposition, QR分解Quadratic approximation, 二次近似Qualitative classification, 属性分类Qualitative method, 定性方法Quantile-quantile plot, 分位数-分位数图/Q-Q图Quantitative analysis, 定量分析Quartile, 四分位数Quick Cluster, 快速聚类RRadix sort, 基数排序Random allocation, 随机化分组Random blocks design, 随机区组设计Random event, 随机事件Randomization, 随机化Range, 极差/全距Rank correlation, 等级相关Rank sum test, 秩和检验Rank test, 秩检验Ranked data, 等级资料Rate, 比率Ratio, 比例Raw data, 原始资料Raw residual, 原始残差Rayleigh's test, 雷氏检验Rayleigh's Z, 雷氏Z值Reciprocal, 倒数Reciprocal transformation, 倒数变换Recording, 记录Redescending estimators, 回降估计量Reducing dimensions, 降维Re-expression, 重新表达Reference set, 标准组Region of acceptance, 接受域Regression coefficient, 回归系数Regression sum of square, 回归平方和Rejection point, 拒绝点Relative dispersion, 相对离散度Relative number, 相对数Reliability, 可靠性Reparametrization, 重新设置参数Replication, 重复Report Summaries, 报告摘要Residual sum of square, 剩余平方和Resistance, 耐抗性Resistant line, 耐抗线Resistant technique, 耐抗技术R-estimator of location, 位置R估计量R-estimator of scale, 尺度R估计量Retrospective study, 回顾性调查Ridge trace, 岭迹Ridit analysis, Ridit分析Rotation, 旋转Rounding, 舍入Row, 行Row effects, 行效应Row factor, 行因素RXC table, RXC表SSample, 样本Sample regression coefficient, 样本回归系数Sample size, 样本量Sample standard deviation, 样本标准差Sampling error, 抽样误差SAS(Statistical analysis system ), SAS统计软件包Scale, 尺度/量表Scatter diagram, 散点图Schematic plot, 示意图/简图Score test, 计分检验Screening, 筛检SEASON, 季节分析Second derivative, 二阶导数Second principal component, 第二主成分SEM (Structural equation modeling), 结构化方程模型Semi-logarithmic graph, 半对数图Semi-logarithmic paper, 半对数格纸Sensitivity curve, 敏感度曲线Sequential analysis, 贯序分析Sequential data set, 顺序数据集Sequential design, 贯序设计Sequential method, 贯序法Sequential test, 贯序检验法Serial tests, 系列试验Short-cut method, 简捷法Sigmoid curve, S形曲线Sign function, 正负号函数Sign test, 符号检验Signed rank, 符号秩Significance test, 显著性检验Significant figure, 有效数字Simple cluster sampling, 简单整群抽样Simple correlation, 简单相关Simple random sampling, 简单随机抽样Simple regression, 简单回归simple table, 简单表Sine estimator, 正弦估计量Single-valued estimate, 单值估计Singular matrix, 奇异矩阵Skewed distribution, 偏斜分布Skewness, 偏度Slash distribution, 斜线分布Slope, 斜率Smirnov test, 斯米尔诺夫检验Source of variation, 变异来源Spearman rank correlation, 斯皮尔曼等级相关Specific factor, 特殊因子Specific factor variance, 特殊因子方差Spectra , 频谱Spherical distribution, 球型正态分布Spread, 展布SPSS(Statistical package for the social science), SPSS统计软件包Spurious correlation, 假性相关Square root transformation, 平方根变换Stabilizing variance, 稳定方差Standard deviation, 标准差Standard error, 标准误Standard error of difference, 差别的标准误Standard error of estimate, 标准估计误差Standard error of rate, 率的标准误Standard normal distribution, 标准正态分布Standardization, 标准化Starting value, 起始值Statistic, 统计量Statistical control, 统计控制Statistical graph, 统计图Statistical inference, 统计推断Statistical table, 统计表Steepest descent, 最速下降法Stem and leaf display, 茎叶图Step factor, 步长因子Stepwise regression, 逐步回归Storage, 存Strata, 层(复数)Stratified sampling, 分层抽样Stratified sampling, 分层抽样Strength, 强度Stringency, 严密性Structural relationship, 结构关系Studentized 
residual, 学生化残差/t化残差Sub-class numbers, 次级组含量Subdividing, 分割Sufficient statistic, 充分统计量Sum of products, 积和Sum of squares, 离差平方和Sum of squares about regression, 回归平方和Sum of squares between groups, 组间平方和Sum of squares of partial regression, 偏回归平方和Sure event, 必然事件Survey, 调查Survival, 生存分析Survival rate, 生存率Suspended root gram, 悬吊根图Symmetry, 对称Systematic error, 系统误差Systematic sampling, 系统抽样TTags, 标签Tail area, 尾部面积Tail length, 尾长Tail weight, 尾重Tangent line, 切线Target distribution, 目标分布Taylor series, 泰勒级数Tendency of dispersion, 离散趋势Testing of hypotheses, 假设检验Theoretical frequency, 理论频数Time series, 时间序列Tolerance interval, 容忍区间Tolerance lower limit, 容忍下限Tolerance upper limit, 容忍上限Torsion, 扰率Total sum of square, 总平方和Total variation, 总变异Transformation, 转换Treatment, 处理Trend, 趋势Trend of percentage, 百分比趋势Trial, 试验Trial and error method, 试错法Tuning constant, 细调常数Two sided test, 双向检验Two-stage least squares, 二阶最小平方Two-stage sampling, 二阶段抽样Two-tailed test, 双侧检验Two-way analysis of variance, 双因素方差分析Two-way table, 双向表Type I error, 一类错误/α错误Type II error, 二类错误/β错误UUMVU, 方差一致最小无偏估计简称Unbiased estimate, 无偏估计Unconstrained nonlinear regression , 无约束非线性回归Unequal subclass number, 不等次级组含量Ungrouped data, 不分组资料Uniform coordinate, 均匀坐标Uniform distribution, 均匀分布Uniformly minimum variance unbiased estimate, 方差一致最小无偏估计Unit, 单元Unordered categories, 无序分类Upper limit, 上限Upward rank, 升秩VVague concept, 模糊概念Validity, 有效性VARCOMP (Variance component estimation), 方差元素估计Variability, 变异性Variable, 变量Variance, 方差Variation, 变异Varimax orthogonal rotation, 方差最大正交旋转Volume of distribution, 容积WW test, W检验Weibull distribution, 威布尔分布Weight, 权数Weighted Chi-square test, 加权卡方检验/Cochran检验Weighted linear regression method, 加权直线回归Weighted mean, 加权平均数Weighted mean square, 加权平均方差Weighted sum of square, 加权平方和Weighting coefficient, 权重系数Weighting method, 加权法W-estimation, W估计量W-estimation of location, 位置W估计量Width, 宽度Wilcoxon paired test, 威斯康星配对法/配对符号秩和检验Wild point, 野点/狂点Wild value, 野值/狂值Winsorized mean, 缩尾均值Withdraw, 失访YYouden's index, 尤登指数ZZ test, Z检验Zero correlation, 零相关Z-transformation, Z变换。
English Terminology for Algebra (代数英语)
(0,2) 插值||(0,2) interpolation
0#||zero-sharp; 读作零井或零开。
0+||zero-dagger; 读作零正。
1-因子||1-factor
3-流形||3-manifold; 又称“三维流形”。
AIC准则||AIC criterion, Akaike information criterion
Ap 权||Ap-weight
A稳定性||A-stability, absolute stability
A最优设计||A-optimal design
BCH 码||BCH code, Bose-Chaudhuri-Hocquenghem code
BIC准则||BIC criterion, Bayesian modification of the AIC
BMOA函数||analytic function of bounded mean oscillation; 全称“有界平均振动解析函数”。
BMO鞅||BMO martingale
BSD猜想||Birch and Swinnerton-Dyer conjecture; 全称“伯奇与斯温纳顿-戴尔猜想”。
B样条||B-spline
C*代数||C*-algebra; 读作“C星代数”。
C0 类函数||function of class C0; 又称“连续函数类”。
CAT准则||CAT criterion, criterion for autoregressive
CM域||CM field
CN 群||CN-group
CW 复形的同调||homology of CW complex
CW复形||CW complex
CW复形的同伦群||homotopy group of CW complexes
CW剖分||CW decomposition
Cn 类函数||function of class Cn; 又称“n次连续可微函数类”。
Cp统计量||Cp-statistic
The Sudbury Neutrino Observatory
a r X i v :n u c l -e x /9910016v 2 3 N o v 1999The Sudbury Neutrino ObservatoryThe SNO CollaborationJ.Boger,R.L.Hahn,J.K.RowleyChemistry Department,Brookhaven National Laboratory,Upton,NY 11973-5000USA 1A.L.Carter,B.Hollebone,D.Kessler Carleton University,Ottawa,Ontario K1S 5B6CANADA 2I.Blevis,F.Dalnoki-Veress,A.DeKok,J.Farine,3D.R.Grant,C.K.Hargrove,berge,I.Levine,K.McFarlane,H.Mes,A.T.Noble,V.M.Novikov,M.O’Neill,M.Shatkay,C.Shewchuk,D.Sinclair Centre for Research in Particle Physics,Herzberg Laboratory,Carleton University,Ottawa,Ontario K1S 5B6CANADA 2E.T.H.Clifford,R.Deal,E.D.Earle,E.Gaudette,ton,B.Sur Chalk River Laboratories,AECL Research,Chalk River,Ontario K0J 1J0CANADA 2J.Bigu,J.H.M.Cowan,D.L.Cluff,E.D.Hallman,R.U.Haq,J.Hewett,J.G.Hykawy,G.Jonkmans,4R.Michaud,A.Roberge,J.Roberts,E.Saettler,M.H.Schwendener,H.Seifert,D.Sweezey,R.Tafirout,C.J.VirtueDepartment of Physics and Astronomy,Laurentian University,Sudbury,OntarioP3E 2C6CANADA 2D.N.Beck,Y.D.Chan,X.Chen,M.R.Dragowsky,21F.W.Dycus,J.Gonzalez,M.C.P.Isaac,5Y.Kajiyama,G.W.Koehler,K.T.Lesko,M.C.Moebus,E.B.Norman,C.E.Okada,A.W.P.Poon,P.Purgalis,A.Schuelke,A.R.Smith,R.G.Stokstad,S.Turner,6I.Zlimen 7Lawrence Berkeley National Laboratory,Berkeley,CA94720USA1J.M.Anaya,T.J.Bowles,S.J.Brice,Ernst-Ingo Esch, M.M.Fowler,Azriel Goldschmidt,5A.Hime,A.F.McGirt,ler,W.A.Teasdale,J.B.Wilhelmy,J.M.WoutersLos Alamos National Laboratory,Los Alamos,NM87545USA1 J.D.Anglin,M.Bercovitch,W.F.Davidson,R.S.Storey18 National Research Council of Canada,Ottawa,Ontario K1A0R6CANADA2 S.Biller,R.A.Black,R.J.Boardman,M.G.Bowler,J.Cameron,B.Cleveland,A.P.Ferraris,G.Doucas,H.Heron, C.Howard,N.A.Jelley,A.B.Knox,y,W.Locke,J.Lyon,S.Majerus,M.Moorhead,M.Omori,N.W.Tanner,R.K.Taplin,M.Thorman,D.L.Wark,N.West,J.C.Barton,P.T.TrentNuclear and Astrophysics Laboratory,Oxford University,Keble Road,Oxford,OX13RH,UK8R.Kouzes,10M.M.Lowry9Department of Physics,Princeton University,Princeton,NJ08544USAA.L.Bell,E.Bonvin,11M.Boulay,M.Dayon,F.Duncan, L.S.Erhardt,H.C.Evans,G.T.Ewan,R.Ford,12A.Hallin, A.Hamer,P.M.Hart,P.J.Harvey,D.Haslip,C.A.W.Hearns,R.Heaton,J.D.Hepburn,C.J.Jillings,E.P.Korpach,H.W.Lee,J.R.Leslie,M.-Q.Liu,H.B.Mak,A.B.McDonald,J.D.MacArthur,W.McLatchie,B.A.Moffat,S.Noel,T.J.Radcliffe,13B.C.Robertson,P.Skensved,R.L.Stevenson,X.ZhuDepartment of Physics,Queen’s University,Kingston,Ontario K7L3N6CANADA2S.Gil,14J.Heise,R.L.Helmer,15R.J.Komar,C.W.Nally,H.S.Ng,C.E.WalthamDepartment of Physics and Astronomy,University of British Columbia,Vancouver,BC V6T2A6CANADA2R.C.Allen,16G.B¨u hler,17H.H.Chen18 Department of Physics,University of California,Irvine,CA92717USAG.Aardsma,T.Andersen,19K.Cameron,M.C.Chon,R.H.Hanson,P.Jagam,J.Karn,w,R.W.Ollerhead,J.J.Simpson,N.Tagg,J.-X.WangPhysics Department,University of Guelph,Guelph,Ontario N1G2W1CANADA2 C.Alexander,E.W.Beier,J.C.Cook,D.F.Cowen,E.D.Frank, W.Frati,P.T.Keener,J.R.Klein,G.Mayers,D.S.McDonald, M.S.Neubauer,F.M.Newcomer,R.J.Pearce,R.G.Van de Water,R.Van Berg,P.Wittich Department of Physics and Astronomy,University of Pennsylvania,Philadelphia,PA19104-6396,USA1Q.R.Ahmad,J.M.Beck,20M.C.Browne,21T.H.Burritt, P.J.Doe,C.A.Duba,S.R.Elliott,J.E.Franklin, J.V.Germani,22P.Green,15A.A.Hamian,K.M.Heeger,M.Howe,R.Meijer Drees,A.Myers,R.G.H.Robertson, M.W.E.Smith,T.D.Steiger,T.Van Wechel,J.F.Wilkerson Nuclear Physics Laboratory and Department of Physics,University of Washington,P.O.Box351560,Seattle,WA98195USA11IntroductionThe Sudbury Neutrino Observatory(SNO)has been constructed to study the fundamental properties of neutrinos,in particular the 
mass and mixing pa-rameters.Neutrino oscillations between the electron-flavor neutrino,νe,and another neutrinoflavor have been proposed[1]as an explanation of the ob-served shortfall in theflux of solarνe reaching the earth,as compared with theoretical expectations[2].SNO can test that hypothesis by measuring the flux ofνe which are produced in the sun,and comparing it to theflux of all activeflavors of solar neutrinos detected on earth in an appropriate energy interval.Observation of neutrinoflavor transformation through this compari-son would be compelling evidence of neutrino mass.Non-zero neutrino mass is evidence for physics beyond the Standard Model of fundamental particle interactions[3].The long distance to the sun makes the search for neutrino mass sensitiveto much smaller mass splittings than can be studied with terrestrial sources. Vacuum oscillations can change the ratio of neutral-current to charged-current interactions,produce spectral distortions,and introduce time dependence in the measured rates.Furthermore,the matter density in the sun is sufficiently large to enhance the effects of small mixing between electron neutrinos and mu or tau neutrinos.This matter,or MSW[4],effect also applies when so-lar neutrinos traverse the earth,and may cause distinctive time and spectral modulations of the signal ofνe in the SNO detector.Measurement of these effects,made possible by the high counting rate and separableνe–specific and flavor–independent responses of the SNO detector,will permit the determina-tion of unique mass and mixing parameters when combined with the existing results of the light-waterˇCerenkov detectors,37Cl,and71Ga solar neutrino experiments[5–7].The SNO experiment is unique in that it utilizes heavy water,D2O,in a spher-ical volume of one kilotonne as a target.The heavy water permits detection of neutrinos through the reactionsνx+e−→νx+e−(1)νe+d→e−+p+p(2)νx+d→νx+n+p(3) whereνx refers to any activeflavor of neutrino.Each of these interactions is detected when one or more electrons produceˇCerenkov light that impinges on a phototube array.The elastic scattering(ES)of electrons by neutrinos (Eq.1)is highly directional,and establishes the sun as the source of the de-tected neutrinos.The charged-current(CC)absorption ofνe on deuterons (Eq.2)produces an electron with an energy highly correlated with that of the neutrino.This reaction is sensitive to the energy spectrum ofνe and hence to deviations from the parent spectrum.The neutral-current(NC)disintegra-tion of the deuteron by neutrinos(Eq.3)is independent of neutrinoflavor and has a threshold of2.2MeV.To be detected,the resulting neutron must be absorbed,giving a6.25MeV photon for absorption on deuterium or photons totalling8.6MeV for absorption on35Cl with MgCl2added to the D2O.The photon subsequently Compton scatters,imparting enough energy to electrons to createˇCerenkov light.(Once the special–purpose neutral-current detectors have been installed,they will provide the primary neutron detection mecha-nism.See Section8for details.)Measurement of the rate of the NC reaction determines the totalflux of8B neutrinos,even if theirflavor has been trans-formed to another activeflavor(but not if to a sterile neutrino).The ability to measure the CC and NC reactions separately is unique to SNO and makes the interpretation of the results of the experiment independent of theoretical astrophysics calculations.All experiments performed to date have detected fewer solar neutrinos than are expected from standard solar 
models[8].There are strong hints that this is the result of neutrinoflavor transformation between the production point in the sun and the terrestrial detection point.However,these conclusions are based on reference to a calculated prediction.Through direct measurements of theνe–specific CC reaction and theflavor–independent NC reaction,the SNO detector will be thefirst experiment to make a solar–model–independent measurement of the solar neutrinoflux.SNO can make contributions,some of which are unique,in other areas of physics.An example of the latter is a search for the relic supernova neutrinos integrated over all past supernovae.For the relic supernova neutrinos,the interaction ofH O2roomFig.1.General Layout of the SNO Laboratory.A description of the electronic readout chain is given in Section6,followed by a description of the downstream data acquisition hardware and software in Section7.Separate3He neutral-current detectors,which will be installed at a later date in the D2O volume,are described in Section8.The detector con-trol and monitoring system is described in Section9.The calibration system, consisting of a variety of sources,a manipulator,controlling software,and the analysis of calibration data,is described in Section10.The extensive effort to maintain and monitor site cleanliness during and after construction is de-scribed in Section11.The large software package used to generate simulated data and analyze acquired data is described in Section12.Brief descriptions of the present detector status and future plans are in Section13.2Water SystemsThe SNO water system is comprised of two separate systems:one for the ul-trapure light water(H2O)and one for the heavy water(D2O).These systems are located underground near the detector.The source of water for the H2O system is a surface purification plant that produces potable water for the mine. Underground,the water is pretreated,purified and degassed to levels accept-able for the SNO detector,regassed with pure N2,andfinally cooled before it is put into the detector.Ultrapure water leaches out soluble components whenFig.2.The PMT support structure(PSUP)shown inside the SNO cavity,sur-rounding the acrylic vessel,with light water and heavy water volumes located as indicated.in contact with solid surfaces.It may also support biological activity on such surfaces or on suspended particles.The H2O is therefore deoxygenated and continuously circulated to remove ions,organics and suspended solids.Both liquids are also assayed continuously to monitor radioactive contaminants. 
Incoming potable water contains sand and silt particles,bacteria,algae,inor-ganic salts,organic molecules and gasses(N2,O2,CO2,Rn,etc.).After falling a total of6800feet,the water is supersaturated with air,sofirst it enters a deaerator tank(seefig.3)where it spends a few minutes so that some of the dissolved O2and N2comes out.It then passes into a multimediafilter con-2µµµ254 nm UV 10 CWaterTo CavityMnO column Degasser Charcoal Zeolite Softeners 10Filters Multimedia FilterChiller Filters Deaerator 10 tonne level control tank From Monitor Points From Cavity Reverse Osmosis Monitor Filters 3INCO the mine Tanks in EDTA Process Degasser UV 185 nm Ion Columns Exchange N Re-Filters 0.12gasser Fig.3.The light water system.sisting of a bed of sand and charcoal to remove large particles followed by a 10-micron filter to remove fine particles.The water then enters the laboratory water utility room.A charcoal filter is used to reduce the levels of organic contaminants and to convert free chlorine into chloride since chlorine would damage the reverse osmosis (RO)unit further downstream.After the charcoal filter,the water passes into softeners consisting of two 0.14m 3bottles containing strong-base Purolite C100-E cation exchange resin.Here divalent ions such as Ca and Mg are exchanged for Na ions.The softeners also remove iron and stabilize colloidal particles so they do not coagulate when concentrated by the RO membranes.A 9.1%solution of sodium ethylene diamine tetraacetate (EDTA)and sodium bisulphate is injected at 9ml/min.to complex various ion species (e.g.,Al)and to reduce O and Cl into a form that can be rejected by the RO.Then two filter units,each containing twelve 25-cm long,3-µm filters,remove suspended particles.A silt density index test is done at this point on the running system.The reverse-osmosis process is the workhorse of the purification system.Twelve spiral-wound thin film composite (polyamide on polysulfone)membranes each5.6m 2in area reduce inorganic salt levels by a factor of at least 20and reduce organics and hence the EDTA and particles larger than molecular weight 200with greater than 99%efficiency.The RO performance is monitored online by percentage rejection (typically 97.5%)and conductivity monitors.After the detector is full the RO does not have to be used again unless SNO requires substantial amounts of make-up water in the detector.After the RO,the water enters a 185-nm UV unit consisting of mercury lampsand quartz sleeves where any remaining organic compounds are broken apart into ionic form.The water next goes to an ion-exchange unit that removes remaining dissolved ionized impurities left by the RO.These are two sets of six bottles in parallel containing0.1m3of Purolite nuclear grade NWR-37 mixed(cation and anion)bed resins.The exiting water has a resistivity of18.2MΩ-cm.A custom–designed Process Degasser(PD)is used to reduce the O2and Rn levels by factors of about1000and50,respectively,in the water[9].The PD consists of a large electropolished stainless steel vessel(81cm diameter by6m high)containing shower heads,spherical polypropylene packing and heater el-ements and pumped with a mechanical booster pump(Edwards EH500A) backed by a four-stage positive displacement rotary pump(Edwards QDP80 Drystar).Vacuum is maintained at20torr and water vapourflow rate at one kg/hr.The PD removes all gases from the water and can cause,by dif-fusion,low pressures inside the underwater PMT connectors.Low pressure compromises the breakdown voltage of the connectors,and in a 
successful ef-fort to reduce breakdowns that occurred with degassed water,the water was regassed with pure nitrogen to atmospheric pressure at the2000-m depth us-ing a gas permeable membrane unit.This unit is followed by0.1-µmfilters to remove particulates.Then a254-nm UV unit is used to kill bacteria.Finally a chiller cools the water to10◦C before water is put into the detector at a rate of150l/min.The water mass between the AV and the PMTs is about1700tonnes and the mass between the PMTs and the cavity walls is5700tonnes.Water enters the detector between the AV and PMTs.Because this region has to be cleaner than the region outside,a99.99%leak-tight plastic barrier seals the back of the PMTs.Water in the outer region is dirtier due to its large content of submersed material(cables,steel support,cavity liner,etc.).Water is drawn from this region back to the utility room and into a recirculation loop.This loop consists of a ten-tonne polypropylene tank(used by the control loop to maintain constant water level in the detector),thefirst UV unit,the ion exchange columns,the process degasser,the N2regassing unit,the0.1-µm filters,the second UV unit and the chiller.The recirculated light water is assayed regularly for pH,conductivity,turbidity,anions,cations,suspended solids,dissolved gases,and radioactivity.This is accomplished by means of six sample pipes in the H2O volume.The heavy water system is designed to perform the following functions:•Receive the D2O and make an initial purification to reduce the amount of contamination reaching the main system;•Purify the D2O with or without the MgCl2additive;•Assay the D2O or D2O brine to make an accurate background determina-AC Absorption Columns (MnOx beads)FR 0.1 micron filters Monitor Degasser Process Degasser MDG PDG SUF Seeded Ultrafiltration unit UFR Ultrafiltration unit ROF Reverse Osmosis Filter Fig.4.The heavy water system.tion;•Manage the addition and removal of the MgCl 2additive on a time scale short relative to the expected running times;A diagram of the system is shown in Fig.4.The source of D 2O was the Ontario Hydro Bruce heavy water plant beside Lake Huron.The D 2O was trucked to the SNO site and transported underground.It was cleaned and stored temporarily before it was put into the acrylic vessel (AV).Heavy water delivered to the lab was first passed through ion-exchange columns to reduce its ionic content,in particular its K content.The D 2O then went into a large polypropylene-lined tank.During the filling of the H 2O and D 2O,the D 2O was filled at such a rate as to maintain zero pressure differential across the bottom of the AV,which assured that the vessel surface was generally in compression.With the SNO detector filled,the D 2O is recirculated to maintain its purity.This is accomplished by an RO system consisting of five separate pressure housing units and the membranes contained within.Two large (22cm diam-eter and 5.5m length)units are used in parallel for the purification of the recirculating heavy water.The concentrate stream containing the radioactive ions passes through adsorption columns to decrease the Th,Ra and Pb con-centrations.Table1Isotopic composition of SNO heavy water. 
Isotope Isotope2H17O0.097(10)µCi/kg0.71(7)% 1H16Oadsorber for Ra with an extraction efficiency of about90%[11].Water is passed at20l/min.through a1-l column containing the MnO x which is removed off-line for counting of the Ra daughters using an electrostatic device in which charged ions are deposited on the surface of a silicon detector[12].The number of222Rn(3.8-d half-life)is measured in six tonnes of liquid taken from one of six regions in the H2O or one of six regions in the D2O.The six tonnes of water are put through a small vacuum degasser[13]at20l/min. The gasses which are removed from the water are primarily N2,O2,Ar,CO2 and a few hundred atoms of222Rn,as well as10ml/min in the form of water vapour.Rn is subsequently frozen out in a U-shaped trap cooled with liquid nitrogen(-192C)and transferred to a ZnS coated scintillation cell[14]for counting.To enhance the neutral-current detection,MgCl2will be added to the water to take advantage of the larger neutron-capture cross section of Cl relative to deuterium.A concentration of approximately0.2%will be used.A D2O brine solution that has been purified prior to transport underground is put into a large polypropylene lined tank.When it is time to add the MgCl2to the AV, this tank will slowly be emptied.At the same time,an equivalent volume of salt-free D2O will be taken out of the AV and put into a second polypropylene lined tank.To take the MgCl2out of the AV,the third RO unit will be used in combination with the two main ones to desalinate the water to about100parts per million (ppm).In a second desalination pass,a fourth,smaller RO unit will be added to the process to desalinate the D2O to about one ppm.3Acrylic VesselThe D2O containment vessel must meet diverse requirements,some of which present opposing design constraints.The primary design criteria for the con-tainment vessel are:•Isolate1000tonnes of D2O from surrounding H2O.•Maintain structural integrity and performance over ten years while im-mersed in ultrapure D2O and H2O and subjected to the seismic activity expected in an operating mine.•Minimize the total mass of radioactive impurities.•Maximize optical performance.•Design for construction in the mine.Fig.5.The SNO acrylic containment vessel.The design criteria listed above resulted in the containment vessel shown in Fig.5.A12meter diameter sphere was chosen as the optimum shape for the contain-ment vessel.A sphere has the largest volume-to-surface ratio,and optimum stress distribution.This reduces the mass of acrylic needed,and hence the amount of radioactivity.The sphere is suspended from ten loops of rope,which are attached to the vessel by means of rope grooves located around the equa-tor of the sphere.The choice of rope suspension was driven by radioactivity and optical considerations:a rope under tension offers excellent load bearing capacity for a minimum mass,and it presents the minimum obstruction of the ˇCerenkov light that neutrino interactions produce.To allow the insertion of calibration devices and the installation of the3He neutron detector strings,the sphere is provided with a1.5-m diameter by6.8-m tall chimney.Piping also enters and exits the vessel through this chimney. 
Forfilling and purification recirculation requirements7.6-cm diameter pipes introduce D2O at the bottom of the vessel and remove it from the top of the chimney.D2O may be extracted via3.8-cm diameter pipes from four different levels in the vessel and one in the chimney for measurements of both chemical and radioactive water quality.A total of122ultraviolet transmitting(UVT)acrylic panels were used in the construction of the spherical part of the containment vessel.These panels were nominally5.6cm thick,with the exception of the ten equatorial panels containing grooves for the suspension ropes.These rope groove panels were nominaly11.4cm thick.Acrylic sheet was chosen as the construction material for a number of reasons.A simple hydrocarbon,UVT acrylic can be manu-factured with very low intrinsic radioactivity.The light transmission of UVT acrylic matches reasonably the spectral response of the PMTs.Cast acrylic sheet is commercially available in sizes acceptable for mine transportation.It is readily thermoformed into spherical segments and is easily machined.It is capable of being bonded together with bond strengths close to that of the parent material.Theflat cast acrylic panels werefirst thermoformed into spherical sections by slump forming into a female mold formed from polished aluminum plate with a radius of6.06m(the outer radius of the sphere).The panels were then machined to the correct shape on afive-axis milling machine while at a constant temperature of21◦C.To avoid contamination,only clean water was used as a lubricant during machining operations.Afinal check of the dimensions was made by“dry assembling”the panels on a special framework to formfirst the upper hemisphere,then the lower hemisphere.This ensured that all the panels could befitted together within the specified tolerance of±25mm on the spherical curvature required by the buckling criteria.During construction the curvature was typically maintained to±6mm.The panels were bonded together using a partially polymerized adhesive that cures at room temperature.This adhesive was formulated by Reynolds Poly-mer Technology Inc.,the fabricators of the acrylic vessel.The bonds are ap-proximately3mm thick and special allowance has to be made for the20% shrinkage during polymerization of the adhesive.It took considerable R&D and construction time to obtain over500m of adequate quality bonds.All bond strengths exceeded27.5MPa.Prior to bonding the main vessel the bond-ing techniques were prototyped by building two small(seven-panel)sections of the sphere.First the upper hemisphere of the vessel was built and the chimney attached. 
The chimney of the containment vessel was made offive cylinders of ultraviolet absorbing(UVA)cast acrylic to reduce the“piping”of unwantedˇCerenkov light from these regions into the inner volume of the detector.The hemisphere was then raised into itsfinal position and suspended on its ten supporting ropes.Vectranfibers were chosen as the material for the sus-pension ropes,not only for their low radioactivity but also for their ability to retain strength during long term exposure to ultrapure water.A total of 300kg of Vectranfilament was supplied by Hoechst-Celanese Company.The filaments were twisted into rope by Yale Cordage.Due to the large surface area of thefilaments this represents a significant contamination potential due to dust deposition.Therefore the twisting machines were carefully cleaned and tented in plastic prior to use.Each of the ten loops supporting the vessel is approximately30m long and24.4mm in diameter.In normal operation, each loop is continuously loaded tofive tonnes,or approximately10%of its ultimate strength of500,000N.Working from a suspended construction platform,subsequent rings of the lower hemisphere were constructed and attached to the hanging upper hemi-sphere.As construction progressed the platform was lowered until thefinal “south pole”disk was installed.The principal features of the containment vessel are listed in Table2.If acrylic is subject to excessive stress for extended periods of time it will develop crazing cracks which eventually will lead to premature failure.The ten-year design life of the vessel requires that the long-term tensile stresses not exceed4MPa.In order to reduce further the tensile stresses,it was decided to place the vessel in compression during normal operation by adjusting the H2O level with respect to the D2O level.Optimization of the design was car-ried out with the ANSYS[16]finite-element analysis code.The structure was studied under a variety of simulated conditions,both normal and abnormal (e.g.,with a broken suspension rope).For all these studies it was assumed that the mechanical properties of the bonds were identical to that of the acrylic,Table2Principal features of the containment vessel(all numbers are at20◦C).Capacity of sphere12.01mNominal wall thickness30.0tonnesCapacity of chimney(normal operation)1.46mChimney height2.53tonnesBulk absorption coeff.of light in acrylic.04cm−1@360nm79MPaTensile elongation3.5GPaCompressive deformation1.22%an assumption supported by measurements of bonded test specimens.The Polycast Corp.measured mechanical properties of the acrylic panels to insure that each sheet met design specifications for thickness and mechanical integrity.The averages offive measurements are listed in Table3.The low percentage of residual monomer listed in the table indicates that the polymer-ization process is essentially complete and that the mechanical properties of the acrylic will not change.Stress levels in bonded panels were also recorded by positioning crossed po-larisers on each side of the bond and photographing fringes.At the end,the vessel was successfully proof-tested by reducing the internal pressure7kPa below ambient pressure to subject the sphere to buckling forces and by pres-surizing it to14kPa above ambient pressure to subject all the bonds to tensile stresses.The radioactive and optical requirements for the acrylic were established by Monte Carlo simulation of the detector.To ensure that the acrylic wouldmeet the requirements of the detector,the radioactive properties were mea-sured throughout 
the supply of materials by means of neutron activation, mass spectrometry,and alpha spectroscopy.Concentrations of Th and U in each acrylic sample were measured to be less than the specified1.1pg/g each. For the eleven ropes produced,twelve2.5-kg samples were taken from the beginning and end of a production run and between each rope.Direct gamma counting yielded upper limits of200pg/g Th and U,in agreement with the average value of small-sample neutron-activation results.Optical absorption coefficients for each production batch of acrylic sheets were measured for samples cut from the sheets.Over300measurements were per-formed.A usefulfigure of merit is the ratio of the light detected with acrylic to the light detected without acrylic between300nm and440nm at normal incidence.Thefigure of merit for the ten thicker equatorial rope groove panels was calculated from the absorption coefficient as if they were5.6cm thick. For the170sheets manufactured the averagefigure of merit is0.73.4Photomultiplier TubesThe primary design considerations for the PMT system are•High photon detection efficiency,•Minimal amount of radioactivities in all components,<120ng/g U,< 90ng/g Th,<0.2mg/g K,•Low failure rate for a10-year lifespan submerged in ultrapure water at a pressure of200kPa and for the seismic activity expected at the SNO site,•Fast anode pulse rise time and fall time and low photoelectron transit time spread,for a single-photoelectron timing resolution standard deviation< 1.70ns,•Low dark current noise rate,<8kHz,at a charge gain of107,•Operating voltage less than3000V,•Reasonable charge resolution,>1.25peak to valley,•Low prepulse,late-pulse and after-pulse fractions,<1.5%,•Low sensitivity of PMT parameters to external magneticfield:at100mG, less than10%gain reduction and less than20%timing resolution degrada-tion.Raw materials from the manufacturers of the PMT components,bases,cables, and housings were assayed for radioactivity[17].The leach rates of different types of glass and plastic were also measured.The photomultiplier tubes(PMTs)are immersed in ultrapure water to a max-imum depth of22m,corresponding to a maximum water pressure of200kPa。
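Because the text extraction above garbled the inline equations of Section 1, the three solar-neutrino detection reactions, together with the neutron-capture signatures quoted there for the neutral-current channel, are restated below for reference. This is purely a typographical restatement of what the excerpt already says, not added physics.

    % Elastic scattering (ES): any active flavor; highly directional
    \nu_x + e^- \rightarrow \nu_x + e^-
    % Charged-current absorption (CC): \nu_e only; electron energy tracks the neutrino energy
    \nu_e + d \rightarrow e^- + p + p
    % Neutral-current deuteron breakup (NC): flavor independent; 2.2 MeV threshold
    \nu_x + d \rightarrow \nu_x + n + p
    % NC neutron signature: a 6.25 MeV gamma from capture on deuterium, or photons
    % totalling 8.6 MeV from capture on ^{35}\mathrm{Cl} when MgCl_2 is added to the D_2O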
Spotlight SAR data focusing based on a two-step processing approach
Spotlight SAR Data Focusing Based on a Two-StepProcessing ApproachRiccardo Lanari,Senior Member,IEEE,Manlio Tesauro,Eugenio Sansosti,Member,IEEE,and Gianfranco FornaroAbstract—We present a new spotlight SAR data-focusing algo-rithm based on a two-step processing strategy that combines the advantages of two commonly adopted processing approaches:the efficiency of SPECAN algorithms and the precision of stripmap fo-cusing techniques.The first step of the proposed algorithm imple-ments a linear and space-invariant azimuth filtering that is carried out via a deramping-based technique representing a simplified ver-sion of the SPECAN approach.This operation allows us to perform a bulk azimuth raw data compression and to achieve a pixel spacing smaller than(or equal to)the expected azimuth resolution of the fully focused image.Thus,the azimuth spectral folding phenom-enon,typically affecting the spotlight data,is overcome,and the space-variant characteristics of the stripmap system transfer func-tion are preserved.Accordingly,the residual and precise focusing of the SAR data is achieved by applying a conventional stripmap processing procedure requiring a minor modification and imple-mented in the frequency domain.The extension of the proposed technique to the case of high bandwidth transmitted chirp signals is also discussed.Experiments carried out on real and simulated data confirm the validity of the presented approach,which is mainly focused on spaceborne systems.Index Terms—Raw data focusing,spectral analysis(SPECAN) processing algorithms.I.I NTRODUCTIONS YNTHETIC aperture radar(SAR)spotlight mode allows the generation of microwave images with high geometric resolutions[1],[2].This result is achieved by steering the radar antenna beam,during the raw data acquisition interval,to al-ways illuminate the same area on the ground(spot).Accord-ingly,from each target located in the lighted area,a large number of backscattered echoes is received,and their coherent combina-tion allows to obtain the required azimuth resolution.Similarly, high resolution in the range direction is achieved by transmitting a high bandwidth chirp followed by a further data processing on each received echo.The first algorithms proposed for spotlight raw data pro-cessing are based on the similarity between spotlight SAR systems and computer tomography:they are usually referredManuscript received March30,2000;revised November29,2000.This work was partially supported by the Italian Space Agency,Roma,Italy.The spotlight SIR-C data have been processed at the Jet Propulsion Laboratory,Pasadena,CA. nari,E.Sansosti,and G.Fornaro are with the Istituto di Ricerca per l’Elettromagnetismo e i Componenti Elettronici(IRECE)328I-80124 Napoli,Italy,(e-mail:lanari@r.it;sansosti@r.it; fornaro@r.it).M.Tesauro is with the Dipartimento di Ingegneria dell’Innovazione,Univer-sitàdegli Studi di Lecce,I-73100Lecce,Italy(e-mail:manlio.tesauro@unile.it). 
Publisher Item Identifier S0196-2892(01)07625-2.to as polar format and convolution backprojection techniques [3]–[5].The former are computationally efficient but request a nontrivial interpolation step from a polar to rectangular grid: the image quality can be therefore affected by uncompensated range curvature effects[6]and interpolation errors.The latter allow overcoming these limitations but are generally inefficient if implementations on dedicated architectures are not consid-ered[5].Most recently,the development of spotlight raw data processing algorithms based on stripmap mode focusing techniques operating in the frequency domain has received increasing interest[7]–[10].Indeed,strip-mode processing procedures that are precise,efficient,and requiring less stringent approximations(compared to those involved in the tomographic approaches)are available[11]–[13].However,a relevant limitation to the straightforward application of these techniques to the spotlight data processing is represented by the fact that the raw signal azimuth bandwidth is,in the spotlight case,generally greater(often much greater)than the azimuth sampling frequency,referred to as pulse repetition frequency(prf).As a consequence,data processing carried out in the Fourier domain,as that involved in efficient strip-mode focusing,cannot be directly implemented on the full aperture because of the consequential azimuth spectrum folding effect.A way to overcome this limitation is based on partitioning the received signal into azimuth blocks whose block-bandwidths are smaller than the sampling frequency.Standard strip mode focusing techniques are then applied to each data block and the processed signals are then combined to generate the fully-resolved spotlight image[9],[10].Completely different pro-cessing solutions,based on a nontrivial reconstruction of the unfolded azimuth spectrum from the folded one associated to the raw signal,are also available[7],[8].On the other hand,a relatively simple spotlight processing al-gorithm can be implemented by applying the spectral analysis (SPECAN)technique[14].In this case,the received raw data are azimuth focused via the application of a deramping func-tion(a multiplication by a properly chosen chirp signal)fol-lowed by a final azimuth FT operation.The azimuth deramping factor is updated in range to allow for the compensation of the space-varying characteristic of the received data due to the(az-imuth)chirp rate range variation(focus depth).This procedure is attractive as far as computational efficiency and capability to overcome the azimuth spectral folding effect are concerned. 
However,its main limitation is represented by the lack of a pre-cise range cell migration(RCM)compensation that is often rel-evant in spotlight mode SAR systems due to high resolution re-quirements.0196–2892/01$10.00©2001IEEEIn this paper,we propose an alternative spotlight data fo-cusing technique based on decoupling the overall focusing oper-ation in two main steps.The key point of the proposed approach is to combine the advantages of efficient SPECAN and precise stripmap focusing approaches.In particular,the first processing step carries out a filtering operation aimed to achieve a bulk az-imuth raw data compression and an output pixel spacing smaller than(or equal to)the expected final azimuth resolution.Similar to SPECAN processing algorithms,this filtering operation is ef-ficiently carried out via a deramping-based approach[14]but, at variance of the former,the chirp rate of the deramping func-tion is kept constant and properly fixed at a convenient value. This is a key point in the proposed processing procedure that al-lows preserving the space variant characteristic of the residual system transfer function(STF).A discussion on the impact of the chirp rate selection on possible artifacts that may appear at the image borders is also provided.The second processing step carries out the residual focusing of the data via the use of a conventional stripmap processing pro-cedure implemented in the frequency domain and requiring only minor modifications in the available codes.This spectral do-main focusing operation is now possible because,following the bulk azimuth compression,the folding effect of the raw signal azimuth spectrum has been totally overcome.More precisely, this second(residual)processing step performs the precise RCM compensation,the data range compression and the residual az-imuth data compression;the latter accounts for higher order terms not compensated in the bulk azimuth processing step.The minor modifications to be performed in available stripmap pro-cessing codes are essentially a change of the azimuth filter func-tion,which accounts for the already compensated quadratic az-imuth phase term,and a change in the azimuth pixel spacing of the input data.It is worth noting the role that the bulk azimuth compres-sion operation plays in our approach to a preprocessing step that extends the processing capability of conventional stripmap fo-cusing procedures to spotlight data.In addition,the proposed processing algorithm does not require a specific manipulation and/or interpolation of the data,such as those necessary in az-imuth block divisions or in unfolded signal spectrum reconstruc-tion-based algorithms.Accordingly,we have finally achieved a processing procedure that is simple,precise and computation-ally efficient because it does not imply any significant increase of the raw data matrix dimensions and only includes fast Fourier transforms(FFTs)and matrix multiplication.Moreover,it can be easily extended to the case of high bandwidth transmitted sig-nals wherein spectral folding effects could appear in the range direction as well.In our case,the implemented solution is based again on a deramping approach,that is,at variance of conven-tional focusing techniques performed following the A/D con-version rather than before.A number of experiments carried out on a simulated and a real data set,the latter acquired by the experimental C-band sensor of the SIR-C system during the SIR-C/X-SAR mission in1994[9],demonstrate the validity of the presented approach.As a final remark,we 
want to stress that the presented anal-ysis is focused on spaceborne systems typically characterized by small squint angles[15]during the acquisition(often lessthan Fig.1.Spotlight system geometry.1-axis,assumed coincident with the platform flight path,is referred to as azimuthdirection are the(closest approach)target range and look angle,respectively.1We assume in the following that the sensor,mounted onboard a platform moving at the constantvelocity,transmits,attimes(1) whereangular carrierfrequency;chirp rate,beingis the systemwavelength,,the two-way antenna pattern factor,and being the azimuth dimension of the real,onboard antenna.Note that the assumed simplificationon allows avoiding the antenna footprint dependence on the platform location whose impact is 1Note that we have assumed the platform trajectory to be a straight line which is appropriate for airborne but not for spaceborne sensors.However,it can be shown that spaceborne data can be processed in the same manner as airborne data if the closed approach distance and the azimuth velocity are properly con-sidered[16]or,more precisely,via the appropriate sensor-target distance eval-uation[17].LANARI et al.:SPOTLIGHT SAR DATA FOCUSING 1995inessential for the following analysis.A more detailed discus-sion on this matter can be found in [10].Let us now consider a pointtargetandreceived onboard is represented,after the heterodyne process [that removes the fast varyingterm(4a)FTis the azimuth (spatial)frequency.Equation (7)shows that theazimuth spectrum is centered on thefrequencyand that the signal bandwidthiswith respect to the strip mode case,for whichit wouldbe.In this case,weget(8)Since the maximum valueof,i.e.,that relative to the nearest range,should be con-sidered.This is assumed hereafter,although we underline that in the spotlight case,due to the typically limited range extension of the illuminated spot,the range dependenceof (9)in order to avoid any azimuth spectral folding effect [18].On the other hand,this sampling frequency increase would lead to large data rates and could generate severe range ambiguity problems [15].Accordingly,the valuesofand are the raw data and thefocused image azimuth pixel dimensions,respectively,the latter chosen in agreement with the Nyquist limit available from (8).In this case,we get from(9)(10)1996IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING,VOL.39,NO.9,SEPTEMBER2001 Equation(10)clarifies that the azimuth number of pixel inthe raw data set and in the focused image,i.e.,,respectively,are comparable,and they become closeras.III.B ULK A ZIMUTH R AW D ATA C OMPRESSIONLet us now investigate a possible solution to the azimuth spec-tral folding effect discussed in the previous section.The pro-posed approach is based on a linear and space-invariant azimuthfiltering operation that performs a bulk azimuth data compres-sion and achieves an output pixel spacing,satisfying the Nyquistlimit shown in(8).This operation is efficiently implementedwithout any large zero padding step,via a deramping-basedtechnique[1],[2],[14].A.Continuous Domain AnalysisKey point of the presented technique is the azimuth convolu-tion between the raw data and the quadratic phasesignal(11)whereinand are the nearest and the farthest ranges of theilluminated spot,respectively,and represents the range valueof a generic point target located within the spot.No specific as-sumption has yet been made on the factor in(11),althoughwe anticipate that the impact of any particular selection for thisterm is later discussed in detail.We 
also underline that the rawsignal range component,accounting for the range independent(RI)and range dependent(RD)RCM effects,is neglected inthe azimuth convolution operation presented in this section.Allthese components are restored and accounted for during the sub-sequent and highly precise second processing step.The azimuth convolution between thesignal in(5),forthe case of an isolated target,and thefunction in(11)gives(12)wherein thesymbolof the target and on the valueof.The second line in(12)shows that this azimuth convo-lution is essentially a deramping based(SPECAN)processing,involving a chirp multiplication of the azimuth signal,a subse-quent FT and a residual phase cancellation.Indeed,but for theabove mentioned approximations,this processing step allows usto achieve an azimuth compression which is full only for thosetargets locatedat.This point can be clarified by recon-sidering(12).Indeed,if weassume(13)wherein the imaged target is fully azimuth focused.Forany,by assuming the validity of theSPM method3weget,for which the resulting signal is centeredaround andextendsfor.However,because the range ex-tension of the spot area is typically very small,we canassumeand,even in this limiting case,a compression effect,althoughpartial with respect to that achieved in(13),is obtained.The obtained results apply to the case of an isolated target,however they can be easily extended to the case of an illumi-nated area.Accounting for the azimuth spotdimensionwith(16)3Generalization to those cases where SPM cannot be applied can be derivedas in[19].Here we are interested only in having a rough measure of the targetecho extension following the first processing step.LANARI et al.:SPOTLIGHT SAR DATA FOCUSING 1997with ,i.e.,with a pixel spacing satisfying the Nyquist limit of the spotlight signal,see (8)and (9).Accordingly,sim-ilarly to what is shown in the continuous analysis presented in the previous section,(12)becomesbeing the nearest integer operator.Note that,dueto(10),with (18)where thefactorrepresents the output azimuth data repli-cation.Accordingly,under the validity of the inequality in (18),not only the azimuth spectral folding effects are avoided,see (16),but also no data wrap around occurs in the azimuth direc-tion.We note that the validity of the aforementioned inequality in (18)is generally satisfied due to the presence of a slight az-imuth data oversampling carried out on the spotlight signal with respect to the Nyquist rate that we would have with the system operating in the stripmap mode.This point can be clarified by accounting for (15)in the inequality in (18).In this case,wegetgives a value ofaboutfor the right-hand side factor in (21).This leads tothe newinequality,which is satisfied for most real spotlight SAR systems.Of course in the (rare)case of an insufficient oversampling factor,a balancing choice would be represented by setting at the midrange swath,thus leading to a resolution degradation at the image near and far range edges.Anyway,we remark that this is generally not a very critical issue because,due to the antenna beam steering,those targets would be in any case characterized by a lower resolution [10].Based on (18),we can finally rewrite (17)asfollows:of the orderofis required and implemented via the substitu-tionin (11)[and equivalently in (17)]to compen-sate for this effect.Secondly,due to the appearance of a sig-nificant range walk effect [15]in the RCM,an additional edge degradation could appear even at midrange.Although the paper is focused on low squint angle 
acquisitions,we stress that well known procedures applied for mitigating the range walk effect in deramping-based focusing approaches could be considered [20].However,this is worth pursuing for future studies.4Notealso that,at variance with what is shown in (22),a more conventionalexpression of the DFT operation can be consideredimplying1with n =0P=2;...;P=201[18].Inthis case,a trivial manipulation of (22)is required.1998IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING,VOL.39,NO.9,SEPTEMBER2001 IV.R ESIDUAL D ATA F OCUSING VIA S TRIPMAPP ROCESSING T ECHNIQUESLet us concentrate on the result of the bulk azimuth com-pression step shown in(12).We underline that,following thisoperation,the folding effect influencing the azimuth spectrumis avoided and the space-variant characteristics of the systemtransfer function are maintained.Accordingly,it is possible tocarry out the residual focusing of the data via the use of effi-cient and precise techniques originally designed for stripmapSAR data focusing that are implemented in the frequency do-main.To clarify this point,we refer to the expression of the receivedsignal over a distributed scene by resorting the linearity of thesystemrepresents the reflectivity function of the illumi-nated scene including the fast varying phase term in(24).Thereceived data spectrum can be written asfollows:is the range(spatial)frequency,andcan be found via the application ofthe SPM,leading to the following expression[15]:(27)where,and.In this case,wehaveFT(29)wherein the phasefactor accounts for thebulk compression step.By finally substituting(27)in(29),weget(30)withinsteadof[15].Moreover,the folding effects influencing the azimuth raw signalspectrum(see Section II)have been avoided due to the alreadycarried out bulk azimuth compression step leading to the newpixelspacing shown in(18).We also note that the space-variant characteristics of the system transfer function are pre-served by the bulk compression.This nonlinear mapping of therange frequencies,i.e.,bysimply accounting for the system transfer functioncomponentinsteadof and by considering the new azimuth sam-pling frequency.In particular,we have considered the stripmap processing ap-proach described in[12],and the overall processing block dia-gram is shown in Fig.2.In this case,the first step carries out thebulk azimuth compression,while the residual focusing is im-plemented as follows:the filtering operation,carried out in thetwo-dimensional(2-D)frequency domain via the filterfunctionallows us to fully focus the midspot area by accountingLANARI et al.:SPOTLIGHT SAR DATA FOCUSING1999Fig. 
2. Two-step focusing procedure block diagram. Note that i = −P/2, …, P/2 − 1 and l = −M/2, …, M/2 − 1; moreover, the frequency sampling steps are 1/(PΔx) and 1/(MΔr).

Fig. 4. C-band VV-polarized image of the Sidney zone obtained by applying the focusing approach of Fig. 5 to the raw data set acquired in 1994 by the SIR-C system operating in an experimental spotlight mode. The expected azimuth resolution is about 1 m, but the image is represented with an azimuth pixel spacing of about 6.5 m to avoid the geometric distortions caused by different dimensions of the pixel in range and azimuth directions. The extension of the area is of about 1.7 km × 4.5 km.

… range compression of … (zero padded to increase its extension from …) and the signal in (31), becoming …

Fig. 5. Simulated image obtained after the bulk azimuth compression (range compression has also been implemented). The range corresponding to ~r is highlighted.

Clearly, although not explicitly mentioned, the range pixel spacing resulting from the range focusing operation of (33) must also be considered for the implementation of the residual focusing step. As final remarks, we underline that all the operations involved in (33) are assumed, in our case, to be carried out after the A/D conversion in the receiver. Moreover, the computational efficiency of the procedure in Fig. 3 can be further improved by combining the range compression operation and the compensation of the scaling factor.

TABLE II. Results of the Analysis Carried Out on the Imaged Point Targets of Fig. 6

… 1994 by the C-band sensor of the SIR-C system operated in the experimental spotlight mode (see Table I for a description of the system parameters). In this case, because of the … ratio, … km, and the near, mid, and far range distances are … km, … km, and … km, respectively. The selected value of … is … km. Accordingly, based on the analysis of Section III, we can evaluate the minimum and maximum range distance, for example … and …, which ensure the absence of degradation at the edges of the image, by using (20). They are given by … km and … km (36), thus guaranteeing the possibility of focusing the overall scene. The image obtained by applying the procedure of Fig. 2 is presented in Fig. 4. It clearly shows the focusing capability of the proposed algorithm. However, the absence of known reference targets in the scene does not allow any significant quantitative measurement of the quality of the obtained image. Accordingly, in order to assess the performance of the proposed approach, we have generated a simulated data set representing the signal backscattered by a sequence of three point targets aligned in the range direction and located over an absorbing background. The system parameters are again those of Table I. To better clarify the effect of the bulk azimuth compression step, we show the result obtained by applying this operation (see Fig. 5). As expected, the achieved azimuth compression(5) effect is more relevant for the target located at a range closer to …. We additionally remark that the azimuth extension of the bulk compressed data is of 2048 samples, and it has been increased(6) with respect to the raw data by about 20% (the azimuth raw data length was of 1700 samples), but no additional data dimension increase is required in the residual focusing step. Note also in
(5) In order to improve the readability of the
result,a range compression step has been also carried out.6This allowed the use of high efficient FFT codes with a power of two data lengths[18].Fig.7.High resolution simulated image obtained by applying the focusing procedure of Fig.3.The contour plots of the three imaged point targets are also shown.Fig.5the effect of the uncompensated range cell migration ef-fect.The fully focused image is finally shown in Fig.6.The results of the measurements carried out on the imaged point targets of Fig.6are summarized in Table II wherein the theoretical az-imuth resolution values are those pertinent to the selected point reflector.The inspection of Table II clarifies the high perfor-mance of the presented technique for what concerns the ampli-tude characteristics of the target responses.The phase accuracy has been also assessed;it is about 1LANARI et al.:SPOTLIGHT SAR DATA FOCUSING2003 TABLE IIIR ESULTS OF THE A NALYSIS C ARRIED O UT ON THE I MAGED P OINTT ARGETS OF F IG.712004IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING,VOL.39,NO.9,SEPTEMBER 2001different foreign research institutes such as the Institute of Space and Astronau-tical Science (ISAS),Tokyo,Japan,the German Aerospace Research Establish-ment (DLR),Oberpfafenhoffen,Germany,and the Jet Propulsion Laboratory (JPL),Pasadena,CA,where he received a NASA recognition for the innovative development of a ScanSAR processor for the SRTM mission.His main research activities are in the SAR data processing field as well as in IFSAR techniques.On this topic,he has authored 30international journal papers and,more recently,a book Synthetic Aperture Radar Processing (Boca Raton,FL:CRC).He also holds two patents on SAR raw data processing techniques.nari has been Chairman at several international conferences and was invited to join the technical program committee for the IGARSS Conference in 2000and2001.Manlio Tesauro received the Laurea degree (summa cum laude)in electronic engineering and the Ph.D.degree in electronic engineering and computer sci-ence,both from the University of Napoli “Federico II,”Napoli,Italy,in 1992and 1998,respectively.In 1998and 1999,he was with the Istituto di Ricerca per l’Elettromagnetismo ed I Componenti Elettronici (IRECE),Napoli,National Research Council (CNR),with a grant from Telespazio.Since 2000,he has been a Research Scientist with the Dipartimento di Ingegneria dell’Innovazione,University of Lecce,Lecce,Italy.In February 2000,he was a member of the Italian Team in the ASI Ground Data Processing Chain during the Shuttle Radar Topography Mission (SRTM)at the Jet Propulsion Laboratory,Pasadena,CA.His main interests are in the field of statistical signal processing with emphasis on SAR and IFSARprocessing.Eugenio Sansosti (M’96)received the Laurea degree (summa cum laude)in electronic engineering from the University of Napoli “Federico II,”Napoli,Italy,in 1995.Since 1997,he has been with the Istituto di Ricerca per l’Elettromagnetismo e I Componenti Elettronici (IRECE),National Research Council (CNR),where he currently holds a Full Researcher position.He is also an Adjunct Professor of electrical Communica-tions at the University of Cassino,Cassino,Italy.He was a Guest Scientist with the Jet Propulsion Labora-tory,Pasadena,CA,from August 1997to February 1998,and again in February 2000in support of the NASA Shuttle Radar Topography Mission.In November and December 2000,he worked as an Image Processing Adviser at the Istituto Tecnologico de Aeronautica (ITA),Sao Josédos Campos SP,Brazil.His main research interests are in 
airborne and spaceborne synthetic aperture radar (SAR) data processing, SAR interferometry, and differential SAR interferometry.

Gianfranco Fornaro received the Laurea degree in electronic engineering from the University of Napoli "Federico II," Napoli, Italy, in 1992, and the Ph.D. degree from the University of Rome "La Sapienza," Rome, Italy, in 1997. He is currently a Full Researcher at the Istituto di Ricerca per l'Elettromagnetismo e i Componenti Elettronici (IRECE), Italian National Research Council (CNR), and Adjunct Professor of Communication, University of Cassino, Cassino, Italy. He has been a Visiting Scientist with the German Aerospace Establishment (DLR), Oberpfaffenhofen, Germany, and the Politecnico di Milano, Milano, Italy, and has been a Lecturer with the Istituto Tecnologico de Aeronautica (ITA), São José dos Campos SP, Brazil. His main research interests are in the signal processing field with applications to synthetic aperture radar (SAR) data processing, SAR interferometry, and differential SAR interferometry. Dr. Fornaro was awarded the Mountbatten Premium Award by the Institution of Electrical Engineers (IEE) in 1997.
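The residual focusing step of the approach described above is implemented as filtering in the frequency domain; its basic building block is FFT-based matched filtering. As a generic illustration of that building block (not the paper's full system transfer function), the Python sketch below range-compresses a synthetic chirp by multiplying the data spectrum with the conjugate spectrum of the reference chirp. All parameter values are illustrative.

```python
import numpy as np

# Illustrative chirp parameters (not taken from the paper)
fs = 100e6            # range sampling frequency [Hz]
tau = 10e-6           # transmitted pulse duration [s]
kr = 4e12             # range chirp rate [Hz/s]
t = np.arange(0, tau, 1 / fs)
chirp = np.exp(1j * np.pi * kr * (t - tau / 2) ** 2)

# Received range line: the chirp delayed to sample 300, inside a longer window
n = 2048
received = np.zeros(n, dtype=complex)
received[300:300 + chirp.size] = chirp

# Frequency-domain matched filtering: multiply the data spectrum by the
# conjugate spectrum of the reference chirp (zero padded to the data length)
ref = np.zeros(n, dtype=complex)
ref[:chirp.size] = chirp
compressed = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(ref)))

print("compressed peak at sample", int(np.argmax(np.abs(compressed))))  # ~300
```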
Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity
LYU Xun1,2*, LI Yuanyuan1, OU Yangyang1, JIAO Ronghui1, WANG Jun1, YANG Yuze1 (1. College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; 2. Xinchang Research Institute of Zhejiang University of Technology, Zhejiang Shaoxing 312500, China)
Abstract: [Objective] To analyze the influence of different lapping pressures, lower lapping plate speeds, holder eccentricities, and fixed-abrasive grit sizes on microsphere accuracy, determine the optimal lapping parameters of the rotation-function first-order-discontinuity double-plane lapping method for machining GCr15 bearing steel balls, and improve the form accuracy and surface quality of the microspheres.
[Methods] First, a kinematic analysis of the microsphere under the rotation-function first-order-discontinuity double-plane lapping method was performed, and the sliding ratio was introduced to characterize the motion state of the microsphere in regions with different friction coefficients. A simulation model of the microsphere lapping trajectory under this lapping method was established, the lapping trajectory was simulated in MATLAB, and the influence of the sliding ratio on the envelope of the lapping trajectory was analyzed.
An experimental platform for the rotation-function first-order-discontinuity double-plane lapping of microspheres was built, single-factor experiments were used to analyze the influence of the main lapping parameters on microsphere accuracy, and the optimal parameter combination considering roundness and surface roughness was obtained.
[Results] The experimental results show that at a lapping pressure of 0.10 N, a lower plate speed of 20 r/min, a holder eccentricity of 90 mm, and a fixed-abrasive grit size of 3000 mesh, the microsphere roundness decreased from 1.14 μm before lapping to 0.25 μm, and the surface roughness decreased from 0.129 1 μm to 0.029 0 μm.
[Conclusions] Under the rotation-function first-order-discontinuity double-plane lapping method, the azimuth angle of the microsphere's rotation axis changes abruptly, so that the lapping trajectory fully covers the surface of the ball blank.
As the lapping pressure, lower plate speed, and holder eccentricity increase, the microsphere roundness and surface roughness first decrease and then increase.
As the lapping pressure and lower plate speed increase, the material removal rate increases continuously; as the holder eccentricity increases, the material removal rate decreases.
As the grit size of the fixed abrasives decreases, the roundness and surface roughness of the microsphere decrease, and the material removal rate decreases.
Keywords: rotation function first-order discontinuity; double-plane lapping; microsphere; kinematic analysis; lapping trajectory; lapping parameters
CLC number: TG356.28    Document code: A    Article ID: 1001-3660(2024)08-0133-12
DOI: 10.16490/ki.issn.1001-3660.2024.08.012
Received: 2023-07-28; Revised: 2023-09-26
Fund: National Natural Science Foundation of China (51975531)
Citation: LYU Xun, LI Yuanyuan, OU Yangyang, et al. Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity[J]. Surface Technology, 2024, 53(8): 133-144.
*Corresponding author

Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity
LYU Xun1,2*, LI Yuanyuan1, OU Yangyang1, JIAO Ronghui1, WANG Jun1, YANG Yuze1
(1. College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; 2. Xinchang Research Institute of Zhejiang University of Technology, Zhejiang Shaoxing 312500, China)
ABSTRACT: Microspheres are critical components of precision machinery such as miniature bearings and lead screws. Their surface quality, roundness, and batch consistency have a crucial impact on the quality and lifespan of mechanical parts. Due to their small size and light weight, it is difficult for existing ball processing methods to achieve high-precision machining of microspheres. Traditional concentric spherical lapping methods, with three sets of circular ring trajectories, result in poor lapping accuracy. To achieve efficient and high-precision processing of microspheres, the work aims to propose a method based on the first-order discontinuity of rotation for double-plane lapping of microspheres. Firstly, the principle of the first-order discontinuity of rotation for double-plane lapping of microspheres was analyzed, and it was found that the movement of the microsphere changed when it was in different regions of the upper variable friction plate, resulting in a sudden change in the microsphere's rotational axis azimuth and expanding the lapping trajectory. Next, the movement of the microsphere in the first-order discontinuity of rotation for double-plane lapping method was analyzed, and the sliding ratio was introduced to measure the motion state of the microsphere in different friction coefficient regions. It was observed that the sliding ratio of the microsphere varied in different friction coefficient regions. As a result, when the microsphere passed through the transition area between the large and small friction regions of the upper variable friction plate, the sliding ratio changed, causing a sudden change in the microsphere's rotational axis azimuth and expanding the lapping trajectory. The lapping trajectory under different sliding ratios was simulated by MATLAB, and the results showed that with the increase in simulation time, the first-order discontinuity of rotation for double-plane lapping method could achieve full coverage of the microsphere's lapping trajectory, making it more suitable for precision machining of microspheres. Finally, based on the above research, an experimental platform for the first-order discontinuity of rotation for double-plane lapping of microspheres was constructed.
With 1 mm diameter bearing steel balls as the processing object, single-factor experiments were conducted to study the effects of lapping pressure, lower plate speed, eccentricity of the holding frame, and grit size of fixed abrasives on microsphere roundness, surface roughness, and material removal rate. The experimental results showed that under the first-order discontinuity of rotation for double-plane lapping, the microsphere's rotational axis azimuth underwent a sudden change, leading to full coverage of the lapping trajectory on the microsphere's surface. Under the lapping pressure of 0.10 N, the lower plate speed of 20 r/min, the eccentricity of the holder of 90 mm, and the grit size of fixed abrasives of 3000 meshes, the roundness of the microsphere decreased from 1.14 μm before lapping to 0.25 μm, and the surface roughness decreased from 0.129 1 μm to 0.029 0 μm. As the lapping pressure and lower plate speed increased, the microsphere roundness and surface roughness were firstly improved and then deteriorated, while the material removal rate continuously increased. As the eccentricity of the holding frame increased, the roundness was firstly improved and then deteriorated, while the material removal rate decreased. As the grit size of fixed abrasives decreased, the microsphere's roundness and surface roughness were improved, and the material removal rate decreased. Through the experiments, the optimal parameter combination considering roundness and surface roughness is obtained: lapping pressure of 0.10 N/ball, lower plate speed of 20 r/min, eccentricity of the holder of 90 mm, and grit size of fixed abrasives of 3000 meshes.

KEY WORDS: rotation function first-order discontinuity; double-plane lapping; microsphere; kinematic analysis; lapping trajectory; lapping parameters

As mechanical products develop toward lighter weight and miniaturization, the demand for miniature bearings from micro motors, instruments and meters, and many other industrial products has increased substantially.
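The mechanism summarized in the abstract above, namely full trajectory coverage produced by abrupt changes of the rotation-axis azimuth, can be illustrated with a small simulation. The following Python sketch is only a stand-in for the MATLAB trajectory model mentioned above: it spins a ball about an axis whose azimuth is perturbed at fixed intervals (randomized here purely for illustration, whereas in the real process the jumps are caused by the variable-friction regions of the upper plate) and records where the fixed lab-frame contact direction lands on the ball surface. All parameter values are invented.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Illustrative values only: spin rate, time step, and how often the
# rotation-axis azimuth jumps (mimicking the variable-friction regions)
spin = 2 * np.pi          # ball spin rate [rad/s]
dt = 0.002                # time step [s]
jump_period = 0.5         # seconds between azimuth jumps
tilt = np.deg2rad(40)     # fixed polar angle of the rotation axis

R = np.eye(3)                               # ball orientation (ball -> lab)
contact_lab = np.array([0.0, 0.0, -1.0])    # contact direction, fixed in the lab
azimuth = 0.0
trace = []                                  # contact points in the ball frame

rng = np.random.default_rng(0)
for step in range(20000):
    if step % int(jump_period / dt) == 0:
        azimuth = rng.uniform(0, 2 * np.pi)      # abrupt azimuth change
    axis = np.array([np.sin(tilt) * np.cos(azimuth),
                     np.sin(tilt) * np.sin(azimuth),
                     np.cos(tilt)])
    R = rot(axis, spin * dt) @ R                 # advance the ball orientation
    trace.append(R.T @ contact_lab)              # contact point in ball frame

trace = np.array(trace)
# Rough coverage measure: fraction of 20 x 40 latitude-longitude bins visited
lat = np.degrees(np.arcsin(np.clip(trace[:, 2], -1, 1))) // 9
lon = np.degrees(np.arctan2(trace[:, 1], trace[:, 0])) // 9
print("bins visited:", len(set(zip(lat.tolist(), lon.tolist()))))
```

With the azimuth jumps enabled, the visited-bin count grows toward full coverage; with a fixed azimuth, the trace stays on a single circle, which is the qualitative difference the abstract describes.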
Community Structure in Time-Dependent, Multiscale, and Multiplex Networks
Peter J. Mucha,1,2* Thomas Richardson,1,3 Kevin Macon,1 Mason A. Porter,4,5 Jukka-Pekka Onnela6,7
Science 328, 876 (2010); DOI: 10.1126/science.1184819

Network science is an interdisciplinary endeavor, with methods and applications drawn from across the natural, social, and information sciences. A prominent problem in network science is the algorithmic detection of tightly connected groups of nodes known as communities. We developed a generalized framework of network quality functions that allowed us to study the community structure of arbitrary multislice networks, which are combinations of individual networks coupled through links that connect each node in one network slice to itself in other slices. This framework allows studies of community structure in a general setting encompassing networks that evolve over time, have multiple types of links (multiplexity), and have multiple scales.

The study of graphs, or networks, has a long tradition in fields such as sociology and mathematics, and it is now ubiquitous in academic and everyday settings. An important tool in network analysis is the detection of mesoscopic structures known as communities (or cohesive groups), which are defined intuitively as groups of nodes that are more tightly connected to each other than they are to the rest of the network (1-3). One way to quantify communities is by a quality function that compares the number of intracommunity edges to what one would expect at random. Given the network adjacency matrix A, where the element Aij details a direct connection between nodes i and j, one can construct a quality function Q (4, 5) for the partitioning of nodes into communities as
Q = ∑ij (Aij − Pij) δ(gi, gj), where δ(gi, gj) = 1 if the community assignments gi and gj of nodes i and j are the same and 0 otherwise, and Pij is the expected weight of the edge between i and j under a specified null model. The choice of null model is a crucial consideration in studying network community structure (2). After selecting a null model appropriate to the network and application at hand, one can use a variety of computational heuristics to assign nodes to communities to optimize the quality Q (2, 3). However, such null models have not been available for time-dependent networks; analyses have instead depended on ad hoc methods to piece together the structures obtained at different times (6-9) or have abandoned quality functions in favor of such alternatives as the Minimum Description Length principle (10). Although tensor decompositions (11) have been used to cluster network data with different types of connections, no quality-function method has been developed for such multiplex networks.

We developed a methodology to remove these limits, generalizing the determination of community structure via quality functions to multislice networks that are defined by coupling multiple adjacency matrices (Fig. 1). The connections encoded by the network slices are flexible; they can represent variations across time, variations across different types of connections, or even community detection of the same network at different scales. However, the usual procedure for establishing a quality function as a direct count of the intracommunity edge weight minus that expected at random fails to provide any contribution from these interslice couplings. Because they are specified by common identifications of nodes across slices, interslice couplings are either present or absent by definition, so when they do fall inside communities, their contribution in the count of intracommunity edges exactly cancels that expected at random. In contrast, by formulating a null model in terms of stability of communities under Laplacian dynamics, we have derived a principled generalization of community detection to multislice networks,

1Carolina Center for Interdisciplinary Applied Mathematics, Department of Mathematics, University of North Carolina, Chapel Hill, NC 27599, USA. 2Institute for Advanced Materials, Nanoscience and Technology, University of North Carolina, Chapel Hill, NC 27599, USA. 3Operations Research, North Carolina State University, Raleigh, NC 27695, USA. 4Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, Oxford OX1 3LB, UK. 5CABDyN Complexity Centre, University of Oxford, Oxford OX1 1HP, UK. 6Department of Health Care Policy, Harvard Medical School, Boston, MA 02115, USA. 7Harvard Kennedy School, Harvard University, Cambridge, MA 02138, USA.
*To whom correspondence should be addressed. E-mail: mucha@

Fig. 1. Schematic of a multislice network. Four slices s = {1, 2, 3, 4} represented by adjacencies Aijs encode intraslice connections (solid lines). Interslice connections (dashed lines) are encoded by Cjrs, specifying the coupling of node j to itself between slices r and s. For clarity, interslice couplings are shown for only two nodes and depict two different types of couplings: (i) coupling between neighboring slices, appropriate for ordered slices; and (ii) all-to-all interslice coupling, appropriate for categorical slices.

Fig.
2.Multislice community detection of the Zachary Karate Club network (22)across multiple resolutions.Colors depict community assignments of the 34nodes (renumbered vertically to group similarly assigned nodes)in each of the 16slices (with resolution parameters g s ={0.25,0.5,…,4}),for w =0(top),w =0.1(middle),and w =1(bottom).Dashed lines bound the communities obtained using the default resolution (g =1).14MAY 2010VOL 328SCIENCE876CORRECTED 16 JULY 2010; SEE LAST PAGEo n D e c e m b e r 2, 2010w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o mwith a single parameter controlling the interslice correspondence of communities.Important to our method is the equivalence between the modularity quality function (12)[with a resolution parameter (5)]and stability of com-munities under Laplacian dynamics (13),which we have generalized to recover the null models for bipartite,directed,and signed networks (14).First,we obtained the resolution-parameter generaliza-tion of Barber ’s null model for bipartite networks (15)by requiring the independent joint probability contribution to stability in (13)to be conditional on the type of connection necessary to step between two nodes.Second,we recovered the standard null model for directed networks (16,17)(again with a resolution parameter)by generaliz-ing the Laplacian dynamics to include motion along different kinds of connections —in this case,both with and against the direction of a link.By this generalization,we similarly recovered a null model for signed networks (18).Third,we interpreted the stability under Laplacian dynamics flexibly to permit different spreading weights on the different types of links,giving multiple reso-lution parameters to recover a general null model for signed networks (19).We applied these generalizations to derive null models for multislice networks that extend the existing quality-function methodology,including an additional parameter w to control the coupling between slices.Representing each network slice s by adjacencies A ijs between nodes i and j ,with interslice couplings C jrs that connect node j in slice r to itself in slice s (Fig.1),we have restricted our attention to unipartite,undirected network slices (A ijs =A jis )and couplings (C jrs =C jsr ),but we can incorporate additional structure in the slices and couplings in the same manner as demonstrated for single-slice null models.Notating the strengths of each node individually in each slice by k js =∑i A ijs and across slices by c js =∑r C jsr ,we define the multislice strength by k js =k js +c js .The continuous-time Laplacian dynamics given byp˙is ¼∑jr ðA ijs d sr þd ij C jsr Þp jrk jr−p isð1Þrespects the intraslice nature of A ijs and the interslice couplings of C jsr .Using the steady-state probability distribution p ∗jr ¼k jr =2m ,where 2m =∑jr k jr ,we obtained the multislice null model in terms of the probability r is |jr of sampling node i in slice s conditional on whether the multislice struc-ture allows one to step from (j ,r )to (i ,s ),accounting for intra-and interslice steps separately asr is j jr p ∗jr ¼k is2m s k jr k jr d sr þC jsr c jr c jr k jr d ijk jr 2m ð2Þwhere m s =∑j k js .The second term in parentheses,which describes the conditional probability of motion between two slices,leverages the definition of the C jsr coupling.That is,the conditional probability of stepping from (j ,r )to (i ,s )along an interslice coupling is nonzero if and only if i =j ,and it is proportional to the probability C jsr /k jr of selecting the precise 
interslice link that connects to slice s .Subtracting this conditional joint probability from the linear (in time)approximation of the exponential describing the Laplacian dynamics,we obtained a multislice generalization of modularity (14):Q multislice ¼12m ∑ijsrhA ijs −g sk is k js 2m s d sr þd ij C jsr id ðg is ,g jr Þð3Þwhere we have used reweighting of the conditionalprobabilities,which allows a different resolution g s in each slice.We have absorbed the resolution pa-rameter for the interslice couplings into the mag-nitude of the elements of C jsr ,which,for simplicity,we presume to take binary values {0,w }indicating the absence (0)or presence (w )of interslice links.YearS e n a t o rCTMARI DENYIL IN MIWI IA KSMONDVA AL ARFL GALA MSSC KYOK WVCOID MTNMWYORAK HI Congress #ABFig.3.Multislice community detection of U.S.Senate roll call vote similarities (23)with w =0.5coupling of 110slices (i.e.,the number of 2-year Congresses from 1789to 2008)across time.(A )Colors indicate assignments to nine communities of the 1884unique senators (sorted vertically and connected across Congresses by dashed lines)in each Congress in which they appear.The dark blue and red communities correspond closely to the modern Democratic and Republican parties,respectively.Horizontal bars indicate the historical period of each community,with accompanying text enumerating nominal party affiliations of the single-slice nodes (each representing a senator in a Congress):PA,pro-administration;AA,anti-administration;F,Federalist;DR,Democratic-Republican;W,Whig;AJ,anti-Jackson;A,Adams;J,Jackson;D,Democratic;R,Republican.Vertical gray bars indicate Congresses in which three communities appeared simultaneously.(B )The same assignments according to state affiliations.SCIENCEVOL 32814MAY 2010877REPORTSo n D e c e m b e r 2, 2010w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o mCommunity detection in multislice networks can then proceed using many of the same com-putational heuristics that are currently available for single-slice networks [although,as with the stan-dard definition of modularity,one must be cautious about the resolution of communities (20)and the likelihood of complex quality landscapes that necessitate caution in interpreting results on real networks (21)].We studied examples that have multiple resolutions [Zachary Karate Club (22)],vary over time [voting similarities in the U.S.Senate (23)],or are multiplex [the “Tastes,Ties,and Time ”cohort of university students (24)].We provide additional details for each example in (14).We performed simultaneous community de-tection across multiple resolutions (scales)in the well-known Zachary Karate Club network,which encodes the friendships between 34members of a 1970s university karate club (22).Keeping the same unweighted adjacency matrix across slices (A ijs =A ij for all s ),the resolution associated with each slice is dictated by a specified sequence of g s parameters,which we chose to be the 16values g s ={0.25,0.5,0.75,…,4}.In Fig.2,we depict the community assignments obtained for cou-pling strengths w ={0,0.1,1}between each neighboring pair of the 16ordered slices.These results simultaneously probe all scales,includ-ing the partition of the Karate Club into four com-munities at the default resolution of modularity (3,25).Additionally,we identified nodes that have an especially strong tendency to break off from larger communities (e.g.,nodes 24to 29in Fig.2).We also considered roll call voting in the U.S.Senate across time,from the 1st Congress to 
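A direct way to read Eq. 3 is as a quantity that can be evaluated for any candidate multislice partition. The Python sketch below does exactly that, using the corrected grouping of the δsr term noted in the erratum reproduced later in this document; it is not the authors' code, and the function and variable names are illustrative.

```python
import numpy as np

def multislice_modularity(A_slices, C, gamma, labels):
    """Evaluate Q_multislice for a given partition (a sketch of Eq. 3, with the
    delta_sr grouping corrected as in the published erratum).

    A_slices : list of (N, N) symmetric intraslice adjacency matrices A_{ijs}
    C        : (N, T, T) interslice couplings, C[j, s, r] = C_{jsr}
    gamma    : length-T array of per-slice resolution parameters gamma_s
    labels   : (N, T) community assignments g_{is}
    """
    N, T = labels.shape
    k = np.stack([A.sum(axis=1) for A in A_slices], axis=1)  # intraslice strengths k_{is}
    c = C.sum(axis=2)                                        # interslice strengths c_{is}
    two_mu = (k + c).sum()                                   # 2*mu normalization
    Q = 0.0
    for s in range(T):
        two_ms = k[:, s].sum()                               # 2*m_s
        same = labels[:, s][:, None] == labels[:, s][None, :]   # delta(g_is, g_js)
        null = gamma[s] * np.outer(k[:, s], k[:, s]) / two_ms
        Q += ((A_slices[s] - null) * same).sum()             # intraslice term (delta_sr)
        for r in range(T):
            if r == s:
                continue
            Q += (C[:, s, r] * (labels[:, s] == labels[:, r])).sum()  # interslice term
    return Q / two_mu

# Tiny illustrative example: two slices of a 4-node network with ordered coupling omega
A1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]], float)
A2 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
omega = 0.5
C = np.zeros((4, 2, 2)); C[:, 0, 1] = C[:, 1, 0] = omega
labels = np.array([[0, 0], [0, 0], [0, 1], [1, 1]])
print(multislice_modularity([A1, A2], C, np.ones(2), labels))
```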
the 110th,covering the years 1789to 2008and includ-ing 1884distinct senator IDs (26).We defined weighted connections between each pair of sen-ators by a similarity between their voting,specified independently for each 2-year Congress (23).We studied the multislice collection of these 110networks,with each individual senator coupled to himself or herself when appearing in consecutive Congresses.Multislice community detection un-covered interesting details about the continuity of individual and group voting trends over time that are not captured by the union of the 110in-dependent partitions of the separate Congresses.Figure 3depicts a partition into nine communities that we obtained using coupling w =0.5.The Congresses in which three communities appeared simultaneously are each historically noteworthy:The 4th and 5th Congresses were the first with political parties;the 10th and 11th Congresses occurred during the political drama of former Vice President Aaron Burr ’s indictment for treason;the 14th and 15th Congresses witnessed the beginning of changing group structures in the Democratic-Republican party amidst the dying Federalist party (23);the 31st Congress included the Compromise of 1850;the 37th Congress occurred during the beginning of the American Civil War;the 73rd and 74th Congresses followed the landslide 1932election (during the Great Depression);and the 85th to 88th Congresses brought the major American civil rights acts,including the congressio-nal fights over the Civil Rights Acts of 1957,1960,and 1964.Finally,we applied multislice community detection to a multiplex network of 1640college students at a northeastern American university (24),including symmetrized connections from the first wave of this data representing (i)Facebook friendships,(ii)picture friendships,(iii)roommates,and (iv)student housing-group preferences.Be-cause the different connection types are categorical,the natural interslice couplings connect an individ-ual in a slice to himself or herself in each of the other three network slices.This coupling between categorical slices thus differs from that above,which connected only neighboring (ordered)slices.Table 1indicates the numbers of communities and the percentages of individuals assigned to one,two,three,or four communities across the four types of connections for different values of w ,as a first investigation of the relative redundancy across the connection types.Our multislice framework makes it possible to study community structure in a much broader class of networks than was previously possible.Instead of detecting communities in one static network at a time,our formulation generalizing the Laplacian dynamics approach of (13)permits the simulta-neous quality-function study of community struc-ture across multiple times,multiple resolution parameter values,and multiple types of links.Weused this method to demonstrate insights in real-world networks that would have been difficult or impossible to obtain without the simultaneous consideration of multiple network slices.Although our examples included only one kind of variation at a time,our framework applies equally well to networks that have multiple such features (e.g.,time-dependent multiplex networks).We expect multislice community detection to become a powerful tool for studying such systems.References and Notes1.M.Girvan,M.E.J.Newman,Proc.Natl.Acad.Sci.U.S.A.99,7821(2002).2.M.A.Porter,J.-P.Onnela,P.J.Mucha,Not.Am.Math.Soc.56,1082(2009).3.S.Fortunato,Phys.Rep.486,75(2010).4.M.E.J.Newman,Phys.Rev.E 
74,036104(2006).5.J.Reichardt,S.Bornholdt,Phys.Rev.E 74,016110(2006).6.J.Hopcroft,O.Khan,B.Kulis,B.Selman,Proc.Natl.Acad.Sci.U.S.A.101(suppl.1),5249(2004).7.T.Y.Berger-Wolf,J.Saia,in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2006),p.523(10.1145/1150402.1150462).8.G.Palla,A.-L.Barabási,T.Vicsek,Nature 446,664(2007).9.D.J.Fenn et al .,Chaos 19,033119(2009).10.J.Sun,C.Faloutsos,S.Papadimitriou,P.S.Yu,inProceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2007),p.687(10.1145/1281192.1281266).11.T.M.Selee,T.G.Kolda,W.P.Kegelmeyer,J.D.Griffin,CSRI Summer Proceedings 2007,Technical Report SAND2007-7977,Sandia National Laboratories,Albuquerque,NM and Livermore,CA ,M.L.Parks,S.S.Collis,Eds.(2007),p.87(/CSRI/Proceedings).12.M.E.J.Newman,M.Girvan,Phys.Rev.E 69,026113(2004)mbiotte,J.C.Delvenne,M.Barahona,http://arxiv.org/abs/0812.1770(2008).14.See supporting material on Science Online.15.M.J.Barber,Phys.Rev.E 76,066102(2007).16.A.Arenas,J.Duch,A.Fernandez,S.Gomez,N.J.Phys.9,176(2007).17.E.A.Leicht,M.E.J.Newman,Phys.Rev.Lett.100,118703(2008).18.S.Gómez,P.Jensen,A.Arenas,Phys.Rev.E 80,016114(2009).19.V.A.Traag,J.Bruggeman,Phys.Rev.E 80,036115(2009).20.S.Fortunato,M.Barthélemy ,Proc.Natl.Acad.Sci.U.S.A.104,36(2007).21.B.H.Good,Y.-A.de Montjoye,A.Clauset,Phys.Rev.E81,046106(2010).22.W.W.Zachary,J.Anthropol.Res.33,452(1977).23.A.S.Waugh,L.Pei,J.H.Fowler,P.J.Mucha,M.A.Porter,/abs/0907.3509(2009).24.K.Lewis,J.Kaufman,M.Gonzalez,A.Wimmer,N.Christakis,works 30,330(2008).25.T.Richardson,P.J.Mucha,M.A.Porter,Phys.Rev.E 80,036111(2009).26.K.T.Poole,Voteview ()(2008).27.We thank N.A.Christakis,L.Meneades,and K.Lewis foraccess to and helping with the “Tastes,Ties,and Time ”data;S.Reid and A.L.Traud for help developing code;and A.Clauset,J.-C.Delvenne,S.Fortunato,M.Gould,and V.Traag for discussions.Congressional roll call data are from (26).Supported by NSF grant DMS-0645369(P.J.M.),James S.McDonnellFoundation grant 220020177(M.A.P.),and the Fulbright Program (J.-P.O.).Supporting Online Material/cgi/content/full/328/5980/876/DC1SOM Text References17November 2009;accepted 22March 201010.1126/science.1184819Table munities in the first wave of the multiplex “Tastes,Ties,and Time ”network (24),using the default resolution (g =1)in each of the four slices of data (Facebook friendships,picture friendships,roommates,and housing groups)under various couplings w across slices,which changed the number of communities and percentages of individuals assigned on a per-slice basis to one,two,three,or four communities.w Number of communitiesCommunities per individual (%)1234010360001000.112214.040.537.38.20.26619.949.125.3 5.70.34926.248.321.6 3.90.43631.847.018.4 2.80.53139.342.416.8 1.511610014MAY 2010VOL 328SCIENCE878REPORTSo n D e c e m b e r 2, 2010w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o m1 sCiEnCE erratum post date 16 july 2010 ErratumReports: “Community structure in time-dependent, multiscale, and multiplex networks” by P. J. Mucha et al . (14 May, p. 876). Equation 3 contained a typographical error that was not caught during the editing process: The δsr term should have been outside of the paren-theses within the square brackets. The correct equation, which also appears in the support-ing online material as equation 9, is as follows:See the revised supporting online material (/cgi/content/full/sci;328/5980/876/DC2), which also includes a correction to equation 11. 
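Written out in the Report's notation, the corrected equation described by this erratum (with the δsr factor outside the parentheses within the square brackets) reads:

```latex
Q_{\mathrm{multislice}}
  = \frac{1}{2\mu} \sum_{ijsr}
    \left[ \left( A_{ijs} - \gamma_{s}\,\frac{k_{is} k_{js}}{2 m_{s}} \right) \delta_{sr}
           + \delta_{ij}\, C_{jsr} \right] \delta(g_{is}, g_{jr})
```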
The computations supporting the examples described in the Report were all performed with the correct for-mula for Q multislice . The authors thank Giuseppe Mangioni for pointing out the error.Post date 16 July 2010o n D e c e m b e r 2, 2010w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o mCOMMENTARY16 JULY 2010 VOL 329 SCIENCE 276LETTERSedited by Jennifer SillsLETTERS I BOOKS I POLICY FORUM I EDUCATION FORUM I PERSPECTIVESC R ED I T : ME H M E T K A R A T A Y /W I K I M E D I A C O M M O N SBrazilian Law:Full Speed in Reverse?IS IT POSSIBLE TO COMBINE MODERN TROPI-cal agriculture with environmental conserva-tion? Brazilian agriculture offers encourag-ing examples that achieve high production together with adequate environmental pro-tection (1, 2). However, these effective prac-tices may soon lose ground to the conven-tional custom of resource overexploitation and environmental degradation.A revision to the Forest Act, the main Bra-zilian environmental legislation on private land, has just been submitted to Congress, and there is a strong chance that it will be approved. The proposed revision raises seri-ous concerns in the Brazilian scientifi c com-munity, which was largely ignored during its elaboration. The new rules will benefi t sectors that depend on expanding frontiers by clear-cutting forests and savannas and will reduce mandatory restoration of native vegetation illegally cleared since 1965. If approved, CO 2 emissions may increase substantially, instead of being reduced as was recently pledged in Copenhagen. Simple species-area relation-ship analyses also pro j ect the extinction of more than 100,000 species, a massive loss that will invalidate any commitment to biodi-versity conservation. Proponents of the new law, with well-known ties to specifi c agribusi-ness groups, claim an alleged shortage of land for agricultural expansion, and accuse the current legislation of being overprotective ofFunding Should Come to Those Who WaitWE APPLAUD THE PERSPECTIVE BY T. CLUTTON-BROCK ANDB. C. Sheldon (“The Seven Ages of Pan ,” 5 March, p. 1207) on the value of long-term behavior and ecologi-cal research. We pick up where they left off: funding. Long-term research has cumulative value that far exceeds its annual rate of return. Sadly, quick empiri-cal studies trump long-term research in the reward sys-tem for academic promotion in ecology and behavior. If long-term research is to fl ourish, we must build a reward system for studies characterized by deferred gratifi ca-tion. A sea change in these values must precede attemptsto address funding.To secure the future of long-term fi eld projects, we must act on three fronts:(i) We must devise funding mechanisms for “legacy” projects deemed too valuable to falter. Whereas the National Science Foundation’s (NSF’s) National Ecological Observatory Network and Long-Term Ecological Research programs support long-term collaborative, site-based research, there is a compelling need to support the diversity of long-term investiga-tor-initiated programs. As implemented, NSF’s Long-Term Research in Environmental Biol-ogy program is a fi rst step, but has insuffi cient support to maintain many valuable projects.(ii) We must develop mechanisms to fund the establishment of new programs with long-term potential. Such potential may not be initially appreciated, but with vision and support, new systems studied over the long run will produce novel insights.(iii) Support for ecological research must be increased. 
We do not advocate robbing Peter (short-term research) to pay Paul (long-term research). However, we maintain that Paul has already been robbed and some balance needs to be restored.Most of us involved in long-term research have a story to share, in which time-lim-ited funding shortages took our programs to the edge of a precipice. Investigators that suc-ceed and become known for long-term research, almost by defi nition, have found a way to adapt to funding shortfalls, usually at great personal sacrifi ce. A recent case at the Los Amigos Biological Station in the Peruvian Amazon speaks to the value of funding continuity (1). During a 4-year period of programmatic support, the scientifi c productivity of the station surged, producing many valuable fi ndings and building substantial scientifi c capacity for the region. Since the funding evaporated, the station has failed to return to its former glory, at great loss to our ability to make scientifi c inroads into understanding the ecology of this area, characterized by unrivaled biodiversity.Of course, long-term programs must remain intellectually vibrant and methodologically rigorous if they are to be supported. In the end, the onus is on ecologists to convince ourselves, society, and funding agencies that long-term research has unique and irreplaceable value.RONALD R. SWAISGOOD,1* JOHN W. TERBORGH,2 DANIEL T. BLUMSTEIN 31Applied Animal Ecology, San Diego Zoo’s Institute for Conservation Research, San Diego, CA 92027, USA. 2Center for Tropi-cal Conservation, Duke University, Durham, NC 27705, USA. 3Department of Ecology and Evolutionary Biology, University of California, Los Angeles, CA 90095, USA.*To whom correspondence should be addressed. E-mail: rswaisgood@Reference1. N. C. A. Pitman, Trends Ecol. Evol . 25, 381 (2010).Long-term studies. Studies spanning decades have yielded insights into red deer and other species. Published by AAASo n D e c e m b e r 2, 2010w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o m SCIENCE VOL 329 16 JULY 2010277the environment in response to foreign inter-ests fronted by green nongovernmental orga-nizations. However, recent studies (3) show that, without further conversion of natural vegetation, crop production can be increased by converting suitable pastures to agriculture and intensifying livestock production on the remaining pasture. Brazil has a high poten-tial for achieving sustainable development and thereby conserving its unique biological heritage. Although opposed by the Ministry of the Environment and most scientists, the combination of traditional politicians, oppor-tunistic economic groups, and powerful land-owners may be hard to resist. The situation is delicate and serious. Under the new ForestAct, Brazil risks suffering its worst environ-mental setback in half a century, with criti-cal and irreversible consequences beyond itsborders.JEAN PAUL METZGER,1* THOMAS M. LEWINSOHN,2CARLOS A. JOLY,3 LUCIANO M. VERDADE,4 LUIZ ANTONIO MARTINELLI,5 RICARDO R. RODRIGUES 61Department of Ecology, Institute of Bioscience, University of São Paulo, 05508-900, São Paulo, SP, Brazil. 2Depart-ment of Animal Biology, State University of Campinas, Campinas, SP, Brazil. 3Department of Plant Biology, Biol-ogy Institute, State University of Campinas, Campinas, SP, Brazil. 4Center of Nuclear Energy in Agriculture, University of São Paulo, Piracicaba, Brazil. 5Program on Food Secu-rity and the Environment, Stanford University, Stanford, CA94305, USA. 
6Department of Biological Sciences, “Luiz deQueiroz” College of Agriculture, University of São Paulo, Piracicaba, Brazil.*To whom correspondence should be addressed. E-mail: jpm@p.brReferences1. D. Nepstad et al., Science 326, 1350 (2009).2. C. R. Fonseca et al., Biol. Conserv. 142, 1209 (2009).3. G. Sparovek et al., Considerações sobre o Código Florestalbrasileiro (“Luiz de Queiroz” College of Agriculture, Uni-versity of São Paulo, Piracicaba, Brazil, 2010); p.br/lepac/codigo_fl orestal/Sparovek_etal_2010.pdf.Sponsors of Traumatic Brain Injury Project I’M DELIGHTED THAT SCIENCE TOOK THE TIMEto highlight the ongoing efforts of the Common Data Elements Project for research in psychological health and traumatic brain injury (“New guidelines a im to improve studies of traumatic brain injury,” G. Miller,News of the Week, 16 April, p. 297). The level of interagency collaboration that made the project possible is exactly the type of lea dership tha t America ns should expectfrom the federal government.As noted in the story, the project is co-sponsored by four federal agencies—threeof whom were mentioned. The other agency is the National Institute on Disability andRehabilitation Research (NIDRR) withinthe Department of Education. NIDRR hasleadership, resources, and subject matter experts without which this project would nothave been nearly as successful. Together, all four agencies will continue to develop rec-ommendations and support ongoing efforts to improve and refine the Common Data Elements.GEOFFREY MANLEYDepartment of Neurosurgery, Brain and Spinal Injury Cen-ter, University of California, San Francisco, CA 94110, USA. E-mail: manleyg@Warming, Photoperiods, and Tree PhenologyC. KÖRNER ANDD. BASLER (“PHENOLOGY under global warming,” Perspectives, 19 March, p. 1461) suggest that because of photoperiodic constraints, observed effects of temperature on spring life-cycle events cannot be extrapolated to future tempera-ture conditions.However, no study has demonstrated that photoperiod is more dominant than temper-ature when predicting leaf senescence (1), leafing, or flowering, even in beech—one of the species most sensitive to photoperiod (2, 3). On the contrary, the literature [e.g., (4, 5)] supports the idea that spring phenol-ogy is highly dependent on temperature dur-ing both the endodormancy phase (the period during which the plant remains dormant dueTECHNICAL COMMENT ABSTRACTS Comment on “Observational and Model Evidence for Positive Low-Level Cloud Feedback”Anthony J. Broccoli and Stephen A. KleinClement et al . (Reports, 24 July 2009, p. 460) provided observational evidence for systematic relationships between variations in marine low cloudiness and other climatic variables and found that most current-generation climate models were defi cient in reproducing such relationships. Our analysis of one of these models (GFDL CM2.1), using more com-plete model output, indicates better agreement with observations, suggesting that more detailed analysis of climate model simulations is necessary.Full text at /cgi/content/full/329/5989/277-aResponse to Comment on “Observational and Model Evidence for Positive Low-Level Cloud Feedback”Amy C. Clement, Robert Burgman, Joel R. NorrisBroccoli and Klein argue for additional diagnostics to better assess the simulation of cloud feedbacks in climate models. We agree, and here provide additional analysis of two climate models that reveals where model defi ciencies in cloud simulation in the Northeast Pacifi c may occur. 
Cloud diagnostics from the forthcoming Climate Model Intercomparison Project 5 should make such additional analyses possible for a large number of climate models.
Full text at /cgi/content/full/329/5989/277-b

CORRECTIONS AND CLARIFICATIONS

News of the Week: "Invisibility cloaks for visible light must remain tiny, theorists predict" by A. Cho (25 June, p. 1621). The size limit on a cloak for infrared or visible light was misstated. It is a few hundred micrometers, not a few micrometers.

News Focus: "Putting light's light touch to work as optics meets mechanics" by A. Cho (14 May, p. 812). In the third paragraph, "pitchfork" should have been "tuning fork."

Reports: "Community structure in time-dependent, multiscale, and multiplex networks" by P. J. Mucha et al. (14 May, p. 876). Equation 3 contained a typographical error that was not caught during the editing process: the δ_sr term should have been outside the parentheses within the square brackets. The correct equation appears as equation 9 in the revised supporting online material (/cgi/content/full/sci;328/5980/876/DC2), which also includes a correction to equation 11. The computations supporting the examples described in the Report were all performed with the correct formula for Q_multislice. The authors thank Giuseppe Mangioni for pointing out the error.
An Improved Heuristic Algorithm for UAV Path Planning in 3D Environment
An Improved Heuristic Algorithm for UAV Path Planning in 3D Environment
Zhang Qi1, Zhenhai Shao1, Yeo Swee Ping2, Lim Meng Hiot3, Yew Kong Leong4
1School of Communication Engineering, University of Electronic Science and Technology of China
2Microwave Research Lab, National University of Singapore
3Intelligent Systems Center, Nanyang Technological University
4Singapore Technology
e-mail: beijixing2006@, zhenhai.shao@, eleyeosp@.sg, emhlim@.sg, leongyk@

Abstract: The path planning problem is one of the core elements of UAV technology. This paper presents an improved heuristic algorithm for the 3D path planning problem. The path planning model is first built from a digital map; a virtual terrain is then introduced to eliminate a significant amount of search space, reducing the problem from three dimensions to two. An improved heuristic A* algorithm is then applied to generate the UAV trajectory. The algorithm features variable search steps and a weighting factor for each cost component. Simulation results validate the effectiveness of the algorithm.

Keywords: unmanned aerial vehicle (UAV); path planning; virtual terrain; heuristic A* algorithm

I. INTRODUCTION
Path planning is required for an unmanned aerial vehicle (UAV) to meet the objectives specified for any military or commercial application. The general purpose of path planning is to find the optimal path from a start point to a destination point subject to the different operational constraints (trajectory length, radar exposure, collision avoidance, fuel consumption, etc.) imposed on the UAV for a particular mission; if, for example, the criterion is simply to minimize flight time, the optimization process reduces to a minimal-cost problem. Over the past decades several path planning algorithms have been investigated. Bortoff [1] presented a two-step path planning algorithm based on Voronoi partitioning: a graph search method is first applied to generate a rough-cut path, which is thereafter smoothed in accordance with his proposed virtual-force model. Anderson et al. [2] also employed Voronoi approaches to generate a family of feasible trajectories. Pellazar [3], Nikolos et al. [4], and Lim et al. [5] opted for genetic algorithms to navigate the UAV. The calculus-of-variations technique has been adopted in [6]-[7] to find an optimal path with minimum radar illumination. In this paper, an improved heuristic algorithm is presented for UAV path planning. The path planning environment is described in Section II, the algorithm is presented in Section III, and Section IV presents experimental results that validate the effectiveness of the proposed algorithm.

II. PATH PLANNING MODEL
Several factors must be taken into account in the path planning problem: terrain information, threat information, and UAV kinetics. These factors form flight constraints which must be handled in the planning procedure. Many studies use mathematical functions to simulate the terrain environment [4]. This method is quick and simple, but compared with the real terrain the UAV flies across, it lacks realism and generality. In this study, terrain information is constructed from DEM (digital elevation model) data released by the USGS (U.S. Geological Survey) as the true terrain representation. Threat information is also considered in path planning. In modern warfare, almost all anti-air weapons need radar to track and lock onto an air target. Here the main threat is radar illumination.
Radar threat density can be represented by the radar equation, because the intrinsic radar parameters are determined before path planning. The threat density can be regarded as inversely proportional to R^4, where R is the distance from the UAV's current location to a particular radar site. For simplicity, the UAV is modeled as a mass point traveling at a constant velocity, and its minimum turning radius is treated as a fixed parameter.

III. PATH PLANNING APPROACH

A. Virtual terrain for three-dimensional path planning
Unlike ground vehicle route planning, UAV path planning is a 3D problem in a real scenario. In 3D space, not only are terrain and threat information taken into account, but UAV specifications such as maximum heading angle, vertical angle, and turning radius are also incorporated for comprehensive consideration. The straightforward method for UAV path planning is to partition 3D space into a 3D grid and then apply a search algorithm to generate the path. However, for any algorithm the computational time depends mainly on the size of the search space. Therefore, for efficiency, a novel concept of constructing a 2D search space from the original 3D search space is proposed, called the virtual terrain. The virtual terrain is constructed above the real terrain according to the required flight safety clearance height, as shown in Figure 1: A'B'C'D' is the real terrain, ABCD is the virtual terrain, and H is the clearance height between the two surfaces. The virtual terrain enables path planning on a 2D surface instead of a 3D grid and can reduce the search space by an order of magnitude.

Figure 1. Virtual terrain above the real terrain

B. Path planning algorithm
The A* algorithm [8]-[9] is a well-known graph search procedure utilizing a heuristic function to guide its search. Given a consistent, admissible heuristic, A* search is guaranteed to yield an optimal path [8]. At the core of the algorithm is a list containing all of the current states. At each iterative step, the algorithm expands and evaluates the adjacent states of all current states and decides whether any of them should be added to the list (if not in the list) or updated (if already in the list) based on the cost function:

f(n) = g(n) + h(n)    (1)

where f(n) is the total cost at the current vertex, g(n) denotes the actual cost from the start point to the current point n, and h(n) refers to the pre-estimated cost from the current point n to the destination point. For applications that entail searching on a map, the heuristic function h(n) is assigned the Euclidean distance. UAV path planning is a multi-criteria search problem. The actual cost g(n) in this study is composed of three items: distance cost D(n), climb cost C(n), and threat cost T(n), so g(n) can be written as

g(n) = D(n) + C(n) + T(n)    (2)

Usually, the three components of g(n) are not treated equally during a UAV task; one or two are preferred over the others. We can achieve this by introducing weighting factors w_i in (2):

g(n) = w1 D(n) + w2 C(n) + w3 T(n)    (3)

where each w_i is a weighting factor and the weights satisfy sum_{i=1}^{m} w_i = 1 (here m = 3). For example, if the threat cost T(n) is of greater concern in a particular task, its weight should be increased accordingly.
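As an illustration of equations (1)-(3), the following R sketch evaluates the weighted cost of a candidate waypoint. It is not taken from the paper; all function and variable names (and the default weights) are hypothetical, and the threat term simply follows the 1/R^4 relationship described in Section II.

# Illustrative sketch of the weighted A* cost in equations (1)-(3).
# 'current', 'candidate' and 'goal' are waypoints c(x, y, z) on the virtual terrain;
# 'radars' is a matrix with one radar site per row. All names are hypothetical.
euclid <- function(a, b) sqrt(sum((a - b)^2))

threat_cost <- function(p, radars) {
  # threat density assumed inversely proportional to R^4 (Section II)
  sum(apply(radars, 1, function(site) 1 / euclid(p, site)^4))
}

node_cost <- function(current, candidate, goal, g_current, radars,
                      w = c(0.4, 0.2, 0.4)) {        # w1 + w2 + w3 = 1
  d_cost <- euclid(current, candidate)               # distance cost D(n)
  c_cost <- max(candidate[3] - current[3], 0)        # climb cost C(n): altitude gained
  t_cost <- threat_cost(candidate, radars)           # threat cost T(n)
  g <- g_current + w[1] * d_cost + w[2] * c_cost + w[3] * t_cost   # equation (3)
  h <- euclid(candidate, goal)                       # Euclidean heuristic h(n)
  g + h                                              # f(n) = g(n) + h(n), equation (1)
}

Raising w[3] relative to the other weights biases the planner toward waypoints far from radar sites, at the expense of longer or steeper trajectories, which mirrors the paper's remark about increasing the weight of the component of greater concern.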
C. Improvement of the path planning strategy
The virtual terrain of Part A enhanced computational efficiency by transforming the 3D path planning space into a 2D search surface. A further improvement can be achieved by applying a newly developed search strategy. The path planner expands and evaluates the next waypoint on the virtual terrain using this strategy, illustrated in Figures 2 and 3. The strategy employs variable search steps by defining a search window, which represents the information acquired by the UAV's on-board sensors. It allows different search steps to match different threat-cost distributions. After the search window is set, UAV performance limits are imposed within the window based on the virtual terrain; these limits include turning radius, heading angle, and vertical angle. In Figure 3, the point P(x, y, z) is the current state and the arrow represents the current velocity vector. The gray points show the available states the UAV can reach in the next step under the limits imposed by UAV performance.

Figure 2. Searching window
Figure 3. Available searching states at P(x, y, z)

IV. SIMULATION
The simulation is implemented based on Sections II and III. Terrain data are read from a USGS 1-degree DEM, which has a 3 arc-second interval along longitude and latitude. Five radar threats are represented in the simulation environment according to the radar equation. The clearance height H is set to 200 to define the virtual terrain, and the UAV's maximum heading angle and vertical angle are set to 20°.
SPSS Vocabulary
SPSS词汇(中英文对照)Absolute deviation, 绝对离差Absolute number, 绝对数Absolute residuals, 绝对残差Acceleration array, 加速度立体阵Acceleration in an arbitrary direction, 任意方向上的加速度Acceleration normal, 法向加速度Acceleration space dimension, 加速度空间的维数Acceleration tangential, 切向加速度Acceleration vector, 加速度向量Acceptable hypothesis, 可接受假设Accumulation, 累积Accuracy, 准确度Actual frequency, 实际频数Adaptive estimator, 自适合估计量Addition, 相加Addition theorem, 加法定理Additivity, 可加性Adjusted rate, 调整率Adjusted value, 校正值Admissible error, 容许误差Aggregation, 聚集性Alternative hypothesis, 备择假设Among groups, 组间Amounts, 总量Analysis of correlation, 相关分析Analysis of covariance, 协方差分析Analysis of regression, 回归分析Analysis of time series, 时间序列分析Analysis of variance, 方差分析Angular transformation, 角转换ANOV A (analysis of variance), 方差分析ANOV A Models, 方差分析模型Arcing, 弧/弧旋Arcsine transformation, 反正弦变换Area under the curve, 曲线面积AREG , 评估从一个时间点到下一个时间点回归相关时的误差ARIMA, 季节和非季节性单变量模型的极大似然估计Arithmetic grid paper, 算术格纸Arithmetic mean, 算术平均数Arrhenius relation, 艾恩尼斯关系Assessing fit, 拟合的评估Associative laws, 结合律Asymmetric distribution, 非对称分布Asymptotic bias, 渐近偏倚Asymptotic efficiency, 渐近效率Asymptotic variance, 渐近方差Attributable risk, 归因危险度Attribute data, 属性资料Attribution, 属性Autocorrelation, 自相关Autocorrelation of residuals, 残差的自相关Average, 平均数Average confidence interval length, 平均置信区间长度Average growth rate, 平均增长率Bar chart, 条形图Bar graph, 条形图Base period, 基期Bayes' theorem , Bayes定理Bell-shaped curve, 钟形曲线Bernoulli distribution, 伯努力分布Best-trim estimator, 最好切尾估计量Bias, 偏性Binary logistic regression, 二元逻辑斯蒂回归Binomial distribution, 二项分布Bisquare, 双平方Bivariate Correlate, 二变量相关Bivariate normal distribution, 双变量正态分布Bivariate normal population, 双变量正态总体Biweight interval, 双权区间Biweight M-estimator, 双权M估计量Block, 区组/配伍组BMDP(Biomedical computer programs), BMDP统计软件包Boxplots, 箱线图/箱尾图Breakdown bound, 崩溃界/崩溃点Canonical correlation, 典型相关Caption, 纵标目Case-control study, 病例对照研究Categorical variable, 分类变量Catenary, 悬链线Cauchy distribution, 柯西分布Cause-and-effect relationship, 因果关系Cell, 单元Censoring, 终检Center of symmetry, 对称中心Centering and scaling, 中心化和定标Central tendency, 集中趋势Central value, 中心值CHAID -χ2 Automatic Interaction Detector, 卡方自动交互检测Chance, 机遇Chance error, 随机误差Chance variable, 随机变量Characteristic equation, 特征方程Characteristic root, 特征根Characteristic vector, 特征向量Chebshev criterion of fit, 拟合的切比雪夫准则Chernoff faces, 切尔诺夫脸谱图Chi-square test, 卡方检验/χ2检验Choleskey decomposition, 乔洛斯基分解Circle chart, 圆图Class interval, 组距Class mid-value, 组中值Class upper limit, 组上限Classified variable, 分类变量Cluster analysis, 聚类分析Cluster sampling, 整群抽样Code, 代码Coded data, 编码数据Coding, 编码Coefficient of contingency, 列联系数Coefficient of determination, 决定系数Coefficient of multiple correlation, 多重相关系数Coefficient of partial correlation, 偏相关系数Coefficient of production-moment correlation, 积差相关系数Coefficient of rank correlation, 等级相关系数Coefficient of regression, 回归系数Coefficient of skewness, 偏度系数Coefficient of variation, 变异系数Cohort study, 队列研究Column, 列Column effect, 列效应Column factor, 列因素Combination pool, 合并Combinative table, 组合表Common factor, 共性因子Common regression coefficient, 公共回归系数Common value, 共同值Common variance, 公共方差Common variation, 公共变异Communality variance, 共性方差Comparability, 可比性Comparison of bathes, 批比较Comparison value, 比较值Compartment model, 分部模型Compassion, 伸缩Complement of an event, 补事件Complete association, 完全正相关Complete dissociation, 完全不相关Complete statistics, 完备统计量Completely randomized design, 完全随机化设计Composite event, 联合事件Composite events, 复合事件Concavity, 凹性Conditional expectation, 条件期望Conditional likelihood, 条件似然Conditional probability, 条件概率Conditionally linear, 
依条件线性Confidence interval, 置信区间Confidence limit, 置信限Confidence lower limit, 置信下限Confidence upper limit, 置信上限Confirmatory Factor Analysis , 验证性因子分析Confirmatory research, 证实性实验研究Confounding factor, 混杂因素Conjoint, 联合分析Consistency, 相合性Consistency check, 一致性检验Consistent asymptotically normal estimate, 相合渐近正态估计Consistent estimate, 相合估计Constrained nonlinear regression, 受约束非线性回归Constraint, 约束Contaminated distribution, 污染分布Contaminated Gausssian, 污染高斯分布Contaminated normal distribution, 污染正态分布Contamination, 污染Contamination model, 污染模型Contingency table, 列联表Contour, 边界线Contribution rate, 贡献率Control, 对照Controlled experiments, 对照实验Conventional depth, 常规深度Convolution, 卷积Corrected factor, 校正因子Corrected mean, 校正均值Correction coefficient, 校正系数Correctness, 准确性Correlation coefficient, 相关系数Correlation index, 相关指数Correspondence, 对应Counting, 计数Counts, 计数/频数Covariance, 协方差Covariant, 共变Cox Regression, Cox回归Criteria for fitting, 拟合准则Criteria of least squares, 最小二乘准则Critical ratio, 临界比Critical region, 拒绝域Critical value, 临界值Cross-over design, 交叉设计Cross-section analysis, 横断面分析Cross-section survey, 横断面调查Crosstabs , 交叉表Cross-tabulation table, 复合表Cube root, 立方根Cumulative distribution function, 分布函数Cumulative probability, 累计概率Curvature, 曲率/弯曲Curvature, 曲率Curve fit , 曲线拟和Curve fitting, 曲线拟合Curvilinear regression, 曲线回归Curvilinear relation, 曲线关系Cut-and-try method, 尝试法Cycle, 周期Cyclist, 周期性D test, D检验Data acquisition, 资料收集Data bank, 数据库Data capacity, 数据容量Data deficiencies, 数据缺乏Data handling, 数据处理Data manipulation, 数据处理Data processing, 数据处理Data reduction, 数据缩减Data set, 数据集Data sources, 数据来源Data transformation, 数据变换Data validity, 数据有效性Data-in, 数据输入Data-out, 数据输出Dead time, 停滞期Degree of freedom, 自由度Degree of precision, 精密度Degree of reliability, 可靠性程度Degression, 递减Density function, 密度函数Density of data points, 数据点的密度Dependent variable, 应变量/依变量/因变量Dependent variable, 因变量Depth, 深度Derivative matrix, 导数矩阵Derivative-free methods, 无导数方法Design, 设计Determinacy, 确定性Determinant, 行列式Determinant, 决定因素Deviation, 离差Deviation from average, 离均差Diagnostic plot, 诊断图Dichotomous variable, 二分变量Differential equation, 微分方程Direct standardization, 直接标准化法Discrete variable, 离散型变量DISCRIMINANT, 判断Discriminant analysis, 判别分析Discriminant coefficient, 判别系数Discriminant function, 判别值Dispersion, 散布/分散度Disproportional, 不成比例的Disproportionate sub-class numbers, 不成比例次级组含量Distribution free, 分布无关性/免分布Distribution shape, 分布形状Distribution-free method, 任意分布法Distributive laws, 分配律Disturbance, 随机扰动项Dose response curve, 剂量反应曲线Double blind method, 双盲法Double blind trial, 双盲试验Double exponential distribution, 双指数分布Double logarithmic, 双对数Downward rank, 降秩Dual-space plot, 对偶空间图DUD, 无导数方法Duncan's new multiple range method, 新复极差法/Duncan新法Effect, 实验效应Eigenvalue, 特征值Eigenvector, 特征向量Ellipse, 椭圆Empirical distribution, 经验分布Empirical probability, 经验概率单位Enumeration data, 计数资料Equal sun-class number, 相等次级组含量Equally likely, 等可能Equivariance, 同变性Error, 误差/错误Error of estimate, 估计误差Error type I, 第一类错误Error type II, 第二类错误Estimand, 被估量Estimated error mean squares, 估计误差均方Estimated error sum of squares, 估计误差平方和Euclidean distance, 欧式距离Event, 事件Event, 事件Exceptional data point, 异常数据点Expectation plane, 期望平面Expectation surface, 期望曲面Expected values, 期望值Experiment, 实验Experimental sampling, 试验抽样Experimental unit, 试验单位Explanatory variable, 说明变量Exploratory data analysis, 探索性数据分析Explore Summarize, 探索-摘要Exponential curve, 指数曲线Exponential growth, 指数式增长EXSMOOTH, 指数平滑方法Extended fit, 扩充拟合Extra parameter, 附加参数Extrapolation, 外推法Extreme observation, 末端观测值Extremes, 极端值/极值F distribution, F分布F test, F检验Factor, 因素/因子Factor 
analysis, 因子分析Factor Analysis, 因子分析Factor score, 因子得分Factorial, 阶乘Factorial design, 析因试验设计False negative, 假阴性False negative error, 假阴性错误Family of distributions, 分布族Family of estimators, 估计量族Fanning, 扇面Fatality rate, 病死率Field investigation, 现场调查Field survey, 现场调查Finite population, 有限总体Finite-sample, 有限样本First derivative, 一阶导数First principal component, 第一主成分First quartile, 第一四分位数Fisher information, 费雪信息量Fitted value, 拟合值Fitting a curve, 曲线拟合Fixed base, 定基Fluctuation, 随机起伏Forecast, 预测Four fold table, 四格表Fourth, 四分点Fraction blow, 左侧比率Fractional error, 相对误差Frequency, 频率Frequency polygon, 频数多边图Frontier point, 界限点Function relationship, 泛函关系Gamma distribution, 伽玛分布Gauss increment, 高斯增量Gaussian distribution, 高斯分布/正态分布Gauss-Newton increment, 高斯-牛顿增量General census, 全面普查GENLOG (Generalized liner models), 广义线性模型Geometric mean, 几何平均数Gini's mean difference, 基尼均差GLM (General liner models), 一般线性模型Goodness of fit, 拟和优度/配合度Gradient of determinant, 行列式的梯度Graeco-Latin square, 希腊拉丁方Grand mean, 总均值Gross errors, 重大错误Gross-error sensitivity, 大错敏感度Group averages, 分组平均Grouped data, 分组资料Guessed mean, 假定平均数Half-life, 半衰期Hampel M-estimators, 汉佩尔M估计量Happenstance, 偶然事件Harmonic mean, 调和均数Hazard function, 风险均数Hazard rate, 风险率Heading, 标目Heavy-tailed distribution, 重尾分布Hessian array, 海森立体阵Heterogeneity, 不同质Heterogeneity of variance, 方差不齐Hierarchical classification, 组内分组Hierarchical clustering method, 系统聚类法High-leverage point, 高杠杆率点HILOGLINEAR, 多维列联表的层次对数线性模型Hinge, 折叶点Histogram, 直方图Historical cohort study, 历史性队列研究Holes, 空洞HOMALS, 多重响应分析Homogeneity of variance, 方差齐性Homogeneity test, 齐性检验Huber M-estimators, 休伯M估计量Hyperbola, 双曲线Hypothesis testing, 假设检验Hypothetical universe, 假设总体Impossible event, 不可能事件Independence, 独立性Independent variable, 自变量Index, 指标/指数Indirect standardization, 间接标准化法Individual, 个体Inference band, 推断带Infinite population, 无限总体Infinitely great, 无穷大Infinitely small, 无穷小Influence curve, 影响曲线Information capacity, 信息容量Initial condition, 初始条件Initial estimate, 初始估计值Initial level, 最初水平Interaction, 交互作用Interaction terms, 交互作用项Intercept, 截距Interpolation, 内插法Interquartile range, 四分位距Interval estimation, 区间估计Intervals of equal probability, 等概率区间Intrinsic curvature, 固有曲率Invariance, 不变性Inverse matrix, 逆矩阵Inverse probability, 逆概率Inverse sine transformation, 反正弦变换Iteration, 迭代Jacobian determinant, 雅可比行列式Joint distribution function, 分布函数Joint probability, 联合概率Joint probability distribution, 联合概率分布K means method, 逐步聚类法Kaplan-Meier, 评估事件的时间长度Kaplan-Merier chart, Kaplan-Merier图Kendall's rank correlation, Kendall等级相关Kinetic, 动力学Kolmogorov-Smirnove test, 柯尔莫哥洛夫-斯米尔诺夫检验Kruskal and Wallis test, Kruskal及Wallis检验/多样本的秩和检验/H检验Kurtosis, 峰度Lack of fit, 失拟Ladder of powers, 幂阶梯Lag, 滞后Large sample, 大样本Large sample test, 大样本检验Latin square, 拉丁方Latin square design, 拉丁方设计Leakage, 泄漏Least favorable configuration, 最不利构形Least favorable distribution, 最不利分布Least significant difference, 最小显著差法Least square method, 最小二乘法Least-absolute-residuals estimates, 最小绝对残差估计Least-absolute-residuals fit, 最小绝对残差拟合Least-absolute-residuals line, 最小绝对残差线Legend, 图例L-estimator, L估计量L-estimator of location, 位置L估计量L-estimator of scale, 尺度L估计量Level, 水平Life expectance, 预期期望寿命Life table, 寿命表Life table method, 生命表法Light-tailed distribution, 轻尾分布Likelihood function, 似然函数Likelihood ratio, 似然比line graph, 线图Linear correlation, 直线相关Linear equation, 线性方程Linear programming, 线性规划Linear regression, 直线回归Linear Regression, 线性回归Linear trend, 线性趋势Loading, 载荷Location and scale equivariance, 位置尺度同变性Location equivariance, 位置同变性Location invariance, 位置不变性Location scale family, 位置尺度族Log rank test, 
时序检验Logarithmic curve, 对数曲线Logarithmic normal distribution, 对数正态分布Logarithmic scale, 对数尺度Logarithmic transformation, 对数变换Logic check, 逻辑检查Logistic distribution, 逻辑斯特分布Logit transformation, Logit转换LOGLINEAR, 多维列联表通用模型Lognormal distribution, 对数正态分布Lost function, 损失函数Low correlation, 低度相关Lower limit, 下限Lowest-attained variance, 最小可达方差LSD, 最小显著差法的简称Lurking variable, 潜在变量Main effect, 主效应Major heading, 主辞标目Marginal density function, 边缘密度函数Marginal probability, 边缘概率Marginal probability distribution, 边缘概率分布Matched data, 配对资料Matched distribution, 匹配过分布Matching of distribution, 分布的匹配Matching of transformation, 变换的匹配Mathematical expectation, 数学期望Mathematical model, 数学模型Maximum L-estimator, 极大极小L 估计量Maximum likelihood method, 最大似然法Mean, 均数Mean squares between groups, 组间均方Mean squares within group, 组内均方Means (Compare means), 均值-均值比较Median, 中位数Median effective dose, 半数效量Median lethal dose, 半数致死量Median polish, 中位数平滑Median test, 中位数检验Minimal sufficient statistic, 最小充分统计量Minimum distance estimation, 最小距离估计Minimum effective dose, 最小有效量Minimum lethal dose, 最小致死量Minimum variance estimator, 最小方差估计量MINITAB, 统计软件包Minor heading, 宾词标目Missing data, 缺失值Model specification, 模型的确定Modeling Statistics , 模型统计Models for outliers, 离群值模型Modifying the model, 模型的修正Modulus of continuity, 连续性模Morbidity, 发病率Most favorable configuration, 最有利构形Multidimensional Scaling (ASCAL), 多维尺度/多维标度Multinomial Logistic Regression , 多项逻辑斯蒂回归Multiple comparison, 多重比较Multiple correlation , 复相关Multiple covariance, 多元协方差Multiple linear regression, 多元线性回归Multiple response , 多重选项Multiple solutions, 多解Multiplication theorem, 乘法定理Multiresponse, 多元响应Multi-stage sampling, 多阶段抽样Multivariate T distribution, 多元T分布Mutual exclusive, 互不相容Mutual independence, 互相独立Natural boundary, 自然边界Natural dead, 自然死亡Natural zero, 自然零Negative correlation, 负相关Negative linear correlation, 负线性相关Negatively skewed, 负偏Newman-Keuls method, q检验NK method, q检验No statistical significance, 无统计意义Nominal variable, 名义变量Nonconstancy of variability, 变异的非定常性Nonlinear regression, 非线性相关Nonparametric statistics, 非参数统计Nonparametric test, 非参数检验Nonparametric tests, 非参数检验Normal deviate, 正态离差Normal distribution, 正态分布Normal equation, 正规方程组Normal ranges, 正常范围Normal value, 正常值Nuisance parameter, 多余参数/讨厌参数Null hypothesis, 无效假设Numerical variable, 数值变量Objective function, 目标函数Observation unit, 观察单位Observed value, 观察值One sided test, 单侧检验One-way analysis of variance, 单因素方差分析Oneway ANOV A , 单因素方差分析Open sequential trial, 开放型序贯设计Optrim, 优切尾Optrim efficiency, 优切尾效率Order statistics, 顺序统计量Ordered categories, 有序分类Ordinal logistic regression , 序数逻辑斯蒂回归Ordinal variable, 有序变量Orthogonal basis, 正交基Orthogonal design, 正交试验设计Orthogonality conditions, 正交条件ORTHOPLAN, 正交设计Outlier cutoffs, 离群值截断点Outliers, 极端值OVERALS , 多组变量的非线性正规相关Overshoot, 迭代过度Paired design, 配对设计Paired sample, 配对样本Pairwise slopes, 成对斜率Parabola, 抛物线Parallel tests, 平行试验Parameter, 参数Parametric statistics, 参数统计Parametric test, 参数检验Partial correlation, 偏相关Partial regression, 偏回归Partial sorting, 偏排序Partials residuals, 偏残差Pattern, 模式Pearson curves, 皮尔逊曲线Peeling, 退层Percent bar graph, 百分条形图Percentage, 百分比Percentile, 百分位数Percentile curves, 百分位曲线Periodicity, 周期性Permutation, 排列P-estimator, P估计量Pie graph, 饼图Pitman estimator, 皮特曼估计量Pivot, 枢轴量Planar, 平坦Planar assumption, 平面的假设PLANCARDS, 生成试验的计划卡Point estimation, 点估计Poisson distribution, 泊松分布Polishing, 平滑Polled standard deviation, 合并标准差Polled variance, 合并方差Polygon, 多边图Polynomial, 多项式Polynomial curve, 多项式曲线Population, 总体Population attributable risk, 人群归因危险度Positive correlation, 正相关Positively skewed, 正偏Posterior distribution, 
后验分布Power of a test, 检验效能Precision, 精密度Predicted value, 预测值Preliminary analysis, 预备性分析Principal component analysis, 主成分分析Prior distribution, 先验分布Prior probability, 先验概率Probabilistic model, 概率模型probability, 概率Probability density, 概率密度Product moment, 乘积矩/协方差Profile trace, 截面迹图Proportion, 比/构成比Proportion allocation in stratified random sampling, 按比例分层随机抽样Proportionate, 成比例Proportionate sub-class numbers, 成比例次级组含量Prospective study, 前瞻性调查Proximities, 亲近性Pseudo F test, 近似F检验Pseudo model, 近似模型Pseudosigma, 伪标准差Purposive sampling, 有目的抽样QR decomposition, QR分解Quadratic approximation, 二次近似Qualitative classification, 属性分类Qualitative method, 定性方法Quantile-quantile plot, 分位数-分位数图/Q-Q图Quantitative analysis, 定量分析Quartile, 四分位数Quick Cluster, 快速聚类Radix sort, 基数排序Random allocation, 随机化分组Random blocks design, 随机区组设计Random event, 随机事件Randomization, 随机化Range, 极差/全距Rank correlation, 等级相关Rank sum test, 秩和检验Rank test, 秩检验Ranked data, 等级资料Rate, 比率Ratio, 比例Raw data, 原始资料Raw residual, 原始残差Rayleigh's test, 雷氏检验Rayleigh's Z, 雷氏Z值Reciprocal, 倒数Reciprocal transformation, 倒数变换Recording, 记录Redescending estimators, 回降估计量Reducing dimensions, 降维Re-expression, 重新表达Reference set, 标准组Region of acceptance, 接受域Regression coefficient, 回归系数Regression sum of square, 回归平方和Rejection point, 拒绝点Relative dispersion, 相对离散度Relative number, 相对数Reliability, 可靠性Reparametrization, 重新设置参数Replication, 重复Report Summaries, 报告摘要Residual sum of square, 剩余平方和Resistance, 耐抗性Resistant line, 耐抗线Resistant technique, 耐抗技术R-estimator of location, 位置R估计量R-estimator of scale, 尺度R估计量Retrospective study, 回顾性调查Ridge trace, 岭迹Ridit analysis, Ridit分析Rotation, 旋转Rounding, 舍入Row, 行Row effects, 行效应Row factor, 行因素RXC table, RXC表Sample, 样本Sample regression coefficient, 样本回归系数Sample size, 样本量Sample standard deviation, 样本标准差Sampling error, 抽样误差SAS(Statistical analysis system ), SAS统计软件包Scale, 尺度/量表Scatter diagram, 散点图Schematic plot, 示意图/简图Score test, 计分检验Screening, 筛检SEASON, 季节分析Second derivative, 二阶导数Second principal component, 第二主成分SEM (Structural equation modeling), 结构化方程模型Semi-logarithmic graph, 半对数图Semi-logarithmic paper, 半对数格纸Sensitivity curve, 敏感度曲线Sequential analysis, 贯序分析Sequential data set, 顺序数据集Sequential design, 贯序设计Sequential method, 贯序法Sequential test, 贯序检验法Serial tests, 系列试验Short-cut method, 简捷法Sigmoid curve, S形曲线Sign function, 正负号函数Sign test, 符号检验Signed rank, 符号秩Significance test, 显著性检验Significant figure, 有效数字Simple cluster sampling, 简单整群抽样Simple correlation, 简单相关Simple random sampling, 简单随机抽样Simple regression, 简单回归simple table, 简单表Sine estimator, 正弦估计量Single-valued estimate, 单值估计Singular matrix, 奇异矩阵Skewed distribution, 偏斜分布Skewness, 偏度Slash distribution, 斜线分布Slope, 斜率Smirnov test, 斯米尔诺夫检验Source of variation, 变异来源Spearman rank correlation, 斯皮尔曼等级相关Specific factor, 特殊因子Specific factor variance, 特殊因子方差Spectra , 频谱Spherical distribution, 球型正态分布Spread, 展布SPSS(Statistical package for the social science), SPSS统计软件包Spurious correlation, 假性相关Square root transformation, 平方根变换Stabilizing variance, 稳定方差Standard deviation, 标准差Standard error, 标准误Standard error of difference, 差别的标准误Standard error of estimate, 标准估计误差Standard error of rate, 率的标准误Standard normal distribution, 标准正态分布Standardization, 标准化Starting value, 起始值Statistic, 统计量Statistical control, 统计控制Statistical graph, 统计图Statistical inference, 统计推断Statistical table, 统计表Steepest descent, 最速下降法Stem and leaf display, 茎叶图Step factor, 步长因子Stepwise regression, 逐步回归Storage, 存Strata, 层(复数)Stratified sampling, 分层抽样Stratified sampling, 分层抽样Strength, 强度Stringency, 严密性Structural relationship, 结构关系Studentized residual, 
学生化残差/t化残差Sub-class numbers, 次级组含量Subdividing, 分割Sufficient statistic, 充分统计量Sum of products, 积和Sum of squares, 离差平方和Sum of squares about regression, 回归平方和Sum of squares between groups, 组间平方和Sum of squares of partial regression, 偏回归平方和Sure event, 必然事件Survey, 调查Survival, 生存分析Survival rate, 生存率Suspended root gram, 悬吊根图Symmetry, 对称Systematic error, 系统误差Systematic sampling, 系统抽样Tags, 标签Tail area, 尾部面积Tail length, 尾长Tail weight, 尾重Tangent line, 切线Target distribution, 目标分布Taylor series, 泰勒级数Tendency of dispersion, 离散趋势Testing of hypotheses, 假设检验Theoretical frequency, 理论频数Time series, 时间序列Tolerance interval, 容忍区间Tolerance lower limit, 容忍下限Tolerance upper limit, 容忍上限Torsion, 扰率Total sum of square, 总平方和Total variation, 总变异Transformation, 转换Treatment, 处理Trend, 趋势Trend of percentage, 百分比趋势Trial, 试验Trial and error method, 试错法Tuning constant, 细调常数Two sided test, 双向检验Two-stage least squares, 二阶最小平方Two-stage sampling, 二阶段抽样Two-tailed test, 双侧检验Two-way analysis of variance, 双因素方差分析Two-way table, 双向表Type I error, 一类错误/α错误Type II error, 二类错误/β错误UMVU, 方差一致最小无偏估计简称Unbiased estimate, 无偏估计Unconstrained nonlinear regression , 无约束非线性回归Unequal subclass number, 不等次级组含量Ungrouped data, 不分组资料Uniform coordinate, 均匀坐标Uniform distribution, 均匀分布Uniformly minimum variance unbiased estimate, 方差一致最小无偏估计Unit, 单元Unordered categories, 无序分类Upper limit, 上限Upward rank, 升秩Vague concept, 模糊概念Validity, 有效性V ARCOMP (Variance component estimation), 方差元素估计Variability, 变异性Variable, 变量Variance, 方差Variation, 变异Varimax orthogonal rotation, 方差最大正交旋转V olume of distribution, 容积W test, W检验Weibull distribution, 威布尔分布Weight, 权数Weighted Chi-square test, 加权卡方检验/Cochran检验Weighted linear regression method, 加权直线回归Weighted mean, 加权平均数Weighted mean square, 加权平均方差Weighted sum of square, 加权平方和Weighting coefficient, 权重系数Weighting method, 加权法W-estimation, W估计量W-estimation of location, 位置W估计量Width, 宽度Wilcoxon paired test, 威斯康星配对法/配对符号秩和检验Wild point, 野点/狂点Wild value, 野值/狂值Winsorized mean, 缩尾均值Withdraw, 失访Youden's index, 尤登指数Z test, Z检验Zero correlation, 零相关Z-transformation, Z变换。
DTMCPack Package Manual
Package 'DTMCPack'
October 12, 2022
Type: Package
Title: Suite of Functions Related to Discrete-Time Discrete-State Markov Chains
Version: 0.1-3
Date: 2022-04-10
Author: William Nicholson
Maintainer: William Nicholson <*********************>
Description: A series of functions which aid in both simulating and determining the properties of finite, discrete-time, discrete-state Markov chains. Two functions (DTMC, MultDTMC) produce n iterations of a Markov chain(s) based on transition probabilities and an initial distribution. The function FPTime determines the first passage time into each state. The function statdistr determines the stationary distribution of a Markov chain.
Imports: stats
License: GPL (>= 2)
LazyLoad: yes
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2022-04-10 02:12:30 UTC

R topics documented: DTMCPack-package, DTMC, FPTime, gr, hh, id, MultDTMC, statdistr

DTMCPack-package: Suite of functions related to discrete-time discrete-state Markov chains

Description
A series of functions which aid in both simulating and determining the properties of finite, discrete-time, discrete-state Markov chains. This package may be of use to practitioners who need to simulate Markov chains, but its primary intended audience is students of an introductory stochastic processes class studying class properties and long-run behavior patterns of Markov chains. Two functions (DTMC, MultDTMC) produce n iterations of a Markov chain(s) based on transition probabilities and an initial distribution. The function FPTime determines the first passage time into each state. The function statdistr determines the stationary distribution of a Markov chain. Updated 4/10/22 to maintain compatibility with R.

Details
Package: DTMCPack; Type: Package; Version: 0.1-2; Date: 2013-05-22; License: GPL (>= 2); LazyLoad: yes

Author(s)
Will Nicholson
Maintainer: <****************>

References
Sidney Resnick, "Adventures in Stochastic Processes"

Examples
data(gr)
data(id)
DTMC(gr, id, 10, trace=FALSE)

DTMC: Simulation of Discrete-Time/State Markov Chain

Description
This function simulates iterations through a discrete-time Markov chain. A Markov chain is a discrete Markov process with a state space that usually consists of positive integers. The advantage of a Markov process in a stochastic modeling context is that conditional dependencies over time are manageable because the probabilistic future of the process depends only on the present state, not the past. Therefore, if we specify an initial distribution as well as a transition matrix, we can simulate many periods into the future without any further information. Future transition probabilities can be computed by raising the transition matrix to higher and higher powers, but this method is not numerically tractable for large matrices. My method uses a uniform random variable to iterate a user-specified number of iterations of a Markov chain based on the transition probabilities and the initial distribution. A graphical output is also available in the form of a trace plot.

Usage
DTMC(tmat, io, N, trace)

Arguments
tmat: Transition matrix; rows must sum to 1 and the number of rows and columns must be equal.
io: Initial observation, 1 column, must sum to 1, must be the same length as the transition matrix.
N: Number of simulations.
trace: Optional trace plot, specify as TRUE or FALSE.

Value
Trace: Trace plot of the iterations through states (if selected).
State: An n x nrow(tmat) matrix detailing the iterations through each state of the Markov chain.

Author(s)
Will Nicholson

References
"Adventures in Stochastic Processes" by Sidney Resnick

See Also
MultDTMC
Examples
data(gr)
data(id)
DTMC(gr, id, 10, trace=TRUE)  # 10 iterations through "Gambler's ruin"

FPTime: First Passage Time

Description
This function uses the companion function MultDTMC to simulate several Markov chains to determine the first passage time into each state, i.e. the first time (after the initial iteration) that a specified state is reached in the Markov process. First passage time can be useful both for determining class properties and for approximating the stationary/invariant distribution of large Markov chains in which explicit matrix inversion is not computationally tractable.

Usage
FPTime(state, nchains, tmat, io, n)

Arguments
state: State in which you want to find the first passage time.
nchains: Number of chains you wish to simulate.
tmat: Transition matrix; must be a square matrix, rows must sum to 1.
io: Initial distribution.
n: Number of iterations to run for each Markov chain.

Value
fp1: Vector of length nchains which gives the first passage time into the specified state for each Markov chain.

Author(s)
Will Nicholson

See Also
DTMC

Examples
data(gr)
data(id)
FPTime(1, 10, gr, id, 10)  # First passage time into first state on Gambler's ruin

gr: Example Data Set: Gambler's Ruin on 4 States

Description
Motivating example: a random walk with absorbing boundaries on 4 states, analogous to a gambler at a casino. The 4 states represent a range of wealth. States 1 and 4 are absorbing, with state 1 = "broke" and state 4 = "wealthy enough to walk away"; the intermediate states 2 and 3 are transitory. It is assumed that the gambler bets all of his winnings in the intermediate states and has equal probability of winning and losing.

Examples
data(gr)
data(id)
DTMC(gr, id, 10, trace=FALSE)

hh: Harry the SemiPro

Description
Example Markov chain from page 139 of Resnick. The productivity of the protagonist, basketball player "Happy Harry", fluctuates between three states (0-1 points), (2-5 points), (5 or more points), and the transition between states can be modeled using a Markov chain. Used as a motivating example to calculate the long-run proportion of time spent in each state using the statdistr function.

Source
Sidney Resnick, "Adventures in Stochastic Processes"

Examples
data(hh)
statdistr(hh)

id: Initial Distribution

Description
A starting distribution for the gambler's ruin example, which assigns equal probability of starting in each state.

Examples
data(id)
data(gr)
DTMC(gr, id, 10, trace=FALSE)

MultDTMC: Multiple Discrete-Time Markov Chains

Description
An extension of the DTMC package which enables multiple concurrent Markov chain simulations. At this time, plotting is not enabled.

Usage
MultDTMC(nchains, tmat, io, n)

Arguments
nchains: Number of chains to simulate (integer).
tmat: Transition matrix.
io: Initial distribution.
n: Number of iterations to run each chain.

Value
chains: Returns nchains matrices of dimension nrow(tmat) by n which depict the transitions of the Markov chain.

Author(s)
Will Nicholson

See Also
DTMC

Examples
data(gr)
data(id)
MultDTMC(20, gr, id, 10)  # 20 chains with 10 iterations using the Gambler's ruin example.
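The simulation mechanism described for DTMC above, drawing a uniform random variable and comparing it against the cumulative transition probabilities of the current row, can be sketched in a few lines of plain R. This is only an illustration, not the package's internal code, and the 4-state matrix below is written out by hand to mirror the gambler's-ruin description of gr rather than loaded from the dataset.

# One transition step driven by a uniform random draw (illustrative only)
step_once <- function(tmat, state) {
  u <- runif(1)                    # uniform draw on [0, 1]
  cum <- cumsum(tmat[state, ])     # cumulative transition probabilities of the current row
  which(u <= cum)[1]               # first state whose cumulative probability reaches u
}

# Hand-written gambler's-ruin matrix: states 1 and 4 absorbing, equal win/lose probability
P <- matrix(c(1,   0,   0,   0,
              0.5, 0,   0.5, 0,
              0,   0.5, 0,   0.5,
              0,   0,   0,   1), nrow = 4, byrow = TRUE)

state <- 2
for (i in 1:10) state <- step_once(P, state)   # ten iterations, as in the examples above
state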
statdistr: Computing the Stationary Distribution

Description
This function computes the stationary distribution of a Markov chain (assuming one exists) using the formula from Proposition 2.14.1 of Resnick: pi = (1, ..., 1) (I - P + ONE)^(-1), where I is an m x m identity matrix, P is an m x m transition matrix, and ONE is an m x m matrix whose entries are all 1. This formula works well if the number of states is small, but since it directly computes the inverse of the matrix, it is not tractable for larger matrices. For larger matrices, 1/E(FPTime(n)) is a rough approximation for the long-run proportion of time spent in a state n.

Usage
statdistr(tmat)

Arguments
tmat: Markov chain transition matrix; must be a square matrix and rows must sum to 1.

Value
Returns a stationary distribution: an m x m matrix which represents the long-run percentage of time spent in each state.

Author(s)
Will Nicholson

References
Resnick, "Adventures in Stochastic Processes"

Examples
data(hh)
statdistr(hh)

Index (keywords: Markov Chains - DTMCPack-package; datasets - gr, hh, id): DTMC, DTMCPack-package, FPTime, gr, hh, id, MultDTMC, statdistr
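The formula above can be transcribed directly into base R. The sketch below is illustrative rather than the package's implementation, and the three-state matrix is a made-up example of the same shape as the hh data, not the packaged dataset.

# Direct transcription of pi = (1, ..., 1) (I - P + ONE)^(-1)  (illustrative, not the package code)
stationary_by_formula <- function(P) {
  m   <- nrow(P)
  I   <- diag(m)
  ONE <- matrix(1, m, m)             # m x m matrix of ones
  rep(1, m) %*% solve(I - P + ONE)   # row vector of long-run proportions
}

# Made-up three-state transition matrix (rows sum to 1), standing in for data like hh
P <- matrix(c(0.5, 0.3, 0.2,
              0.4, 0.4, 0.2,
              0.1, 0.4, 0.5), nrow = 3, byrow = TRUE)
stationary_by_formula(P)

For chains too large for the matrix inverse, the documentation's suggestion of 1/E(FPTime(n)) can be approximated by averaging the first passage times returned by FPTime() over many simulated chains.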
Econometrics Chinese-English Vocabulary
Controlled experiments Conventional depth Convolution Corrected factor Corrected mean Correction coefficient Correctness Correlation coefficient Correlation index Correspondence Counting Counts Covariance Covariant Cox Regression Criteria for fitting Criteria of least squares Critical ratio Critical region Critical value
Asymmetric distribution Asymptotic bias Asymptotic efficiency Asymptotic variance Attributable risk Attribute data Attribution Autocorrelation Autocorrelation of residuals Average Average confidence interval length Average growth rate BBB Bar chart Bar graph Base period Bayes' theorem Bell-shaped curve Bernoulli distribution Best-trim estimator Bias Binary logistic regression Binomial distribution Bisquare Bivariate Correlate Bivariate normal distribution Bivariate normal population Biweight interval Biweight M-estimator Block BMDP(Biomedical computer programs) Boxplots Breakdown bound CCC Canonical correlation Caption Case-control study Categorical variable Catenary Cauchy distribution Cause-and-effect relationship Cell Censoring
Mathematical Terminology and Translations
一、字母顺序表 (1)二、常用的数学英语表述 (7)三、代数英语(高端) (13)一、字母顺序表1、数学专业词汇Aabsolute value 绝对值 accept 接受 acceptable region 接受域additivity 可加性 adjusted 调整的 alternative hypothesis 对立假设analysis 分析 analysis of covariance 协方差分析 analysis of variance 方差分析 arithmetic mean 算术平均值 association 相关性 assumption 假设 assumption checking 假设检验availability 有效度average 均值Bbalanced 平衡的 band 带宽 bar chart 条形图beta-distribution 贝塔分布 between groups 组间的 bias 偏倚 binomial distribution 二项分布 binomial test 二项检验Ccalculate 计算 case 个案 category 类别 center of gravity 重心 central tendency 中心趋势 chi-square distribution 卡方分布 chi-square test 卡方检验 classify 分类cluster analysis 聚类分析 coefficient 系数 coefficient of correlation 相关系数collinearity 共线性 column 列 compare 比较 comparison 对照 components 构成,分量compound 复合的 confidence interval 置信区间 consistency 一致性 constant 常数continuous variable 连续变量 control charts 控制图 correlation 相关 covariance 协方差 covariance matrix 协方差矩阵 critical point 临界点critical value 临界值crosstab 列联表cubic 三次的,立方的 cubic term 三次项 cumulative distribution function 累加分布函数 curve estimation 曲线估计Ddata 数据default 默认的definition 定义deleted residual 剔除残差density function 密度函数dependent variable 因变量description 描述design of experiment 试验设计 deviations 差异 df.(degree of freedom) 自由度 diagnostic 诊断dimension 维discrete variable 离散变量discriminant function 判别函数discriminatory analysis 判别分析distance 距离distribution 分布D-optimal design D-优化设计Eeaqual 相等 effects of interaction 交互效应 efficiency 有效性eigenvalue 特征值equal size 等含量equation 方程error 误差estimate 估计estimation of parameters 参数估计estimations 估计量evaluate 衡量exact value 精确值expectation 期望expected value 期望值exponential 指数的exponential distributon 指数分布 extreme value 极值F factor 因素,因子 factor analysis 因子分析 factor score 因子得分 factorial designs 析因设计factorial experiment 析因试验fit 拟合fitted line 拟合线fitted value 拟合值 fixed model 固定模型 fixed variable 固定变量 fractional factorial design 部分析因设计 frequency 频数 F-test F检验 full factorial design 完全析因设计function 函数Ggamma distribution 伽玛分布 geometric mean 几何均值 group 组Hharmomic mean 调和均值 heterogeneity 不齐性histogram 直方图 homogeneity 齐性homogeneity of variance 方差齐性 hypothesis 假设 hypothesis test 假设检验Iindependence 独立 independent variable 自变量independent-samples 独立样本 index 指数 index of correlation 相关指数 interaction 交互作用 interclass correlation 组内相关 interval estimate 区间估计 intraclass correlation 组间相关 inverse 倒数的iterate 迭代Kkernal 核 Kolmogorov-Smirnov test柯尔莫哥洛夫-斯米诺夫检验 kurtosis 峰度Llarge sample problem 大样本问题 layer 层least-significant difference 最小显著差数 least-square estimation 最小二乘估计 least-square method 最小二乘法 level 水平 level of significance 显著性水平 leverage value 中心化杠杆值 life 寿命 life test 寿命试验 likelihood function 似然函数 likelihood ratio test 似然比检验linear 线性的 linear estimator 线性估计linear model 线性模型 linear regression 线性回归linear relation 线性关系linear term 线性项logarithmic 对数的logarithms 对数 logistic 逻辑的 lost function 损失函数Mmain effect 主效应 matrix 矩阵 maximum 最大值 maximum likelihood estimation 极大似然估计 mean squared deviation(MSD) 均方差 mean sum of square 均方和 measure 衡量 media 中位数 M-estimator M估计minimum 最小值 missing values 缺失值 mixed model 混合模型 mode 众数model 模型Monte Carle method 蒙特卡罗法 moving average 移动平均值multicollinearity 多元共线性multiple comparison 多重比较 multiple correlation 多重相关multiple correlation coefficient 复相关系数multiple correlation coefficient 多元相关系数 multiple regression analysis 多元回归分析multiple regression equation 多元回归方程 multiple response 多响应 multivariate analysis 多元分析Nnegative relationship 负相关 nonadditively 不可加性 nonlinear 非线性 nonlinear regression 非线性回归 noparametric tests 非参数检验 normal distribution 正态分布null hypothesis 零假设 number of cases 个案数Oone-sample 单样本 one-tailed test 单侧检验 
one-way ANOVA 单向方差分析 one-way classification 单向分类 optimal 优化的optimum allocation 最优配制 order 排序order statistics 次序统计量 origin 原点orthogonal 正交的 outliers 异常值Ppaired observations 成对观测数据paired-sample 成对样本parameter 参数parameter estimation 参数估计 partial correlation 偏相关partial correlation coefficient 偏相关系数 partial regression coefficient 偏回归系数 percent 百分数percentiles 百分位数 pie chart 饼图 point estimate 点估计 poisson distribution 泊松分布polynomial curve 多项式曲线polynomial regression 多项式回归polynomials 多项式positive relationship 正相关 power 幂P-P plot P-P概率图predict 预测predicted value 预测值prediction intervals 预测区间principal component analysis 主成分分析 proability 概率 probability density function 概率密度函数 probit analysis 概率分析 proportion 比例Qqadratic 二次的 Q-Q plot Q-Q概率图 quadratic term 二次项 quality control 质量控制 quantitative 数量的,度量的 quartiles 四分位数Rrandom 随机的 random number 随机数 random number 随机数 random sampling 随机取样random seed 随机数种子 random variable 随机变量 randomization 随机化 range 极差rank 秩 rank correlation 秩相关 rank statistic 秩统计量 regression analysis 回归分析regression coefficient 回归系数regression line 回归线reject 拒绝rejection region 拒绝域 relationship 关系 reliability 可*性 repeated 重复的report 报告,报表 residual 残差 residual sum of squares 剩余平方和 response 响应risk function 风险函数 robustness 稳健性 root mean square 标准差 row 行 run 游程run test 游程检验Sample 样本 sample size 样本容量 sample space 样本空间 sampling 取样 sampling inspection 抽样检验 scatter chart 散点图 S-curve S形曲线 separately 单独地 sets 集合sign test 符号检验significance 显著性significance level 显著性水平significance testing 显著性检验 significant 显著的,有效的 significant digits 有效数字 skewed distribution 偏态分布 skewness 偏度 small sample problem 小样本问题 smooth 平滑 sort 排序 soruces of variation 方差来源 space 空间 spread 扩展square 平方 standard deviation 标准离差 standard error of mean 均值的标准误差standardization 标准化 standardize 标准化 statistic 统计量 statistical quality control 统计质量控制 std. residual 标准残差 stepwise regression analysis 逐步回归 stimulus 刺激 strong assumption 强假设 stud. deleted residual 学生化剔除残差stud. residual 学生化残差 subsamples 次级样本 sufficient statistic 充分统计量sum 和 sum of squares 平方和 summary 概括,综述Ttable 表t-distribution t分布test 检验test criterion 检验判据test for linearity 线性检验 test of goodness of fit 拟合优度检验 test of homogeneity 齐性检验 test of independence 独立性检验 test rules 检验法则 test statistics 检验统计量 testing function 检验函数 time series 时间序列 tolerance limits 容许限total 总共,和 transformation 转换 treatment 处理 trimmed mean 截尾均值 true value 真值 t-test t检验 two-tailed test 双侧检验Uunbalanced 不平衡的 unbiased estimation 无偏估计 unbiasedness 无偏性 uniform distribution 均匀分布Vvalue of estimator 估计值 variable 变量 variance 方差 variance components 方差分量 variance ratio 方差比 various 不同的 vector 向量Wweight 加权,权重 weighted average 加权平均值 within groups 组内的ZZ score Z分数2. 
最优化方法词汇英汉对照表Aactive constraint 活动约束 active set method 活动集法 analytic gradient 解析梯度approximate 近似 arbitrary 强制性的 argument 变量 attainment factor 达到因子Bbandwidth 带宽 be equivalent to 等价于 best-fit 最佳拟合 bound 边界Ccoefficient 系数 complex-value 复数值 component 分量 constant 常数 constrained 有约束的constraint 约束constraint function 约束函数continuous 连续的converge 收敛 cubic polynomial interpolation method三次多项式插值法 curve-fitting 曲线拟合Ddata-fitting 数据拟合 default 默认的,默认的 define 定义 diagonal 对角的 direct search method 直接搜索法 direction of search 搜索方向 discontinuous 不连续Eeigenvalue 特征值 empty matrix 空矩阵 equality 等式 exceeded 溢出的Ffeasible 可行的 feasible solution 可行解 finite-difference 有限差分 first-order 一阶GGauss-Newton method 高斯-牛顿法 goal attainment problem 目标达到问题 gradient 梯度 gradient method 梯度法Hhandle 句柄 Hessian matrix 海色矩阵Independent variables 独立变量inequality 不等式infeasibility 不可行性infeasible 不可行的initial feasible solution 初始可行解initialize 初始化inverse 逆 invoke 激活 iteration 迭代 iteration 迭代JJacobian 雅可比矩阵LLagrange multiplier 拉格朗日乘子 large-scale 大型的 least square 最小二乘 least squares sense 最小二乘意义上的 Levenberg-Marquardt method 列文伯格-马夸尔特法line search 一维搜索 linear 线性的 linear equality constraints 线性等式约束linear programming problem 线性规划问题 local solution 局部解M medium-scale 中型的 minimize 最小化 mixed quadratic and cubic polynomialinterpolation and extrapolation method 混合二次、三次多项式内插、外插法multiobjective 多目标的Nnonlinear 非线性的 norm 范数Oobjective function 目标函数 observed data 测量数据 optimization routine 优化过程optimize 优化 optimizer 求解器 over-determined system 超定系统Pparameter 参数 partial derivatives 偏导数 polynomial interpolation method 多项式插值法Qquadratic 二次的 quadratic interpolation method 二次内插法 quadratic programming 二次规划Rreal-value 实数值 residuals 残差 robust 稳健的 robustness 稳健性,鲁棒性S scalar 标量 semi-infinitely problem 半无限问题 Sequential Quadratic Programming method 序列二次规划法 simplex search method 单纯形法 solution 解 sparse matrix 稀疏矩阵 sparsity pattern 稀疏模式 sparsity structure 稀疏结构 starting point 初始点 step length 步长 subspace trust region method 子空间置信域法 sum-of-squares 平方和 symmetric matrix 对称矩阵Ttermination message 终止信息 termination tolerance 终止容限 the exit condition 退出条件 the method of steepest descent 最速下降法 transpose 转置Uunconstrained 无约束的 under-determined system 负定系统Vvariable 变量 vector 矢量Wweighting matrix 加权矩阵3 样条词汇英汉对照表Aapproximation 逼近 array 数组 a spline in b-form/b-spline b样条 a spline of polynomial piece /ppform spline 分段多项式样条Bbivariate spline function 二元样条函数 break/breaks 断点Ccoefficient/coefficients 系数cubic interpolation 三次插值/三次内插cubic polynomial 三次多项式 cubic smoothing spline 三次平滑样条 cubic spline 三次样条cubic spline interpolation 三次样条插值/三次样条内插 curve 曲线Ddegree of freedom 自由度 dimension 维数Eend conditions 约束条件 input argument 输入参数 interpolation 插值/内插 interval取值区间Kknot/knots 节点Lleast-squares approximation 最小二乘拟合Mmultiplicity 重次 multivariate function 多元函数Ooptional argument 可选参数 order 阶次 output argument 输出参数P point/points 数据点Rrational spline 有理样条 rounding error 舍入误差(相对误差)Sscalar 标量 sequence 数列(数组) spline 样条 spline approximation 样条逼近/样条拟合spline function 样条函数 spline curve 样条曲线 spline interpolation 样条插值/样条内插 spline surface 样条曲面 smoothing spline 平滑样条Ttolerance 允许精度Uunivariate function 一元函数Vvector 向量Wweight/weights 权重4 偏微分方程数值解词汇英汉对照表Aabsolute error 绝对误差 absolute tolerance 绝对容限 adaptive mesh 适应性网格Bboundary condition 边界条件Ccontour plot 等值线图 converge 收敛 coordinate 坐标系Ddecomposed 分解的 decomposed geometry matrix 分解几何矩阵 diagonal matrix 对角矩阵 Dirichlet boundary conditions Dirichlet边界条件Eeigenvalue 特征值 elliptic 椭圆形的 error estimate 误差估计 exact solution 精确解Ggeneralized Neumann boundary condition 推广的Neumann边界条件 geometry 几何形状geometry description 
matrix 几何描述矩阵 geometry matrix 几何矩阵 graphical user interface(GUI)图形用户界面Hhyperbolic 双曲线的Iinitial mesh 初始网格Jjiggle 微调LLagrange multipliers 拉格朗日乘子Laplace equation 拉普拉斯方程linear interpolation 线性插值 loop 循环Mmachine precision 机器精度 mixed boundary condition 混合边界条件NNeuman boundary condition Neuman边界条件 node point 节点 nonlinear solver 非线性求解器 normal vector 法向量PParabolic 抛物线型的 partial differential equation 偏微分方程 plane strain 平面应变 plane stress 平面应力 Poisson's equation 泊松方程 polygon 多边形 positive definite 正定Qquality 质量Rrefined triangular mesh 加密的三角形网格 relative tolerance 相对容限 relative tolerance 相对容限 residual 残差 residual norm 残差范数Ssingular 奇异的二、常用的数学英语表述1.Logic∃there exist∀for allp⇒q p implies q / if p, then qp⇔q p if and only if q /p is equivalent to q / p and q are equivalent2.Setsx∈A x belongs to A / x is an element (or a member) of Ax∉A x does not belong to A / x is not an element (or a member) of AA⊂B A is contained in B / A is a subset of BA⊃B A contains B / B is a subset of AA∩B A cap B / A meet B / A intersection BA∪B A cup B / A join B / A union BA\B A minus B / the diference between A and BA×B A cross B / the cartesian product of A and B3. Real numbersx+1 x plus onex-1 x minus onex±1 x plus or minus onexy xy / x multiplied by y(x - y)(x + y) x minus y, x plus yx y x over y= the equals signx = 5 x equals 5 / x is equal to 5x≠5x (is) not equal to 5x≡y x is equivalent to (or identical with) yx ≡ y x is not equivalent to (or identical with) yx > y x is greater than yx≥y x is greater than or equal to yx < y x is less than yx≤y x is less than or equal to y0 < x < 1 zero is less than x is less than 10≤x≤1zero is less than or equal to x is less than or equal to 1| x | mod x / modulus xx 2 x squared / x (raised) to the power 2x 3 x cubedx 4 x to the fourth / x to the power fourx n x to the nth / x to the power nx −n x to the (power) minus nx (square) root x / the square root of xx 3 cube root (of) xx 4 fourth root (of) xx n nth root (of) x( x+y ) 2 x plus y all squared( x y ) 2 x over y all squaredn! n factorialx ^ x hatx ¯ x barx ˜x tildex i xi / x subscript i / x suffix i / x sub i∑ i=1 n a i the sum from i equals one to n a i / the sum as i runs from 1 to n of the a i4. Linear algebra‖ x ‖the norm (or modulus) of xOA →OA / vector OAOA ¯ OA / the length of the segment OAA T A transpose / the transpose of AA −1 A inverse / the inverse of A5. 
Functionsf( x ) fx / f of x / the function f of xf:S→T a function f from S to Tx→y x maps to y / x is sent (or mapped) to yf'( x ) f prime x / f dash x / the (first) derivative of f with respect to xf''( x ) f double-prime x / f double-dash x / the second derivative of f with r espect to xf'''( x ) triple-prime x / f triple-dash x / the third derivative of f with respect to xf (4) ( x ) f four x / the fourth derivative of f with respect to x∂f ∂ x 1the partial (derivative) of f with respect to x1∂ 2 f ∂ x 1 2the second partial (derivative) of f with respect to x1∫ 0 ∞the integral from zero to infinitylimx→0 the limit as x approaches zerolimx→0 + the limit as x approaches zero from abovelimx→0 −the limit as x approaches zero from belowlog e y log y to the base e / log to the base e of y / natural log (of) ylny log y to the base e / log to the base e of y / natural log (of) y一般词汇数学mathematics, maths(BrE), math(AmE)公理axiom定理theorem计算calculation运算operation证明prove假设hypothesis, hypotheses(pl.)命题proposition算术arithmetic加plus(prep.), add(v.), addition(n.)被加数augend, summand加数addend和sum减minus(prep.), subtract(v.), subtraction(n.)被减数minuend减数subtrahend差remainder乘times(prep.), multiply(v.), multiplication(n.)被乘数multiplicand, faciend乘数multiplicator积product除divided by(prep.), divide(v.), division(n.)被除数dividend除数divisor商quotient等于equals, is equal to, is equivalent to 大于is greater than小于is lesser than大于等于is equal or greater than小于等于is equal or lesser than运算符operator数字digit数number自然数natural number整数integer小数decimal小数点decimal point分数fraction分子numerator分母denominator比ratio正positive负negative零null, zero, nought, nil十进制decimal system二进制binary system十六进制hexadecimal system权weight, significance进位carry截尾truncation四舍五入round下舍入round down上舍入round up有效数字significant digit无效数字insignificant digit代数algebra公式formula, formulae(pl.)单项式monomial多项式polynomial, multinomial系数coefficient未知数unknown, x-factor, y-factor, z-factor 等式,方程式equation一次方程simple equation二次方程quadratic equation三次方程cubic equation四次方程quartic equation不等式inequation阶乘factorial对数logarithm指数,幂exponent乘方power二次方,平方square三次方,立方cube四次方the power of four, the fourth power n次方the power of n, the nth power开方evolution, extraction二次方根,平方根square root三次方根,立方根cube root四次方根the root of four, the fourth root n次方根the root of n, the nth root集合aggregate元素element空集void子集subset交集intersection并集union补集complement映射mapping函数function定义域domain, field of definition值域range常量constant变量variable单调性monotonicity奇偶性parity周期性periodicity图象image数列,级数series微积分calculus微分differential导数derivative极限limit无穷大infinite(a.) infinity(n.)无穷小infinitesimal积分integral定积分definite integral不定积分indefinite integral有理数rational number无理数irrational number实数real number虚数imaginary number复数complex number矩阵matrix行列式determinant几何geometry点point线line面plane体solid线段segment射线radial平行parallel相交intersect角angle角度degree弧度radian锐角acute angle直角right angle钝角obtuse angle平角straight angle周角perigon底base边side高height三角形triangle锐角三角形acute triangle直角三角形right triangle直角边leg斜边hypotenuse勾股定理Pythagorean theorem钝角三角形obtuse triangle不等边三角形scalene triangle等腰三角形isosceles triangle等边三角形equilateral triangle四边形quadrilateral平行四边形parallelogram矩形rectangle长length宽width附:在一个分数里,分子或分母或两者均含有分数。
Fully Automatic Segmentation of the Breast Region in Multi-dimensional Dynamic Contrast-Enhanced MRI
Fully Automatic Segmentation of the Breast Region in Multi-dimensional Dynamic Contrast-Enhanced MRI
黄丽娟; 厉力华; 范明

Abstract: This paper proposes a multi-dimensional segmentation method that combines the breast-horizontal and breast-sagittal planes. The method consists of three parts: segmentation of the horizontal plane, segmentation of the sagittal plane, and the combination of the two segmentations. First, the outer edge of the breast is obtained with a threshold, and the breast-pectoral interface is located from its characteristics using a gradient algorithm. Second, the sagittal segmentation curve is computed after pre-processing the image with bilateral filtering and edge extraction. Third, the sagittal segmentation results are mapped to the horizontal plane in proportion to the three-dimensional size, and optimized results are obtained by combining the results of the two planes according to the correlation between adjacent images. The segmentation method was tested on 24 DCE-MRI studies; compared with manual segmentation, the mean overlap and volume difference were 93.33% and 8.14%, respectively.
fitteR Package Manual
Package 'fitteR'
October 13, 2022
Type: Package
Title: Fit Hundreds of Theoretical Distributions to Empirical Data
Version: 0.2.0
Date: 2022-02-22
Author: Markus Boenn
Maintainer: Markus Boenn <*************************>
Description: Systematic fit of hundreds of theoretical univariate distributions to empirical data via maximum likelihood estimation. Fits are reported and summarized by a data.frame, a csv file or a 'shiny' app (here with additional features like visual representation of fits). All output formats provide assessment of goodness-of-fit by the following methods: Kolmogorov-Smirnov test, Shapiro-Wilks test, Anderson-Darling test.
License: GPL (>= 2)
Depends: R (>= 3.3.0), methods
Imports: stats, utils, DT, shiny, dplyr, maxLik, R.utils, tools
Suggests: actuar, ald, benchden, BiasedUrn, bridgedist, Davies, DiscreteInverseWeibull, DiscreteLaplace, DiscreteWeibull, emdbook, emg, EnvStats, evd, evir, ExtDist, extremefit, FAdist, FatTailsR, fBasics, fExtremes, flexsurv, gambin, gb, GenBinomApps, GeneralizedHyperbolic, gld, GLDEX, glogis, GSM, hermite, HyperbolicDist, KScorrect, loglognorm, marg, mc2d, minimax, msm, nCDunnett, NormalLaplace, normalp, ParetoPosStable, PearsonDS, poistweedie, polyaAeppli, qmap, QRM, ReIns, reliaR, Renext, revdbayes, RMKdiscrete, RMTstat, sadists, skellam, SkewHyperbolic, skewt, SMR, sn, stabledist, STAR, statmod, trapezoid, triangle, truncnorm, VarianceGamma
NeedsCompilation: no
Repository: CRAN
Date/Publication: 2022-02-22 12:00:02 UTC

R topics documented: ecdf2, fitter, printReport, pvalue2stars, supported.packages

ecdf2: Calculate cumulative density

Description
Calculates the cumulative density of a set of numeric values.

Usage
ecdf2(x, y=NULL)

Arguments
x: A numeric vector of which the ECDF should be calculated.
y: A numeric vector. See details for explanation.

Details
This function extends the functionality of the standard implementation of the ECDF. Sometimes it is desirable to get the ECDF from pre-tabulated values. For this, elements in x and y have to be linked to each other.

Value
A list

See Also
ecdf for the standard implementation of the ECDF

Examples
x <- rnorm(1000)
e <- ecdf2(x)
str(e)
plot(e)
plot(e$x, e$cs)
x <- sample(1:100, 1000, replace=TRUE)
plot(ecdf2(x))
tab <- table(x)
x <- unique(x)
lines(ecdf2(x, y=tab), col="green")

fitter: Fit distributions to empirical data

Description
Fits theoretical univariate distributions from the R universe to a given set of empirical observations.

Usage
fitter(X, dom="discrete", freq=NULL, R=100, timeout=5, posList=NULL, fast=TRUE)

Arguments
X: A numeric vector.
dom: A string specifying the domain of 'X'.
freq: The frequency of values in 'X'. See details.
R: An integer specifying the number of bootstraps. See details.
timeout: A numeric value specifying the maximum time spent for a fit.
posList: A list. See details.
fast: A logical. See details.

Details
This routine is the workhorse of the package. It takes empirical data and systematically tries to fit numerous distributions implemented in R packages to this data. Sometimes the empirical data is passed as a histogram. In this case 'X' takes the support and 'freq' takes the number of occurrences of each value in 'X'. Although not limited to it, this makes most sense for discrete data. If there is prior knowledge (or guessing) about candidate theoretical distributions, these can be specified by 'posList'. This parameter takes a list whose names are package names and whose items are character vectors containing names of the distributions (with prefix 'd'). If all distributions of a package should be applied, this vector is set to NA. Fitting of some distributions can be very slow. They can be skipped if 'fast' is set to TRUE.

Value
A list serving as an
A list serving as an unformatted report summarizing the fitting.

Note
To reduce the computational effort, usage of the parameter 'posList' is recommended. If not specified, the function will try to perform fits to distributions from _ALL_ packages listed in supported.packages.

Author(s)
Markus Boenn

See Also
printReport for post-processing of all fits

Examples
# continuous empirical data
x <- rnorm(1000, 50, 3)
if(requireNamespace("ExtDist")){
r <- fitter(x, dom="c", posList=list(stats=c("dexp"), ExtDist=c("dCauchy")))
}else{
r <- fitter(x, dom="c", posList=list(stats=c("dexp","dt")))
}
# discrete empirical data
x <- rnbinom(100, 0.5, 0.2)
r <- fitter(x, dom="dis", posList=list(stats=NA))

printReport — Prepare report of fitting

Description
Prepares a summary of the fitting as csv or shiny.

Usage
printReport(x, file=NULL, type="csv")

Arguments
x: The output of fitter
file: A character string giving the filename (including path) where the report should be printed
type: A character vector giving the desired type(s) of output

Details
The routine generates a simple csv file, which is the most useful output in terms of reusability. However, the shiny output is more powerful and provides an overview of the statistics and a figure for visual/manual exploration of the fits. Irrespective of the output type being "csv" or "shiny", the fit-table has the following format:
package: package name
distr: name of the distribution
nargs: number of parameters
args: names of parameters, comma-separated list
estimate: estimated values of parameters, comma-separated list
start: start values of parameters, comma-separated list
constraints: were constraints used, logical
runtime: the runtime in milliseconds
KS: test statistic $D$ of a two-sided, two-sample Kolmogorov-Smirnov test
pKS: $P$-value of a two-sided, two-sample Kolmogorov-Smirnov test
SW: test statistic of a Shapiro-Wilk test
pSW: $P$-value of a Shapiro-Wilk test

Value
A list with items
table: A data.frame with the same formatting as the resulting csv file.
shiny: if "shiny" %in% type: a shiny object

Author(s)
Markus Boenn

Examples
# discrete empirical data
x <- rnbinom(100, 0.5, 0.2)
r <- fitter(x, dom="dis", posList=list(stats=NA))
# create only shiny app
out <- printReport(r, type="shiny")
names(out)
## Not run: out$shiny
out <- printReport(r, type=c("csv")) # warning as file is NULL,
str(out)                             # but table (data.frame) returned

pvalue2stars — Significance stars

Description
Get stars indicating the magnitude of significance of a P-value.

Usage
pvalue2stars(x, ns="")
pvalues2stars(x, ns="")

Arguments
x: Numeric value or numeric vector, typically a P-value from a statistical test.
ns: A character string specifying how insignificant results should be marked. Empty string by default.

Details
While the function pvalue2stars accepts only a single value, the function pvalues2stars is a wrapper calling pvalue2stars for a vector. The range of x is not checked. However, a check is done whether x is numeric at all.

Value
String(s) of stars or points.

Author(s)
Markus Boenn

Examples
x <- runif(1, 0, 1)
pvalue2stars(x)
x <- 0.5
pvalue2stars(x, ns="not signif")
x <- c(0.0023, 0.5, 0.04)
pvalues2stars(x, ns="not signif")

supported.packages — Supported packages

Description
Get a list of currently supported packages.

Usage
supported.packages()

Details
Numerous R packages are supported, each providing a couple of theoretical statistical distributions for discrete or continuous data. Besides ordinary distributions like the normal, t, exponential, ..., some packages implement more exotic distributions like the truncated alpha.

Value
A character vector

Note
Some of the distributions are redundant, i.e. they are implemented in more than one package.
Author(s)
Markus Boenn

Examples
sp <- supported.packages()
head(sp)

Index: ecdf, ecdf2, fitter, printReport, pvalue2stars, pvalues2stars (pvalue2stars), supported.packages
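The function pages above can be chained into a single workflow. The following is a minimal sketch, assuming the fitteR package is installed and using only the calls documented above (fitter, printReport, pvalues2stars); the column name pKS follows the fit-table description, and the toy data and seed are arbitrary choices, not taken from the manual.

library(fitteR)

set.seed(1)
x <- rnbinom(200, 0.5, 0.2)                      # discrete toy data

# restrict the search to the 'stats' package to keep the run short
r <- fitter(x, dom = "dis", posList = list(stats = NA))

# plain data.frame report (a warning is expected because 'file' is NULL)
out <- printReport(r, type = "csv")
head(out$table)

# mark the Kolmogorov-Smirnov P-values with significance stars
pvalues2stars(out$table$pKS, ns = "n.s.")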
Texture Deconvolution for the Fourier-Based Analysis of Non-Rectangular Regions

A.A. Clark†, B.T. Thomas†, N.W. Campbell†, P. Greenway§
†Advanced Computing Research Centre, University of Bristol, Bristol, BS8 1UB, UK.
§British Aerospace PLC, P.O. Box 5, Filton, Bristol, BS12 7QW, UK.
Angus.Clark@

Abstract
Fourier analysis is often used as a tool to facilitate the extraction of texture information from image data. However, in situations where the texture patch does not entirely fill the region of analysis, information relating to the shape of the patch becomes entwined with its texture content, thus contaminating the Fourier spectrum and corrupting the texture information. We propose the use of a frequency deconvolution algorithm to remove the artefacts introduced by shape components, permitting the Fourier-based analysis of non-rectangular image patches. The algorithm is demonstrated on a texture recognition task involving the entire Brodatz album.

1 Introduction
Texture is an important source of information for a number of computer vision and graphics applications. Although it is difficult to construct a formal definition, texture is intuitively related to luminance variations in the image and can be characterised using properties such as regularity, coarseness, contrast, local and global structure, and directionality [4, 14]. A system that is required to deliver meaningful judgements concerning texture needs to be able to extract a description of the image data in a form that explicitly captures these properties.

It has been suggested that the Fourier domain may provide a more favourable environment in which to work, and a number of extensions, aimed at exploiting the Fourier representation of texture information, have been proposed [1, 6, 8]. However, many of these techniques have been demonstrated using tasks involving only rectangular texture patches. In real-world environments, we are confronted by a wide range of different object shapes, with further variation introduced by the effects of occlusion. Due to the nature of the Fourier analysis, if the texture patch we wish to analyse does not entirely fill the region of analysis, then information relating to the shape of the patch becomes entwined with its texture content, thus contaminating the Fourier spectrum and corrupting the texture information.

In this study, we investigate how the shape of the patch affects its Fourier representation, and what the repercussions might be for the overall analysis of the texture. We go on to propose the use of a frequency deconvolution algorithm to remove these effects and provide a shape-invariant Fourier description of the texture content of non-rectangular image patches. The approach is based on the CLEAN deconvolution algorithm [5], originally developed for aperture synthesis in radio astronomy. We adapt the algorithm, and show how it can be applied to the analysis of non-rectangular texture patches.

2 Texture Recognition
In this section a texture recognition system is described. The decision process is driven by features derived from the principal components analysis of the Fourier domain [8]. The system is used here to investigate the effects of shape variation on Fourier-based texture analysis and to demonstrate the ability of the deconvolution algorithm to remove these effects.

2.1 The Texture Database
The texture patches used in this study were taken from the Brodatz album [2]. The album consists of 112 pictures of natural textures and represents a diverse range of texture properties. Each picture in the album was scanned to produce a 640x640, 8-bit grey level image.
From each of these source images, nine non-overlapping 128x128 sub-regions were extracted to form a set of 1008 square texture patches. A similar set of non-rectangular patches was also constructed. Each patch here was extracted using a mask chosen at random from a set of 10 possible shapes. As before, nine non-overlapping sub-regions were extracted from each source image. To ensure a fair comparison between the two sets, the size of each shape mask was controlled such that the same surface area of texture was revealed as that of the square patches. Examples of the texture patches, taken from both sets, are shown in Figure 1.

Figure 1: Examples from the Texture Database: square patches (top); non-rectangular patches (bottom).

2.2 Texture Features
The texture features are derived from the principal components analysis (PCA) of the Fourier domain as described in [8]. For the purpose of completeness, we summarise the method here. From the set of square texture patches, ten percent ($p = 100$) are chosen at random to provide a "training" set for the analysis. For each image in the training set, the Discrete Fourier Transform (DFT) is computed and a 1-d vector, representing the magnitude components in raster-scan order, is formed: $x_i \in R^{n \times 1}$, $i = 1, \ldots, p$; $n = 128^2$, $p = 100$. To minimise the effects caused by boundary discontinuities, each image was mixed with a Gaussian window ($\sigma = 24$) prior to the transform. The covariance matrix is calculated from

$C = \frac{1}{p} \sum_{i=1}^{p} x_i x_i^T$

The eigenvectors, $q_j$, and corresponding eigenvalues, $\lambda_j$, are determined by the characteristic equation:

$C q_j = X X^T q_j = \lambda_j q_j$   (1)

Using the fact that there can be at most $p$ eigenvectors and $p \ll n$, computation can be saved by first performing the eigen analysis on the inner product:

$X^T X u_j = \lambda_j u_j$   (2)

Pre-multiplying both sides of (2) by $X$ gives:

$X X^T (X u_j) = \lambda_j (X u_j)$   (3)

From Equation (3), it is observed that the eigenvectors for the original covariance matrix, $X X^T$, are given by $q_j = X u_j$. The eigenvectors represent an alternative orthogonal basis whereby the importance of each axis, in terms of the variance it accounts for, is given by its corresponding eigenvalue. The vectors, ordered by their eigenvalues, reveal the principal modes of variation. The texture features are formed by projecting the DFT magnitudes onto the principal modes. In the experiments reported here, the first 40 modes were used as texture features. Accounting for over 85% of the variance found in the training set, this choice provides a compact, yet expressive representation of power spectra typical of those generated by textures.
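As a concrete illustration of this feature-extraction step, the following R sketch mirrors Equations (1) to (3): the eigen-analysis is carried out on the small p x p inner-product matrix and the eigenvectors of the full covariance matrix are recovered as q_j = X u_j. The patch size, window width, training-set size, and number of modes follow the values quoted above; the toy patches, variable names, and normalisation details are illustrative assumptions, not taken from the paper.

n.side <- 128                                # patch size
p      <- 100                                # training patches (ten percent)
sigma  <- 24                                 # Gaussian window width

# toy training patches (in practice: 128x128 Brodatz sub-images)
patches <- replicate(p, matrix(rnorm(n.side^2), n.side, n.side), simplify = FALSE)

# Gaussian window to soften boundary discontinuities
g1  <- exp(-((seq_len(n.side) - (n.side + 1) / 2)^2) / (2 * sigma^2))
win <- outer(g1, g1)

dft.mag <- function(im) as.vector(abs(fft(im * win)))   # magnitudes, raster order

X <- sapply(patches, dft.mag)                # n x p matrix, one column per patch
X <- X - rowMeans(X)                         # centre the data

e <- eigen(crossprod(X), symmetric = TRUE)   # eigen-analysis of the p x p matrix X'X
Q <- X %*% e$vectors[, 1:40]                 # q_j = X u_j: first 40 modes of X X'
Q <- sweep(Q, 2, sqrt(colSums(Q^2)), "/")    # normalise each mode

# project the DFT magnitudes of a patch onto the 40 modes: its feature vector
feat <- as.vector(t(Q) %*% dft.mag(patches[[1]]))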
3 Shape Variation on Texture Analysis

3.1 Experiments
A recognition task is set up to determine the effect of introducing arbitrarily-shaped image patches on Fourier-based texture analysis. For a given patch, the system is required to decide which of the 112 Brodatz textures it most closely corresponds to. The classification is performed using a k-nearest-neighbour paradigm, based on the Euclidean distance between feature vectors. For each of the 112 textures, 4 member patches are used to form an estimate of the class distributions over the feature space. These patches are excluded from the rest of the analysis. In the experiments reported here, k was set to 3. The system was applied to both sets of texture patches in turn.

3.2 Results and Discussion
The overall classification accuracy attained for each set is shown in Figure 2, with individual results given for a selection of texture classes.

Figure 2: Effects of shape variation on texture recognition

For the set of square texture patches, over 76% were correctly classified, achieving a level of performance similar to those reported by other Fourier-based techniques. The fact that a relatively simple classification paradigm was used suggests that the feature vector is largely responsible for the performance of the system. On closer inspection of the misclassifications, further evidence is revealed supporting the ability of the feature vector to capture perceptual concepts of texture. For example, although one D022 patch was misclassified as D003, both are examples of snake skin. It is, in fact, reassuring to find that these two perceptually similar textures generate measurably close feature vectors, despite the confusion and misclassification that arises. Further incidents of misclassification were found for patches belonging to large-scale textures. It was observed that in these cases, the underlying pattern often spans the entire source image, and as a result, each extracted patch only contains part of the texture. Given these circumstances, it is unreasonable to expect the system to deliver an accurate classification when only partial information is available. Indeed, it is perhaps undesirable to match these patches, which appear so unrelated, even if they do originate from the same source. As an overall observation, it was found that regular textures enjoyed a slight advantage in classification accuracy.

Turning to the results found for the set of non-rectangular texture patches, it is clear that introducing shape variations can severely disturb the analysis of texture patches. The overall classification dropped by over 45% to only 30% correctly classified, with deterioration in accuracy noted across almost all classes. The texture features have been shown to be vulnerable to the artefacts introduced by the shape components, responding to properties relating to the shape of the patch instead of its texture content. Although we have used PCA to extract texture features from the Fourier domain, it is likely that other Fourier-based techniques would suffer a similar fate when confronted by arbitrarily-shaped regions.
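For completeness, here is a small R sketch of the classification paradigm of Section 3.1: a 3-nearest-neighbour majority vote on Euclidean distance between feature vectors, with a handful of reference patches per class. The helper name, class labels, and toy feature vectors are placeholders, not material from the paper.

# k-nearest-neighbour classifier (k = 3, Euclidean distance)
# 'train' and 'test' hold one feature vector per row; 'train.lab' holds class labels
knn.classify <- function(train, train.lab, test, k = 3) {
  apply(test, 1, function(f) {
    d  <- sqrt(colSums((t(train) - f)^2))     # distances to all reference patches
    nn <- train.lab[order(d)[seq_len(k)]]     # labels of the k nearest neighbours
    names(which.max(table(nn)))               # majority vote
  })
}

# toy usage with random 40-d features for two Brodatz-style classes
train <- rbind(matrix(rnorm(4 * 40, 0), 4, 40),
               matrix(rnorm(4 * 40, 3), 4, 40))
train.lab <- rep(c("D001", "D002"), each = 4)
test <- matrix(rnorm(2 * 40, 3), 2, 40)
knn.classify(train, train.lab, test)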
4 Frequency Deconvolution

4.1 Motivation
The experiment conducted in the previous section provides a vivid demonstration of the detrimental effects that shape variations can have on the analysis of texture patches. If Fourier-based techniques are to deliver meaningful judgements concerning the texture content of non-rectangular regions, then a spectral representation of the texture needs to be extracted that is invariant to the shape of the patch. This remains a challenging undertaking for two reasons. Firstly, a fundamental requirement of the DFT dictates that data samples must be contiguous, which for the case of two-dimensional images corresponds to a rectangular region of pixel intensities. Secondly, Fourier analysis regards spectral composition as a global phenomenon, and components detected via the transform are assumed to exist over the full extent of the image. In the remainder of this section, we propose the use of a deconvolution algorithm to remove the artefacts introduced by the shape components, and thus provide a shape-invariant Fourier description of the texture content of non-rectangular image patches.

4.2 Background
Deconvolution is based on the following idea. A non-rectangular texture patch, $i(x,y)$, is assumed to be the product of applying a binary shape mask, $w(x,y)$, to an uncorrupted version of the texture, $t(x,y)$. By the convolution theorem, we can write:

$i(x,y) = t(x,y)\,w(x,y) \;\overset{\mathrm{DFT}}{\longleftrightarrow}\; T(u,v) \ast W(u,v) = I(u,v)$   (4)

If this convolution can in some way be undone, then we can get back to the Fourier representation of the uncorrupted version of the texture:

$T(u,v) = I(u,v) \ast^{-1} W(u,v)$   (5)

where $A \ast^{-1} B$ denotes the deconvolution of $A$ with respect to $B$.

Deconvolution, however, is ill-posed. The solution is not unique, and the problem becomes that of choosing a plausible value from the set of possible solutions. Deconvolution algorithms attempt to make an informed guess as to the value of unobservable samples on the basis of what information is available.

A number of deconvolution algorithms have been proposed. Maximum Entropy, Lucy-Richardson, and CLEAN are amongst the most commonly used techniques. Maximum entropy methods (e.g. [12]) attempt to select the solution that is the most consistent with the observable data, while simultaneously providing maximum entropy. Here, entropy is defined as an abstract concept which, when maximised, produces a positive image with a compressed range of pixel values. This effectively introduces a smoothness constraint which favours the image that assumes the least about missing data. The Lucy-Richardson deconvolution algorithm [7] employs Bayesian inference to maximise the likelihood of the reconstructed image. Successive estimates are applied to iteratively improve the appearance of the image.

Our approach is based on the CLEAN deconvolution algorithm, originally developed for aperture synthesis in radio astronomy [5]. Various refinements, initiated by [3], have since been proposed [10], and boast improved computational efficiency when dealing with large synthesis arrays. Others [9, 13] have tailored the algorithm towards the analysis of specialised forms of corrupted data. Our specific interest lies in the methodology surrounding CLEAN deconvolution. The algorithm focuses on restoring individual spectral elements and has since been shown to be equivalent to a least-squares fitting of sinusoids in the spatial domain [11]. This approach is particularly appealing for our work on texture analysis, as the reconstruction is based on the very elements we wish to examine.
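Equation (4) can be checked numerically. In the R sketch below, a single-harmonic texture is masked in the spatial domain and its DFT is compared with the mask spectrum shifted to plus and minus the texture frequency and scaled by a = A/2 (amplitude 1, zero phase), which is what the circular convolution of the two spectral peaks of a cosine with W predicts. The grid size, mask shape, and frequencies are arbitrary choices for illustration only.

N  <- 64; uh <- 5; vh <- 3                    # texture frequency (uh, vh)
xs <- 0:(N - 1)
t.xy <- cos(2 * pi * outer(xs, xs, function(x, y) uh * x + vh * y) / N)
w.xy <- matrix(0, N, N); w.xy[17:48, 20:45] <- 1   # arbitrary binary shape mask

W.spec <- fft(w.xy)                           # mask spectrum W(u,v)
I.spec <- fft(t.xy * w.xy)                    # spectrum of the masked patch

shift <- function(M, du, dv)                  # circular shift of a spectrum
  M[(xs - du) %% N + 1, (xs - dv) %% N + 1]

pred <- 0.5 * (shift(W.spec, uh, vh) + shift(W.spec, -uh, -vh))
max(Mod(I.spec - pred))                       # ~ 1e-12: masking = spectral convolution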
4.3 Deconvolution Algorithm
It is assumed that the pure texture can be modelled by a number of harmonic components. The deconvolution proceeds by detecting which component, when corrupted by the shape mask, would provide the best explanation of the patterns observed in the Fourier spectra. The component is then re-located to the clean spectrum, and its footprint erased from the corrupted spectrum. The process is repeated until only noise residuals are left in the corrupted spectrum. We now formulate the algorithm for the case of 1-d signals, and later extend the result to accommodate 2-d texture images.

Consider the texture pattern, $t(x)$, comprising a single sinusoidal component, having amplitude $A$, frequency $\hat{u}$, and phase $\phi$:

$t(x) = A \cos(2\pi \hat{u} x + \phi)$   (6)

The uncorrupted Fourier representation, $T(u)$, of this texture pattern is given by:

$T(u) = a\,\delta(u - \hat{u}) + \bar{a}\,\delta(u + \hat{u})$   (7)

where $a = \frac{A}{2} e^{i\phi}$ and $\bar{a}$ is its complex conjugate. Convolving (7) with the mask spectrum gives the corrupted representation:

$I(u) = a\,W(u - \hat{u}) + \bar{a}\,W(u + \hat{u})$   (8)

We can therefore determine the frequency of the original sinusoid, $\hat{u}$, by locating the peak component pair of $I(u)$. Before we can recover the amplitude and phase information, we need to consider how the spectral leakage caused by one peak contributes to the overall response observed at the other peak. Substituting $u = \hat{u}$ in Equation (8) gives:

$I(\hat{u}) = a\,W(0) + \bar{a}\,W(2\hat{u})$   (9)

Using the conjugate symmetry, we rearrange to give:

$a = \dfrac{I(\hat{u})\,W(0) - \bar{I}(\hat{u})\,W(2\hat{u})}{W(0)^2 - |W(2\hat{u})|^2}$

The same reasoning carries over to 2-d texture images, where the $i$-th clean component is located at the frequency pair $(\hat{u}_i, \hat{v}_i)$. Cleaning then proceeds iteratively:

1. Locate the peak component pair of the residual spectrum at $(\hat{u}_i, \hat{v}_i)$ and estimate its complex amplitude $a_i$ as above, scaled by the clean gain.
2. Generate the $i$-th residual spectrum, $R_i(u,v) = R_{i-1}(u,v) - a_i\,W(u-\hat{u}_i, v-\hat{v}_i) - \bar{a}_i\,W(u+\hat{u}_i, v+\hat{v}_i)$.
3. If the peak residual of $R_i(u,v)$ falls below the noise threshold, or if the maximum number of iterations has been reached, then proceed to step 4. If these termination conditions are not met, then continue cleaning from step 1.
4. Construct the clean spectrum, $C$, from the $K$ clean components: $C(u,v) = \sum_{i=1}^{K} \left[ a_i\,\delta(u-\hat{u}_i, v-\hat{v}_i) + \bar{a}_i\,\delta(u+\hat{u}_i, v+\hat{v}_i) \right]$.

The algorithm is demonstrated in Figure 3. Each of the non-rectangular texture patches has been deconvolved, and the result returned to the spatial domain for inspection. The left texture patch comprises a single harmonic component, and can be fully reconstructed by a single iteration (assuming the clean gain is set to 1.0), with zero error in the remaining residual. The other patches exhibit more realistic textures, comprising many spectral components. Although several iterations are required, the final reconstruction is impressive.

Figure 3: Texture Deconvolution. Arbitrarily-shaped texture patches (top), deconvolved to recover the pure texture image (bottom).
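To make the cleaning loop concrete, here is a compact 1-d sketch in R. The amplitude estimate follows the rearranged expression for a above and the footprint removal follows step 2; the gain of 0.5 and the 500-iteration cap echo the values quoted in Section 5.1. The test signal, mask, and stopping rule are illustrative assumptions rather than the authors' implementation.

set.seed(1)
N <- 256
x <- 0:(N - 1)
t.x <- 1.0 * cos(2 * pi * 12 * x / N + 0.7) +
       0.6 * cos(2 * pi * 31 * x / N - 1.1)     # "pure" texture: two harmonics
w.x <- as.numeric(x >= 40 & x < 200)            # binary mask (non-rectangular patch)
R.u <- fft(t.x * w.x)                           # corrupted spectrum I(u)
W.u <- fft(w.x)                                 # mask spectrum W(u)

g <- 0.5; max.iter <- 500                       # clean gain and iteration cap
noise <- 1e-3 * max(Mod(R.u))                   # crude noise threshold
comps <- list()

for (it in seq_len(max.iter)) {
  pos <- 2:(N / 2)                              # positive, non-DC frequencies
  u.hat <- pos[which.max(Mod(R.u[pos]))] - 1    # zero-based peak frequency
  if (Mod(R.u[u.hat + 1]) < noise) break        # only residual noise is left
  W0  <- Re(W.u[1])                             # W(0)
  W2u <- W.u[(2 * u.hat) %% N + 1]              # W(2*u.hat), circular index
  a   <- g * (R.u[u.hat + 1] * W0 - Conj(R.u[u.hat + 1]) * W2u) /
             (W0^2 - Mod(W2u)^2)                # gain-scaled amplitude estimate
  # step 2: erase the footprint of this component from the residual spectrum
  R.u <- R.u - a * W.u[((x - u.hat) %% N) + 1] -
               Conj(a) * W.u[((x + u.hat) %% N) + 1]
  comps[[length(comps) + 1]] <- list(u = u.hat, a = a)
}

# rebuild the clean texture in the spatial domain from the clean components
t.clean <- Reduce(`+`, lapply(comps, function(cmp)
  2 * Mod(cmp$a) * cos(2 * pi * cmp$u * x / N + Arg(cmp$a))))
max(abs(t.clean - t.x))                         # should be small relative to the signal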
5 Deconvolution in Texture Analysis

5.1 Experiment
To assess the ability of the deconvolution algorithm to remove the detrimental effects of shape variation on the analysis of texture, the experiment described in Section 3 was repeated. Deconvolution is applied to the Fourier transform prior to the projection of the magnitude components onto the texture features. The gain, g, and the noise threshold, t, were set to 0.5 and 1.0 respectively. The maximum number of iterations, I, was set to 500, although in practice this limit was rarely reached. In all other respects, the experiment was conducted as before.

5.2 Results and Discussion
The results of this experiment, together with the previous findings, are shown in Figure 4. It was found that the recognition system, applied to non-rectangular patches, performed significantly better when augmented with the deconvolution algorithm. Overall classification accuracy was improved by 40% to over 70%. As predicted, the greatest improvements were found for patches exhibiting regular textures. It is perhaps fitting that the deconvolution algorithm is of most benefit for those textures that Fourier analysis is best equipped to deal with.

Figure 4: Texture recognition of deconvolved patches

The results suggest that the deconvolution algorithm is capable of removing the artefacts introduced by shape components, and allows the extraction of a set of texture features that is largely invariant to shape variations. By including the deconvolution algorithm, the performance of the system is restored to a level approaching that attained for square patches. We anticipate that similar benefits would be observed for other applications that make use of Fourier-based techniques for the analysis of image regions (e.g. region-based image coding, object recognition, extraction of texture maps from real image data, etc.).

6 Conclusions
Fourier analysis is a valuable tool for extracting texture information from image data, and a number of extensions have been proposed in the literature. However, due to the nature of the Fourier transform, many of these techniques are only applicable to the analysis of rectangular image patches. In situations where the texture patch does not entirely fill the region of analysis, information relating to the shape of the patch becomes entwined with its texture content, thus contaminating the Fourier spectrum and corrupting the texture information. We have shown that Fourier-based texture features are indeed vulnerable to the artefacts introduced by shape components, responding to properties relating to the shape of the patch instead of its texture content. We have proposed the use of a deconvolution algorithm to remove these artefacts. The algorithm provides a shape-invariant Fourier description of the patch, permitting the extraction of robust texture features from non-rectangular image patches. In a texture recognition task involving the entire Brodatz set, classification performance was restored to a level similar to that attained for rectangular patches. We anticipate that similar benefits would be observed for other applications that make use of Fourier-based techniques.

References
[1] Alan Conrad Bovik, Marianna Clark, and Wilson S. Geisler. Multichannel texture analysis using localized spatial filters. IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-12(1):55–73, January 1990.
[2] Phil Brodatz. Textures: A Photographic Album for Artists and Designers. Dover, NY, 1966.
[3] B.G. Clark. An efficient implementation of the algorithm CLEAN. Astronomy and Astrophysics, 89:377–378, 1980.
[4] Robert M. Haralick. Statistical and structural approaches to texture. Proceedings of the IEEE, 67(5):786–804, May 1979.
[5] J.A. Hogbom. Aperture synthesis with a non-regular distribution of interferometer baselines. Astronomy and Astrophysics, Supplement, 15:417–426, 1974.
[6] Fang Liu and Rosalind W. Picard. Periodicity, directionality, and randomness: Wold features for image modeling and retrieval. IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-18(7):722–733, July 1996.
[7] L.B. Lucy. An iterative technique for the rectification of observed images. The Astronomical Journal, 79(6):745–754, 1974.
[8] Rosalind W. Picard and T. Kabir. Finding similar patterns in large image databases. In Proc. Int. Conf. Acoust., Speech, Signal Proc., volume V, pages 161–164, Minneapolis, MN, April 1993.
[9] David H. Roberts, Joseph Lehár, and John W. Dreher. Time series analysis with CLEAN. I. Derivation of a spectrum. The Astronomical Journal, 93(4):968–989, 1987.
[10] F.R. Schwab. Relaxing the isoplanatism assumption in self-calibration: Applications to low-frequency radio interferometry. The Astronomical Journal, 89:1076–1081, 1984.
[11] U.J. Schwarz. Mathematical-statistical description of the iterative beam removing technique (Method CLEAN). Astronomy and Astrophysics, 65:345–356, 1978.
[12] J. Skilling and R.K. Bryan. Maximum entropy image reconstruction: General algorithm. Monthly Notices of the Royal Astronomical Society, 211:111–124, 1984.
[13] D.G. Steer, P.E. Dewdney, and M.R. Ito. Enhancements to the deconvolution algorithm CLEAN. Astronomy and Astrophysics, 137:159–165, 1984.
[14] Hideyuki Tamura, Shunji Mori, and Takashi Yamawaki. Textural features corresponding to visual perception. IEEE Trans. Systems, Man, and Cybernetics, 8(6):460–473, 1978.
Function-to-form mapping
e.g. What did you do the day before?
• I play soccer. → a beginning learner: conveying the past meaning relies entirely on context.
• Yesterday I play soccer. → an advanced learner: the past meaning is marked lexically, by a time adverb.
Conclusion
According to this approach, language acquisition importantly involves developing linguistic forms to fulfill semantic or pragmatic functions. Grammaticalization is driven by communicative need and use, and is related to the development of more efficient cognitive processing as part of language learning.
Additional developmental contrasts:
• From topic-comment to subject-predicate structure.
• From loose conjunction (with elements merely juxtaposed or connected with and) to tight subordination (with elements connected by words like since or because).
• From reliance on context to expression that relies more on formal grammatical elements.