A Track-Before-Detect Algorithm for Multiple Dim Targets Based on Gaussian Particle PHD Filtering

Li Cuiyun; Cao Xiaonan; Liao Liangxiong; Jiang Zhou

[Abstract] To address the low tracking accuracy and high computational complexity of existing track-before-detect (TBD) algorithms for multiple dim targets, a new TBD algorithm based on the probability hypothesis density (PHD) is proposed. The proposed algorithm uses a Gaussian particle filter to propagate each Gaussian component of the PHD recursively, accumulates target energy over multiple frames, and extracts the means of the Gaussian components as the target states, thereby detecting and tracking multiple dim targets. Tracking of an unknown number of dim targets is performed within the random finite set filtering framework; the algorithm fully exploits the nonlinear estimation capability of the particle filter while avoiding the loss of tracking accuracy that conventional algorithms incur by extracting target states through fuzzy clustering. Simulation results show that, compared with conventional methods, the proposed algorithm reduces computational complexity while achieving better real-time detection and tracking performance for multiple dim infrared targets.

In order to avoid the low tracking accuracy and high complexity problems of conventional algorithms, a novel track-before-detect algorithm based on the probability hypothesis density (PHD) filter is proposed for the tracking and detection of multiple dim targets in infrared images. With the Gaussian particle filter, the Gaussian components in the PHD can be operated on recursively and extracted as the states of targets. The algorithm realizes the tracking and detection of multiple dim targets by energy accumulation. With the theory of the random finite set, the algorithm performs tracking of an unknown number of multiple dim targets. It not only makes use of the nonlinear estimation ability of the particle filter but also avoids the tracking inaccuracy brought about by fuzzy clustering. Simulation results with infrared images show that the proposed algorithm has lower complexity and better performance in the detection and tracking of multiple dim targets than the conventional algorithm.

[Journal] Systems Engineering and Electronics
[Year (Volume), Issue] 2015 (000) 004
[Pages] 6 pages (P740-745)
[Keywords] track-before-detect; probability hypothesis density; Gaussian particle filter; infrared image; multitarget tracking
[Authors] Li Cuiyun; Cao Xiaonan; Liao Liangxiong; Jiang Zhou
[Affiliations] School of Electronic Engineering, Xidian University, Xi'an 710071, Shaanxi, China; Unit 95972 of the PLA, Jiuquan 735018, Gansu, China
[Language] Chinese
[CLC Number] TN953

In recent years, weapon systems based on infrared detection and imaging have become a research focus in the military field of many countries.
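The state-extraction step in the abstract (taking the means of high-weight Gaussian components as the target states) can be sketched in a few lines. This is a generic illustration of the usual GM-PHD extraction rule, not the authors' code; the 0.5 weight threshold is an assumed value, a common default in the GM-PHD literature:

```python
import numpy as np

def extract_states(weights, means, threshold=0.5):
    """Return the means of Gaussian components whose weight exceeds
    the threshold; each retained component is reported as one target.

    weights : (N,) component weights (their sum ~ expected target count)
    means   : (N, d) component means (candidate target states)
    """
    weights = np.asarray(weights)
    means = np.asarray(means)
    keep = weights > threshold   # assumed threshold, not from the paper
    return means[keep]

# Example: three components, two strong enough to declare as targets.
w = [0.93, 0.07, 1.41]
m = [[10.0, 1.0], [55.0, -2.0], [30.0, 0.5]]
print(extract_states(w, m))      # -> states of components 0 and 2
```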
PValue - 用于判定拟合效果的重要指标
What is a P-value?

I have found that many students are unsure about the interpretation of P-values and other concepts related to tests of significance. These ideas are used repeatedly in various applications, so it is important that they be understood. I will explain the concepts in general terms first, then their application in the problem of assessing normality.

We wish to test a null hypothesis against an alternative hypothesis using a dataset. The two hypotheses specify two statistical models for the process that produced the data. The alternative hypothesis is what we expect to be true if the null hypothesis is false. We cannot prove that the alternative hypothesis is true, but we may be able to demonstrate that the alternative is much more plausible than the null hypothesis given the data. This demonstration is usually expressed in terms of a probability (a P-value) quantifying the strength of the evidence against the null hypothesis in favor of the alternative.

We ask whether the data appear to be consistent with the null hypothesis or whether it is unlikely that we would obtain data of this kind if the null hypothesis were true, assuming that at least one of the two hypotheses is true. We address this question by calculating the value of a test statistic, i.e., a particular real-valued function of the data. To decide whether the value of the test statistic is consistent with the null hypothesis, we need to know what sampling variability to expect in our test statistic if the null hypothesis is true. In other words, we need to know the null distribution, the distribution of the test statistic when the null hypothesis is true. In many applications, the test statistic is defined so that its null distribution is a "named" distribution for which tables are widely accessible; e.g., the standard normal distribution, the Binomial distribution with n = 100 and p = 1/2, the t distribution with 4 degrees of freedom, the chi-square distribution with 23 degrees of freedom, the F distribution with 2 and 20 degrees of freedom.

Now, given the value of the test statistic (a number), and the null distribution of the test statistic (a theoretical distribution usually represented by a probability density), we want to see whether the test statistic is in the middle of the distribution (consistent with the null hypothesis) or out in a tail of the distribution (making the alternative hypothesis seem more plausible). Sometimes we will want to consider the right-hand tail, sometimes the left-hand tail, and sometimes both tails, depending on how the test statistic and alternative hypothesis are defined. Suppose that large positive values of the test statistic seem more plausible under the alternative hypothesis than under the null hypothesis. Then we want a measure of how far out our test statistic is in the right-hand tail of the null distribution. The P-value provides a measure of this distance. The P-value (in this situation) is the probability to the right of our test statistic calculated using the null distribution. The further out the test statistic is in the tail, the smaller the P-value, and the stronger the evidence against the null hypothesis in favor of the alternative.

The P-value can be interpreted in terms of a hypothetical repetition of the study. Suppose the null hypothesis is true and a new dataset is obtained independently of the first dataset but using the same sampling procedure. If the new dataset is used to calculate a new value of the test statistic (same formula but new data), what is the probability that the new value will be further out in the tail (assuming a one-tailed test) than the original value? This probability is the P-value.

The P-value is often incorrectly interpreted as the probability that the null hypothesis is true. Try not to make this mistake. In a frequentist interpretation of probability, there is nothing random about whether the hypothesis is true; the randomness is in the process generating the data. One can interpret "the probability that the null hypothesis is true" using subjective probability, a measure of one's belief that the null hypothesis is true. One can then calculate this subjective probability by specifying a prior probability (subjective belief before looking at the data) that the null hypothesis is true, and then use the data and the model to update one's subjective probability. This is called the Bayesian approach because Bayes' Theorem is used to update subjective probabilities to reflect new information.

When reporting a P-value to persons unfamiliar with statistics, it is often necessary to use descriptive language to indicate the strength of the evidence. I tend to use the following sort of language. Obviously the cut-offs are somewhat arbitrary and another person might use different language.

P > 0.10: No evidence against the null hypothesis. The data appear to be consistent with the null hypothesis.
0.05 < P < 0.10: Weak evidence against the null hypothesis in favor of the alternative.
0.01 < P < 0.05: Moderate evidence against the null hypothesis in favor of the alternative.
0.001 < P < 0.01: Strong evidence against the null hypothesis in favor of the alternative.
P < 0.001: Very strong evidence against the null hypothesis in favor of the alternative.

In using this kind of language, one should keep in mind the difference between statistical significance and practical significance. In a large study one may obtain a small P-value even though the magnitude of the effect being tested is too small to be of importance (see the discussion of power below). It is a good idea to support a P-value with a confidence interval for the parameter being tested.

A P-value can also be reported more formally in terms of a fixed level α test. Here α is a number selected independently of the data, usually 0.05 or 0.01, more rarely 0.10. We reject the null hypothesis at level α if the P-value is smaller than α; otherwise we fail to reject the null hypothesis at level α. I am not fond of this kind of language because it suggests a more definite, clear-cut answer than is often available. There is essentially no difference between a P-value of 0.051 and 0.049. In some situations it may be necessary to proceed with some course of action based on our belief in whether the null or alternative hypothesis is true. More often, it seems better to report the P-value as a measure of evidence.

A fixed level α test can be calculated without first calculating a P-value, by comparing the test statistic with a critical value of the null distribution corresponding to the level α. This is usually the easiest approach when doing hand calculations and using statistical tables, which provide percentiles for a relatively small set of probabilities. Most statistical software produces P-values, which can be compared directly with α; there is no need to repeat the calculation by hand.

Fixed level α tests are needed for discussing the power of a test, a useful concept when planning a study. Suppose we are comparing a new medical treatment with a standard treatment, the control. The null hypothesis is that of no treatment effect (no difference between treatment and control). The alternative hypothesis is that the treatment effect (mean difference of treatment minus control using some outcome variable) is positive. We want to have a good chance of reporting a small P-value assuming the alternative hypothesis is true and the magnitude of the effect is large enough to be of practical importance. The power of a level α test is defined to be the probability that the null hypothesis will be rejected at level α (i.e., the P-value will be less than α) assuming the alternative hypothesis is true. The power generally depends on the variability of the data (lower variance, higher power), the sample size (higher n, higher power), and the magnitude of the effect (larger effect, higher power).

Assessing normality using the Ryan-Joiner test.

Null hypothesis: the data {x1, ..., xn} are a random sample of size n from a normal distribution.
Alternative hypothesis: the data are a random sample from some other distribution.
Test statistic: r = the correlation between the data and the normal scores. [Figure not reproduced: the normal scores are defined as standard normal quantiles of plotting positions based on rank(xi).]
Rationale: If the data are a sample from a normal distribution, then the normal probability plot (plot of normal scores against the data) will be close to a straight line, and the correlation r will be close to 1. If the data are sampled from a non-normal distribution, then the plot may show a marked deviation from a straight line, resulting in a smaller correlation r. Smaller values of r are therefore regarded as stronger evidence against the null hypothesis.
Null distribution of r: I do not know whether this distribution has a name. We might call it the Ryan-Joiner distribution, corresponding to the name of the test. The density is skewed to the left, with most of the probability close to 1. [Figure not reproduced.]
P-value: The probability to the left of the observed correlation r calculated using the null distribution; i.e., the area under the density to the left of r. You do not need to know how to calculate this. Minitab does the calculation for you.
Interpretation: If you want to use simple descriptive language, you can use the table above. The strength of evidence is described directly in terms of the P-value.
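As a small illustration of the ideas above, the sketch below computes a right-tail P-value from a named null distribution (the chi-square distribution with 23 degrees of freedom mentioned earlier) and maps it to the descriptive language of the table. A minimal sketch assuming SciPy is available; the observed statistic is chosen arbitrarily for the example:

```python
from scipy import stats

def right_tail_p_value(t_obs, null_dist):
    """P-value = probability to the right of the observed statistic
    under the null distribution (the survival function)."""
    return null_dist.sf(t_obs)

def evidence_language(p):
    """Descriptive strength-of-evidence language; the cut-offs come
    from the table above and are somewhat arbitrary."""
    if p > 0.10:
        return "no evidence against the null hypothesis"
    if p > 0.05:
        return "weak evidence against the null hypothesis"
    if p > 0.01:
        return "moderate evidence against the null hypothesis"
    if p > 0.001:
        return "strong evidence against the null hypothesis"
    return "very strong evidence against the null hypothesis"

# Example: observed statistic 35.2 with a chi-square(23) null distribution.
p = right_tail_p_value(35.2, stats.chi2(23))
print(f"P = {p:.4f}: {evidence_language(p)}")  # P ~ 0.0497: moderate evidence
```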
Research on a Fusion-Order-Sensitive Multi-Sensor GM-PHD Tracking Algorithm

This paper proposes a fusion-order-sensitive multi-sensor GM-PHD (Gaussian Mixture Probability Hypothesis Density) tracking algorithm. The algorithm adopts data fusion across multiple sensors to track targets and estimate their positions, and uses an order-sensitive method to improve robustness and tracking accuracy. Experiments show that the algorithm tracks targets effectively and mitigates the effects of sensor dropout and sensor drift, giving it considerable application potential in multi-sensor tracking.

Keywords: multi-sensor; GM-PHD; order sensitivity; target tracking; data fusion

I. Introduction

In recent years, with the rapid development of sensor technology, multi-sensor target tracking has also advanced quickly. Multi-sensor target tracking fuses the data obtained from multiple sensors into multi-source information about a target, which can greatly improve tracking accuracy and robustness. However, in multi-sensor target tracking, problems such as sensor dropout and sensor drift become prominent; they degrade tracking accuracy and can even cause tracking to fail. To improve the robustness and accuracy of multi-sensor target tracking, this paper proposes an order-sensitive multi-sensor tracking algorithm based on the GM-PHD (Gaussian Mixture Probability Hypothesis Density) filter. On this basis, multi-source data fusion is adopted to acquire multi-source information about the target and estimate its position, markedly improving tracking accuracy and robustness.

II. Multi-Sensor Tracking Algorithm

2.1 Principle of the GM-PHD algorithm

The GM-PHD algorithm is a probability-density-based target tracking algorithm that uses a Gaussian mixture model to describe target position and velocity information. The core idea of the GM-PHD algorithm is to infer target states from observation data and historical tracks.
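To make the GM-PHD recursion concrete, here is a minimal single-sensor sketch of the measurement-update step in the usual Gaussian-mixture formulation. It is illustrative only, not this paper's multi-sensor method; the detection probability, clutter density, and linear-Gaussian measurement model are assumed values:

```python
import numpy as np

def gmphd_update(weights, means, covs, z, H, R, p_d=0.9, clutter=1e-4):
    """One GM-PHD measurement update for a single measurement z.

    Each prior component spawns a missed-detection copy with weight
    (1 - p_d) * w, plus a detected copy that is Kalman-updated and
    reweighted by its Gaussian measurement likelihood.
    """
    upd_w, upd_m, upd_P = [], [], []
    for w, m, P in zip(weights, means, covs):   # missed-detection terms
        upd_w.append((1.0 - p_d) * w)
        upd_m.append(m)
        upd_P.append(P)
    det_w, det_m, det_P = [], [], []
    for w, m, P in zip(weights, means, covs):   # detection terms for z
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        nu = z - H @ m                          # innovation
        q = np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / \
            np.sqrt(np.linalg.det(2.0 * np.pi * S))  # Gaussian likelihood
        det_w.append(p_d * w * q)
        det_m.append(m + K @ nu)
        det_P.append((np.eye(len(m)) - K @ H) @ P)
    norm = clutter + sum(det_w)                 # clutter + detection mass
    upd_w += [wd / norm for wd in det_w]
    return np.array(upd_w), upd_m + det_m, upd_P + det_P

# One component (position, velocity) and one position measurement.
w, m, P = [1.0], [np.array([0.0, 0.0])], [np.eye(2)]
H, R = np.array([[1.0, 0.0]]), np.array([[0.5]])
new_w, new_m, new_P = gmphd_update(w, m, P, np.array([0.2]), H, R)
print(new_w)   # missed-detection weight ~0.1, detection weight ~1.0
```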
2.2 Construction of the multi-sensor tracking algorithm

This paper optimizes existing multi-sensor tracking algorithms. First, data fusion is adopted to aggregate and process the data from the individual sensors. Then, to address sensor dropout and drift, an order-sensitive algorithm is proposed: under conditions such as sensor dropout, it adaptively adjusts the tracking model, improving tracking accuracy and robustness.

III. Experimental Results and Analysis

To verify the effectiveness of the proposed order-sensitive multi-sensor GM-PHD algorithm, we conducted both simulation experiments and experiments on real data.
Probability and Statistics in English
English-Chinese Glossary of Basic Terms from Probability Theory and Mathematical Statistics

Probability theory 概率论; mathematical statistics 数理统计; deterministic phenomenon 确定性现象; random phenomenon 随机现象; sample space 样本空间; random occurrence 随机事件
fundamental event 基本事件; certain event 必然事件; impossible event 不可能事件; random test 随机试验; incompatible events 互不相容事件; frequency 频率
classical probabilistic model 古典概型; geometric probability 几何概率; conditional probability 条件概率; multiplication theorem 乘法定理; Bayes's formula 贝叶斯公式; prior probability 先验概率; posterior probability 后验概率
independent events 相互独立事件; Bernoulli trials 贝努利试验; random variable 随机变量; probability distribution 概率分布; distribution function 分布函数; discrete random variable 离散随机变量; distribution law 分布律
hypergeometric distribution 超几何分布; random sampling model 随机抽样模型; binomial distribution 二项分布; Poisson distribution 泊松分布; geometric distribution 几何分布; probability density 概率密度; continuous random variable 连续随机变量
uniform distribution 均匀分布; exponential distribution 指数分布; numerical character 数字特征; mathematical expectation 数学期望; variance 方差; moment 矩; central moment 中心矩
n-dimensional random variable n-维随机变量; two-dimensional random variable 二维离散随机变量; joint probability distribution 联合概率分布; joint distribution law 联合分布律; joint distribution function 联合分布函数
boundary distribution law 边缘分布律; boundary distribution function 边缘分布函数; two-dimensional exponential distribution 二维指数分布; two-dimensional continuous random variable 二维连续随机变量; joint probability density 联合概率密度; boundary probability density 边缘概率密度
conditional distribution 条件分布; conditional distribution law 条件分布律; conditional probability density 条件概率密度; covariance 协方差; dependency coefficient 相关系数
normal distribution 正态分布; limit theorem 极限定理; standard normal distribution 标准正态分布; logarithmic normal distribution 对数正态分布; covariance matrix 协方差矩阵; central limit theorem 中心极限定理
Chebyshev's inequality 切比雪夫不等式; Bernoulli's law of large numbers 贝努利大数定律; statistics 统计量; simple random sample 简单随机样本; sample distribution function 样本分布函数; sample mean 样本均值
sample variance 样本方差; sample standard deviation 样本标准差; sample covariance 样本协方差; sample correlation coefficient 样本相关系数; order statistics 顺序统计量; sample median 样本中位数; sample fractiles 样本极差; sampling distribution 抽样分布
parameter estimation 参数估计; estimator 估计量; estimate value 估计值; unbiased estimator 无偏估计; unbiasedness 无偏性; biased error 偏差; mean square error 均方误差; relative efficiency 相对有效性; minimum variance 最小方差
asymptotic unbiased estimator 渐近无偏估计量; uniformly estimator 一致性估计量; moment method of estimation 矩法估计; maximum likelihood method of estimation 极大似然估计法; likelihood function 似然函数; maximum likelihood estimator 极大似然估计值; interval estimation 区间估计
hypothesis testing 假设检验; statistical hypothesis 统计假设; simple hypothesis 简单假设; composite hypothesis 复合假设; rejection region 拒绝域; acceptance domain 接受域; test statistics 检验统计量; linear regression analysis 线性回归分析

English-Chinese Vocabulary of Probability and Statistics

A: absolute value 绝对值; accept 接受; acceptable region 接受域; additivity 可加性; adjusted 调整的; alternative hypothesis 对立假设; analysis 分析; analysis of covariance 协方差分析; analysis of variance 方差分析; arithmetic mean 算术平均值; association 相关性; assumption 假设; assumption checking 假设检验; availability 有效度; average 均值
B: balanced 平衡的; band 带宽; bar chart 条形图; beta-distribution 贝塔分布; between groups 组间的; bias 偏倚; binomial distribution 二项分布; binomial test 二项检验
C: calculate 计算; case 个案; category 类别; center of gravity 重心; central tendency 中心趋势; chi-square distribution 卡方分布; chi-square test 卡方检验; classify 分类; cluster analysis 聚类分析; coefficient 系数; coefficient of correlation 相关系数; collinearity 共线性; column 列; compare 比较; comparison 对照; components 构成, 分量; compound 复合的; confidence interval 置信区间; consistency 一致性; constant 常数; continuous variable 连续变量; control charts 控制图; correlation 相关; covariance 协方差; covariance matrix 协方差矩阵; critical point 临界点; critical value 临界值; crosstab 列联表; cubic 三次的, 立方的; cubic term 三次项; cumulative distribution function 累加分布函数; curve estimation 曲线估计
D: data 数据; default 默认的; definition 定义; deleted residual 剔除残差; density function 密度函数; dependent variable 因变量; description 描述; design of experiment 试验设计; deviations 差异; df. (degree of freedom) 自由度; diagnostic 诊断; dimension 维; discrete variable 离散变量; discriminant function 判别函数; discriminatory analysis 判别分析; distance 距离; distribution 分布; D-optimal design D-优化设计
E: equal 相等; effects of interaction 交互效应; efficiency 有效性; eigenvalue 特征值; equal size 等含量; equation 方程; error 误差; estimate 估计; estimation of parameters 参数估计; estimations 估计量; evaluate 衡量; exact value 精确值; expectation 期望; expected value 期望值; exponential 指数的; exponential distribution 指数分布; extreme value 极值
F: factor 因素, 因子; factor analysis 因子分析; factor score 因子得分; factorial designs 析因设计; factorial experiment 析因试验; fit 拟合; fitted line 拟合线; fitted value 拟合值; fixed model 固定模型; fixed variable 固定变量; fractional factorial design 部分析因设计; frequency 频数; F-test F检验; full factorial design 完全析因设计; function 函数
G: gamma distribution 伽玛分布; geometric mean 几何均值; group 组
H: harmonic mean 调和均值; heterogeneity 不齐性; histogram 直方图; homogeneity 齐性; homogeneity of variance 方差齐性; hypothesis 假设; hypothesis test 假设检验
I: independence 独立; independent variable 自变量; independent-samples 独立样本; index 指数; index of correlation 相关指数; interaction 交互作用; interclass correlation 组内相关; interval estimate 区间估计; intraclass correlation 组间相关; inverse 倒数的; iterate 迭代
K: kernel 核; Kolmogorov-Smirnov test 柯尔莫哥洛夫-斯米诺夫检验; kurtosis 峰度
L: large sample problem 大样本问题; layer 层; least-significant difference 最小显著差数; least-square estimation 最小二乘估计; least-square method 最小二乘法; level 水平; level of significance 显著性水平; leverage value 中心化杠杆值; life 寿命; life test 寿命试验; likelihood function 似然函数; likelihood ratio test 似然比检验; linear 线性的; linear estimator 线性估计; linear model 线性模型; linear regression 线性回归; linear relation 线性关系; linear term 线性项; logarithmic 对数的; logarithms 对数; logistic 逻辑的; loss function 损失函数
M: main effect 主效应; matrix 矩阵; maximum 最大值; maximum likelihood estimation 极大似然估计; mean squared deviation (MSD) 均方差; mean sum of square 均方和; measure 衡量; median 中位数; M-estimator M估计; minimum 最小值; missing values 缺失值; mixed model 混合模型; mode 众数; model 模型; Monte Carlo method 蒙特卡罗法; moving average 移动平均值; multicollinearity 多元共线性; multiple comparison 多重比较; multiple correlation 多重相关; multiple correlation coefficient 复相关系数; multiple correlation coefficient 多元相关系数; multiple regression analysis 多元回归分析; multiple regression equation 多元回归方程; multiple response 多响应; multivariate analysis 多元分析
N: negative relationship 负相关; nonadditively 不可加性; nonlinear 非线性; nonlinear regression 非线性回归; nonparametric tests 非参数检验; normal distribution 正态分布; null hypothesis 零假设; number of cases 个案数
O: one-sample 单样本; one-tailed test 单侧检验; one-way ANOVA 单向方差分析; one-way classification 单向分类; optimal 优化的; optimum allocation 最优配制; order 排序; order statistics 次序统计量; origin 原点; orthogonal 正交的; outliers 异常值
P: paired observations 成对观测数据; paired-sample 成对样本; parameter 参数; parameter estimation 参数估计; partial correlation 偏相关; partial correlation coefficient 偏相关系数; partial regression coefficient 偏回归系数; percent 百分数; percentiles 百分位数; pie chart 饼图; point estimate 点估计; Poisson distribution 泊松分布; polynomial curve 多项式曲线; polynomial regression 多项式回归; polynomials 多项式; positive relationship 正相关; power 幂; P-P plot P-P概率图; predict 预测; predicted value 预测值; prediction intervals 预测区间; principal component analysis 主成分分析; probability 概率; probability density function 概率密度函数; probit analysis 概率分析; proportion 比例
Q: quadratic 二次的; Q-Q plot Q-Q概率图; quadratic term 二次项; quality control 质量控制; quantitative 数量的, 度量的; quartiles 四分位数
R: random 随机的; random number 随机数; random sampling 随机取样; random seed 随机数种子; random variable 随机变量; randomization 随机化; range 极差; rank 秩; rank correlation 秩相关; rank statistic 秩统计量; regression analysis 回归分析; regression coefficient 回归系数; regression line 回归线; reject 拒绝; rejection region 拒绝域; relationship 关系; reliability 可靠性; repeated 重复的; report 报告, 报表; residual 残差; residual sum of squares 剩余平方和; response 响应; risk function 风险函数; robustness 稳健性; root mean square 标准差; row 行; run 游程; run test 游程检验
S: sample 样本; sample size 样本容量; sample space 样本空间; sampling 取样; sampling inspection 抽样检验; scatter chart 散点图; S-curve S形曲线; separately 单独地; sets 集合; sign test 符号检验; significance 显著性; significance level 显著性水平; significance testing 显著性检验; significant 显著的, 有效的; significant digits 有效数字; skewed distribution 偏态分布; skewness 偏度; small sample problem 小样本问题; smooth 平滑; sort 排序; sources of variation 方差来源; space 空间; spread 扩展; square 平方; standard deviation 标准离差; standard error of mean 均值的标准误差; standardization 标准化; standardize 标准化; statistic 统计量; statistical quality control 统计质量控制; std. residual 标准残差; stepwise regression analysis 逐步回归; stimulus 刺激; strong assumption 强假设; stud. deleted residual 学生化剔除残差; stud. residual 学生化残差; subsamples 次级样本; sufficient statistic 充分统计量; sum 和; sum of squares 平方和; summary 概括, 综述
T: table 表; t-distribution t分布; test 检验; test criterion 检验判据; test for linearity 线性检验; test of goodness of fit 拟合优度检验; test of homogeneity 齐性检验; test of independence 独立性检验; test rules 检验法则; test statistics 检验统计量; testing function 检验函数; time series 时间序列; tolerance limits 容许限; total 总共, 和; transformation 转换; treatment 处理; trimmed mean 截尾均值; true value 真值; t-test t检验; two-tailed test 双侧检验
U: unbalanced 不平衡的; unbiased estimation 无偏估计; unbiasedness 无偏性; uniform distribution 均匀分布
V: value of estimator 估计值; variable 变量; variance 方差; variance components 方差分量; variance ratio 方差比; various 不同的; vector 向量
W: weight 加权, 权重; weighted average 加权平均值; within groups 组内的
Z: Z score Z分数
The Meaning of the P-value in Biostatistics
P-value
From Wikipedia, the free encyclopedia

[Figure: the critical region for a right-tail test with α = 0.05.]

We have set α = 0.05. Criterion for rejecting or accepting the null hypothesis: if the P-value < α (0.05), we reject the null hypothesis; if the P-value > α (0.05), we accept the null hypothesis.

Calculating a P-value

If your hypothesis test is one-sided and the alternative hypothesis proposes that the population parameter is greater than the value in the null hypothesis, the P-value is the probability of assuming a value greater than the test statistic. If your hypothesis test is one-sided and the alternative hypothesis proposes that the population parameter is less than the value in the null hypothesis, the P-value is the probability of assuming a value less than the test statistic. If your hypothesis test is two-sided, the P-value is the probability of assuming a value more different from zero than the test statistic; that is, it is the probability of assuming a value greater than the positive value of the test statistic or less than the negative value. [Diagram not reproduced: the three possible P-value calculations for the three types of alternative hypothesis, shown against probability distributions.]

In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. One often "rejects the null hypothesis" when the p-value is less than the significance level α (Greek alpha), which is often 0.05 or 0.01. When the null hypothesis is rejected, the result is said to be statistically significant.

A closely related concept is the E-value,[1] which is the average number of times in multiple testing that one expects to obtain a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. The E-value is the product of the number of tests and the p-value.

Although there is often confusion, the p-value is not the probability of the null hypothesis being true, nor is the p-value the same as the Type I error α.[2]

Coin flipping example

Main article: Checking whether a coin is fair

For example, an experiment is performed to determine whether a coin flip is fair (50% chance, each, of landing heads or tails) or unfairly biased (> 50% chance of one of the outcomes). Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The p-value of this result would be the chance of a fair coin landing on heads at least 14 times out of 20 flips. The probability that 20 flips of a fair coin would result in 14 or more heads can be computed from binomial coefficients as

$$\Pr(\text{14 or more heads}) = \sum_{k=14}^{20} \binom{20}{k} \left(\frac{1}{2}\right)^{20} = \frac{60460}{1048576} \approx 0.058.$$

This probability is the (one-sided) p-value. It measures the chance that a fair coin would give a result at least this extreme.

Interpretation

Traditionally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level,[3] often represented by the Greek letter α (alpha). (Greek α is also used for Type I error; the connection is that a hypothesis test that rejects the null hypothesis for all samples that have a p-value less than α will have a Type I error of α.) A significance level of 0.05 would deem as extraordinary any result that is within the most extreme 5% of all possible results under the null hypothesis. In this case a p-value less than 0.05 would result in the rejection of the null hypothesis at the 5% (significance) level.

When we ask whether a given coin is fair, often we are interested in the deviation of our result from the equality of numbers of heads and tails. In this case, the deviation can be in either direction, favoring either heads or tails. Thus, in this example of 14 heads and 6 tails, we may want to calculate the probability of getting a result deviating by at least 4 from parity in either direction (two-sided test). This is the probability of getting at least 14 heads or at least 14 tails. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the above calculated single-sided p-value; i.e., the two-sided p-value is 0.115.

In the above example we thus have:
null hypothesis (H0): fair coin; P(heads) = 0.5
observation O: 14 heads out of 20 flips; and
p-value of observation O given H0 = Prob(≥ 14 heads or ≥ 14 tails) = 0.115.

The calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis (that the observed result of 14 heads out of 20 flips can be ascribed to chance alone), as it falls within the range of what would happen 95% of the time were the coin in fact fair. In our example, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from the expected outcome is small enough to be consistent with chance.

However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%). This time the null hypothesis (that the observed result of 15 heads out of 20 flips can be ascribed to chance alone) is rejected when using a 5% cut-off.

To understand both the original purpose of the p-value p and the reasons p is so often misinterpreted, it helps to know that p constitutes the main result of statistical significance testing (not to be confused with hypothesis testing), popularized by Ronald A. Fisher. Fisher promoted this testing as a method of statistical inference. To call this testing inferential is misleading, however, since inference makes statements about general hypotheses based on observed data, such as the post-experimental probability a hypothesis is true. As explained above, p is instead a statement about data assuming the null hypothesis; consequently, indiscriminately considering p as an inferential result can lead to confusion, including many of the misinterpretations noted in the next section.

On the other hand, Bayesian inference, the main alternative to significance testing, generates probabilistic statements about hypotheses based on data (and a priori estimates), and therefore truly constitutes inference. Bayesian methods can, for instance, calculate the probability that the null hypothesis H0 above is true assuming an a priori estimate of the probability that a coin is unfair. Since a priori we would be quite surprised that a coin could consistently give 75% heads, a Bayesian analysis would find the null hypothesis (that the coin is fair) quite probable even if a test gave 15 heads out of 20 tries (which as we saw above is considered a "significant" result at the 5% level according to its p-value).

Strictly speaking, then, p is a statement about data rather than about any hypothesis, and hence it is not inferential.
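The coin example can be reproduced directly; a minimal sketch assuming SciPy, using the survival function of the binomial distribution:

```python
from scipy.stats import binom

n, k = 20, 14                            # 20 flips, 14 heads observed
p_one_sided = binom.sf(k - 1, n, 0.5)    # P(X >= 14) under a fair coin
p_two_sided = 2 * p_one_sided            # symmetric null, so simply doubled
print(f"one-sided P = {p_one_sided:.3f}")   # ~0.058
print(f"two-sided P = {p_two_sided:.3f}")   # ~0.115
# With one more head, the two-sided P drops below the 5% cut-off:
print(f"15 heads, two-sided P = {2 * binom.sf(14, n, 0.5):.4f}")  # ~0.0414
```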
This raises the question, though, of how science has been able to advance using significance testing. The reason is that, in many situations, p approximates some useful post-experimental probabilities about hypotheses, such as the post-experimental probability of the null hypothesis. When this approximation holds, it could help a researcher to judge the post-experimental plausibility of a hypothesis.[4][5][6][7] Even so, this approximation does not eliminate the need for caution in interpreting p inferentially, as shown in the Jeffreys–Lindley paradox mentioned below.

Misunderstandings

The data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected, or the null hypothesis cannot be rejected at that significance level (which however does not imply that the null hypothesis is true). A small p-value that indicates statistical significance does not indicate that an alternative hypothesis is ipso facto correct. Despite the ubiquity of p-value tests, this particular test for statistical significance has come under heavy criticism due both to its inherent shortcomings and the potential for misinterpretation. There are several common misunderstandings about p-values.[8][9]

1. The p-value is not the probability that the null hypothesis is true. In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity (if there is no alternative hypothesis with a large enough a priori probability which would explain the results more easily). This is the Jeffreys–Lindley paradox.
2. The p-value is not the probability that a finding is "merely a fluke." As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is different from the real meaning, which is that the p-value is the chance of obtaining such results if the null hypothesis is true.
3. The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.
4. The p-value is not the probability that a replicating experiment would not yield the same conclusion.
5. 1 − (p-value) is not the probability of the alternative hypothesis being true (see (1)).
6. The significance level of the test is not determined by the p-value. The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed. (However, reporting a p-value is more useful than simply saying that the results were or were not significant at a given level, and allows the reader to decide for himself whether to consider the results significant.)
7. The p-value does not indicate the size or importance of the observed effect (compare with effect size). The two do vary together, however: the larger the effect, the smaller the sample size that will be required to get a significant p-value.

Problems

Main article: Statistical hypothesis testing#Controversy

Critics of p-values point out that the criterion used to decide "statistical significance" is based on the somewhat arbitrary choice of level (often set at 0.05).[10] If significance testing is applied to hypotheses that are known to be false in advance, an insignificant result will simply reflect an insufficient sample size. Another problem is that the definition of "more extreme" data depends on the intentions of the investigator; for example, the situation in which the investigator flips the coin 100 times has a set of extreme data that is different from the situation in which the investigator continues to flip the coin until 50 heads are achieved.[11]

As noted above, the p-value p is the main result of statistical significance testing. Fisher proposed p as an informal measure of evidence against the null hypothesis. He called on researchers to combine p in the mind with other types of evidence for and against that hypothesis, such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies. Many misunderstandings concerning p arise because statistics classes and instructional materials ignore or at least do not emphasize the role of prior evidence in interpreting p. A renewed emphasis on prior evidence could encourage researchers to place p in the proper context, evaluating a hypothesis by weighing p together with all the other evidence about the hypothesis.[12]

The statistical test of a traditional Chinese medicine, for example, assumes that the efficacy of the medicine is the same as that of the control group (placebo); if the P-value is greater than α = 0.05, the test result accepts this assumption.
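To make the Bayesian contrast described in the Interpretation section concrete, the sketch below computes the posterior probability that the coin is fair after 15 heads in 20 flips. The prior of 0.9 on a fair coin, and the uniform prior on the bias otherwise, are illustrative assumptions, not values from the article:

```python
from math import comb

n, k = 20, 15
prior_fair = 0.9                 # assumed prior belief that the coin is fair
lik_fair = comb(n, k) / 2**n     # P(15 heads | fair coin) ~ 0.0148
lik_biased = 1 / (n + 1)         # P(15 heads | p ~ Uniform(0,1)) = 1/21
post_fair = (prior_fair * lik_fair) / (
    prior_fair * lik_fair + (1 - prior_fair) * lik_biased)
print(f"P(fair | data) = {post_fair:.2f}")
# ~0.74: the null stays fairly probable despite a "significant" p-value.
```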
Probability Density

Probability density is a mathematical concept used in calculations of probability, especially in areas such as statistics, physics, and finance. It describes how probability is distributed over the values of a continuous variable: the probability that the variable falls within a given range of values is the area under the density over that range. Probability density functions are used to calculate the likelihood of an event occurring within a specified range of values.

The concept of probability density is useful for understanding a wide variety of phenomena. It is often used to model the behavior of random variables and to quantify predictions about the probability of an event falling within a given range of values.

A probability density can also be estimated from data: the observations from a set of experiments are divided into groups (bins), each group is assigned its observed relative frequency, and normalizing by the bin width yields a density estimate. The resulting probability density curve can then be used to measure the likelihood of any result within a specified range of values.

One example of the utility of probability density functions is in predicting the distribution of stock prices. The function can be used to predict the probabilities of different price ranges occurring across the stock market. This can help investors determine the most profitable investment strategies for their portfolios.

In conclusion, probability density is an important mathematical tool which can be used to incorporate the effects of randomness into a range of disciplines, from physics to finance. It is a powerful way of understanding and predicting the outcome of a range of experiments, and is invaluable in helping investors determine the best strategies for their portfolios.
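As a small illustration of the defining property (the probability of a range equals the area under the density over that range), the sketch below uses a standard normal density as an assumed example and computes the probability two ways with SciPy:

```python
from scipy import stats
from scipy.integrate import quad

density = stats.norm(loc=0, scale=1)   # assumed example density
a, b = -1.0, 2.0                       # range of values of interest

# P(a <= X <= b) via the CDF, and via numerical integration of the pdf.
p_cdf = density.cdf(b) - density.cdf(a)
p_int, _ = quad(density.pdf, a, b)
print(f"{p_cdf:.4f}  {p_int:.4f}")     # both ~0.8186
```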
Hypotheses

Introduction

In the world of scientific research, hypotheses play a critical role in the formulation of experiments and studies. A hypothesis is a statement or assumption that is made based on limited evidence or observations and serves as a starting point for further investigation. This document aims to explore the concept of hypotheses, their importance, and how they are formulated and tested in various scientific disciplines.

What is a Hypothesis?

A hypothesis is a proposed explanation or prediction for a phenomenon or a question that can be tested. It is an essential element of the scientific method and is used to guide research and experiments. Hypotheses are usually based on existing knowledge, previous observations, or theories, and serve as an attempt to explain or predict a particular phenomenon.

Formulating a Hypothesis

The process of formulating a hypothesis requires careful consideration of the existing knowledge and evidence. To develop a hypothesis, researchers typically follow a few key steps:

1. Identify the research question: The first step in formulating a hypothesis is to clearly identify the research question or problem that needs to be addressed. This question should be specific and focused to provide a clear direction for the research.
2. Review existing knowledge: Once the research question is identified, it is important to review the existing knowledge and literature related to the topic. This helps in understanding previous findings and theories that can inform the formulation of a hypothesis.
3. Generate possible explanations: Based on the existing knowledge, researchers generate possible explanations or predictions for the research question. These explanations are known as hypotheses and should be testable and falsifiable.
4. Refine the hypothesis: After generating the initial hypotheses, researchers refine and narrow down the options to develop a more focused and specific hypothesis. This is done by considering factors such as feasibility, relevance, and available resources.

Testing a Hypothesis

Once a hypothesis is formulated, it needs to be tested through experimentation or observation. The process of testing a hypothesis involves the following steps:

1. Design the experiment: The researcher designs an experiment or study that will allow them to collect data and test the hypothesis. The design of the experiment should be carefully planned to ensure that it provides valid and reliable results.
2. Collect and analyze data: During the experiment, data is collected and analyzed to determine whether the results support or refute the hypothesis. Statistical analysis is often used to evaluate the significance of the findings and to draw meaningful conclusions.
3. Draw conclusions: Based on the analysis of the data, the researcher draws conclusions about the hypothesis. If the results support the hypothesis, it is considered to be validated. On the other hand, if the results contradict the hypothesis, it may be necessary to revise the hypothesis or develop new ones for further investigation.

Importance of Hypotheses in Scientific Research

Hypotheses are a fundamental aspect of scientific research for several reasons:

1. Guiding research: Hypotheses provide a clear direction for research, allowing researchers to focus their efforts on specific questions or problems. They help in organizing the research process and ensuring that it is purposeful and systematic.
2. Promoting objectivity: Hypotheses help in maintaining objectivity in scientific research by providing a framework for testing and evaluating ideas. They prevent bias and ensure that the research is based on evidence and logic rather than personal opinions or beliefs.
3. Advancing knowledge: By formulating hypotheses and testing them through rigorous experimentation, researchers contribute to the advancement of knowledge in their respective fields. Hypotheses that are supported by evidence can lead to new discoveries and insights.
4. Identifying limitations: Hypotheses allow researchers to identify and address the limitations of existing knowledge and theories. They highlight the gaps in understanding and provide opportunities for further investigation and refinement of theories.

Conclusion

Hypotheses are a critical component of scientific research. They provide a starting point for investigation, guide research efforts, and contribute to the advancement of knowledge. By formulating and testing hypotheses, researchers can better understand the world around us and make meaningful contributions to their respective fields.
Hypothesis Testing
Hypothesis testing is a crucial concept in statistics that allows us to make inferences about a population based on a sample. It is a method used to determine whether there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. This process involves formulating a hypothesis, collecting and analyzing data, and drawing conclusions based on the results. Hypothesis testing plays a significant role in various fields such as science, business, and healthcare, as it helps in making informed decisions and drawing valid conclusions.

One of the key aspects of hypothesis testing is the formulation of the null and alternative hypotheses. The null hypothesis (H0) is a statement that there is no effect or no difference, while the alternative hypothesis (H1) is a statement that there is an effect or a difference. For example, in a clinical trial, the null hypothesis may be that a new drug has no effect, while the alternative hypothesis may be that the new drug is effective. These hypotheses are then tested using sample data to determine whether there is enough evidence to reject the null hypothesis in favor of the alternative hypothesis.

The process of hypothesis testing involves several steps, including selecting an appropriate test statistic, determining the level of significance, collecting data, calculating the test statistic, and making a decision based on the test statistic and the level of significance. The test statistic is a numerical value calculated from the sample data, which is used to determine the likelihood of observing the sample result if the null hypothesis is true. The level of significance, denoted by α, is the probability of rejecting the null hypothesis when it is actually true. Typically, a significance level of 0.05 is used, which means that there is a 5% chance of rejecting the null hypothesis when it is true.

One of the common misconceptions about hypothesis testing is the interpretation of the p-value. The p-value is the probability of observing a test statistic as extreme as, or more extreme than, the one calculated from the sample data, assuming that the null hypothesis is true. A smaller p-value indicates stronger evidence against the null hypothesis, leading to its rejection. However, it is important to note that the p-value is not the probability that the null hypothesis is true or false, but rather the probability of obtaining the observed result if the null hypothesis is true. Therefore, a small p-value does not prove that the alternative hypothesis is true, but rather suggests that there is enough evidence to reject the null hypothesis.

In addition to the p-value, it is essential to consider the effect size and confidence interval when interpreting the results of hypothesis testing. The effect size measures the strength of the relationship between the variables, while the confidence interval provides a range of values within which the true population parameter is likely to fall. These measures help in understanding the practical significance of the results and provide a more comprehensive interpretation of the findings. It is crucial to consider these factors in conjunction with the p-value to make well-informed decisions based on the results of hypothesis testing.

Hypothesis testing also has its limitations and challenges. One of the common challenges is determining the appropriate sample size to ensure the reliability and validity of the results. A small sample size may not provide enough power to detect a true effect, leading to inconclusive results. On the other hand, a large sample size may detect small, but practically unimportant, effects, leading to statistically significant results that are not practically significant. Therefore, it is essential to carefully consider the sample size to ensure the accuracy and relevance of the findings.

Another limitation of hypothesis testing is the potential for Type I and Type II errors. A Type I error occurs when the null hypothesis is wrongly rejected, leading to the conclusion that there is an effect or a difference when there is none. On the other hand, a Type II error occurs when the null hypothesis is wrongly accepted, leading to the conclusion that there is no effect or difference when there actually is. These errors highlight the importance of considering the level of significance and the power of the test to minimize the likelihood of making incorrect conclusions based on the sample data.

In conclusion, hypothesis testing is a fundamental concept in statistics that allows us to make inferences about a population based on sample data. It involves formulating null and alternative hypotheses, selecting an appropriate test statistic, determining the level of significance, collecting and analyzing data, and interpreting the results. While hypothesis testing provides a valuable framework for making decisions and drawing conclusions, it is essential to consider its limitations and challenges, such as the interpretation of the p-value, the effect size, the confidence interval, sample size determination, and the potential for Type I and Type II errors. By addressing these factors and considering multiple perspectives, we can ensure that hypothesis testing is used effectively to make informed decisions and draw valid conclusions in various fields.
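The workflow described above (test statistic, p-value compared with α, supported by an effect size and a confidence interval) can be illustrated with a two-sample t-test; a minimal sketch assuming SciPy, with made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(1.0, 2.0, size=40)   # made-up outcome data
control = rng.normal(0.0, 2.0, size=40)

# Test statistic and p-value (two-sided, equal-variance t-test).
t, p = stats.ttest_ind(treatment, control)
alpha = 0.05
print(f"t = {t:.2f}, p = {p:.4f}, reject H0: {p < alpha}")

# Effect size (Cohen's d) and a 95% CI for the mean difference.
n1, n2 = len(treatment), len(control)
diff = treatment.mean() - control.mean()
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))  # pooled SD
d = diff / sp
se = sp * np.sqrt(1 / n1 + 1 / n2)
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df=n1 + n2 - 2) * se
print(f"Cohen's d = {d:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```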
Multitarget Tracking with Airborne Pulse-Doppler Radar under Measurement Loss

He Shan; Wu Panlong; Yun Peng; Deng Yuhao

[Abstract] To address the problem of tracking multiple targets when measurement data are lost in the airborne Doppler blind zone, a robust unbiased converted measurement, adaptive gating, cardinalized probability hypothesis density (RUCM-AG-CPHD) algorithm is proposed. The algorithm first applies an unbiased conversion to the target measurements and decouples the resulting noise covariance matrix; a gain-adjustment matrix is then designed to improve the filter's robustness when target measurements are lost; finally, adaptive gating removes irrelevant measurements while still guaranteeing that newly appearing targets are detected, which effectively reduces the computational complexity of the algorithm. Simulation results demonstrate the effectiveness and feasibility of the algorithm: the number and states of targets whose measurement information is lost in the blind zone can be estimated more accurately, and the computational load is reduced by 8.6% compared with the traditional CPHD algorithm.

The algorithm of robust unbiased converted measurements and adaptive gating based on cardinalized probability hypothesis density (RUCM-AG-CPHD) is proposed for the multiple target tracking problem under the Doppler blind zone with missing measurements. Firstly, the target measurements are unbiasedly converted, and the noise covariance matrix is decoupled. Then the gain adjustment matrix is designed to improve the robustness of the filter in the loss of target measurements. Finally, the adaptive gating is used to remove the irrelevant measurements, and the detection of new targets is guaranteed, so as to effectively reduce the computational complexity of the algorithm. Simulation results demonstrate the validness and feasibility of the proposed algorithm. The number and state of the targets under the loss of measurements in the blind zone can be estimated more accurately, and the calculation amount of RUCM-AG-CPHD is reduced by 8.6% compared with the traditional CPHD algorithm.

[Journal] Journal of Chinese Inertial Technology
[Year (Volume), Issue] 2017 (025) 005
[Pages] 6 pages (P630-635)
[Keywords] CPHD filter; multitarget tracking; Doppler blind zone; unbiased conversion; adaptive gating
[Authors] He Shan; Wu Panlong; Yun Peng; Deng Yuhao
[Affiliations] School of Automation, Nanjing University of Science and Technology, Nanjing 210094, China; Key Laboratory of Multi-Moving-Body Information Perception and Cooperative Control, Nanjing 210094, China
[Language] Chinese
[CLC Number] TP24

When there is relative motion between a radar and a target, the frequency of the echo differs from that of the transmitted signal, producing the Doppler effect. Airborne pulse-Doppler radar exploits precisely this effect to extract target information [1], which gives it the range resolution of a pulse radar and the velocity resolution of a continuous-wave radar, together with a strong ability to suppress clutter.
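The Doppler relation described in this opening passage is, for a monostatic radar, $f_d = 2 v_r / \lambda$, where $v_r$ is the radial velocity and $\lambda$ the wavelength. The sketch below computes the shift and flags echoes falling inside a mainlobe-clutter notch, the source of the blind zone; the carrier frequency and notch width are arbitrary illustrative values, not parameters from the paper:

```python
C = 3e8  # speed of light, m/s

def doppler_shift(v_radial, f_carrier=10e9):
    """Two-way Doppler shift for a monostatic radar (Hz)."""
    wavelength = C / f_carrier
    return 2.0 * v_radial / wavelength

def in_blind_zone(v_radial, notch_hz=1.5e3, f_carrier=10e9):
    """True if the echo falls inside the clutter notch, i.e. the
    measurement would be suppressed (lost) by clutter filtering."""
    return abs(doppler_shift(v_radial, f_carrier)) < notch_hz

# A slow crossing target (small radial velocity) is lost in the blind zone.
for v in (3.0, 150.0):
    print(v, doppler_shift(v), in_blind_zone(v))
```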
Hypothesis_Testing (Statistical Hypothesis Testing)
2. Next, we obtain a random sample from the population.
Statistics for Business (ENV)
Chapter 9
INTRODUCTION TO HYPOTHESIS TESTING
Hypothesis Testing
9.1 Null and Alternative Hypotheses and Errors in Testing
9.2 z Tests about a Population with Known σ
9.3 t Tests about a Population with Unknown σ
Hypothesis testing-1
Researchers usually collect data from a sample and then use the sample data to help answer questions about the population. Hypothesis testing is an inferential statistical process that uses the limited information in the sample data to reach a general conclusion about the population.
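A one-sample z test, the known-σ case of section 9.2, makes this sample-to-population inference concrete; a minimal sketch assuming SciPy, with made-up numbers:

```python
import math
from scipy.stats import norm

# H0: population mean mu = 50; Ha: mu != 50 (two-sided test).
mu0, sigma = 50.0, 8.0      # hypothesized mean, known population SD
xbar, n = 53.1, 36          # sample mean and sample size (made-up numbers)

z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic
p = 2 * norm.sf(abs(z))                     # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4f}")          # z ~ 2.32, p ~ 0.020
```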
Probability Theory and Mathematical Statistics Formulas

Chapter 1: Random Events and Their Probability

(1) Permutation and combination formulas. The number of ordered arrangements (permutations) of n items chosen from m items is $P_m^n = \frac{m!}{(m-n)!}$; the number of unordered selections (combinations) is $C_m^n = \frac{m!}{n!\,(m-n)!}$.

(2) Addition and multiplication principles. Addition principle (either of two methods completes the task): if the first method can be carried out in m ways and the second in n ways, the task can be done in m + n ways. Multiplication principle (two steps are both required): if the first step can be done in m ways and the second in n ways, the task can be done in m × n ways.

(3) Some common counting patterns: permutations with and without repetition (ordered selections), complementary events ("at least one" problems), and problems where order matters.

(4) Random experiments and random events. If an experiment can be repeated under the same conditions, each repetition may produce more than one possible result, and it cannot be asserted before a trial which result will appear, the experiment is called a random experiment. Each possible outcome of the experiment is called a random event.

(5) Basic events, sample space, and events. In an experiment, however many events there are, one can always find a family of events with the following properties: every trial results in exactly one event of this family occurring, and any event is composed of some of the events in this family. Each event in such a family is called a basic event. The set of all basic events is called the sample space of the experiment, denoted Ω. An event is a collection of basic events, i.e. a subset of Ω, usually denoted by capital letters A, B, C, .... Ω is the certain event and ∅ the impossible event. The probability of an impossible event is zero, but an event with probability zero is not necessarily impossible; similarly, the probability of the certain event Ω is 1, but an event with probability 1 is not necessarily certain.

(6) Relations and operations between events. If every basic event of A is also a basic event of B (whenever A occurs, B must occur), we write A ⊂ B. If A ⊂ B and B ⊂ A hold simultaneously, the events are equal: A = B. "At least one of A and B occurs" is the union A ∪ B, also written A + B. The event consisting of the part of A not in B is the difference A − B, which can also be written A − AB; it is the event that A occurs but B does not. "A and B occur simultaneously" is the intersection A ∩ B, or simply AB. If AB = ∅, then A and B cannot occur together and are called incompatible (mutually exclusive); basic events are mutually exclusive. Ā = Ω − A is called the complementary (opposite) event of A; it occurs exactly when A does not. Mutually exclusive events are not necessarily complementary.
Operations:
Associative laws: $A \cup (B \cup C) = (A \cup B) \cup C$, $A(BC) = (AB)C$.
Distributive laws: $(A \cup B)C = AC \cup BC$, $(AB) \cup C = (A \cup C)(B \cup C)$.
De Morgan's laws: $\overline{A \cup B} = \bar{A}\,\bar{B}$, $\overline{AB} = \bar{A} \cup \bar{B}$.

(7) Axiomatic definition of probability. Let Ω be a sample space. To every event A a real number P(A) is assigned such that:
1° $0 \le P(A) \le 1$;
2° $P(\Omega) = 1$;
3° for pairwise incompatible events $A_1, A_2, \dots$,
$$P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i),$$
which is called countable (complete) additivity. P(A) is then called the probability of the event A.

(8) The classical model. Suppose 1° the sample space consists of finitely many basic events, $\Omega = \{\omega_1, \omega_2, \dots, \omega_n\}$, and 2° every basic event is equally likely, $P(\omega_i) = 1/n$. If an event A consists of m basic events, then
$$P(A) = \frac{m}{n} = \frac{\text{number of basic events in } A}{\text{total number of basic events}}.$$

(9) Geometric probability. If a random experiment has uncountably many outcomes, all equally likely, and every basic event of the sample space can be described as a point of a bounded region, the experiment is called a geometric probability model. For any event A,
$$P(A) = \frac{L(A)}{L(\Omega)},$$
where L is a geometric measure (length, area, or volume).

(10) Additive formula: $P(A + B) = P(A) + P(B) - P(AB)$; when $P(AB) = 0$, $P(A + B) = P(A) + P(B)$.

(11) Subtraction formula: $P(A - B) = P(A) - P(AB)$; when $B \subset A$, $P(A - B) = P(A) - P(B)$; when $A = \Omega$, $P(\bar{B}) = 1 - P(B)$.

(12) Conditional probability. For events A and B with $P(A) > 0$, $P(B \mid A) = \frac{P(AB)}{P(A)}$ is called the conditional probability of the event B occurring given that A has occurred. Conditional probability is itself a probability, and all properties of probabilities apply to it; for example, $P(\Omega \mid B) = 1$ and $P(\bar{B} \mid A) = 1 - P(B \mid A)$.

(13) Multiplication formula: $P(AB) = P(A)\,P(B \mid A)$. More generally, for events $A_1, A_2, \dots, A_n$ with $P(A_1 A_2 \cdots A_{n-1}) > 0$,
$$P(A_1 A_2 \cdots A_n) = P(A_1)\,P(A_2 \mid A_1)\,P(A_3 \mid A_1 A_2) \cdots P(A_n \mid A_1 A_2 \cdots A_{n-1}).$$
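These formulas are easy to sanity-check by enumerating a small sample space; a sketch with one fair die, where A = "even" and B = "at least 4":

```python
from fractions import Fraction

omega = range(1, 7)                     # sample space of one fair die
A = {x for x in omega if x % 2 == 0}    # even outcomes
B = {x for x in omega if x >= 4}        # outcomes >= 4

def P(event):
    return Fraction(len(event), 6)      # classical model: m / n

# Additive formula: P(A + B) = P(A) + P(B) - P(AB)
assert P(A | B) == P(A) + P(B) - P(A & B)
# Conditional probability: P(B|A) = P(AB) / P(A)
print(P(A & B) / P(A))                  # P(B|A) = 2/3
```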
English-Chinese Vocabulary for Econometrics
A: Asymmetric distribution; Asymptotic bias; Asymptotic efficiency; Asymptotic variance; Attributable risk; Attribute data; Attribution; Autocorrelation; Autocorrelation of residuals; Average; Average confidence interval length; Average growth rate
B: Bar chart; Bar graph; Base period; Bayes' theorem; Bell-shaped curve; Bernoulli distribution; Best-trim estimator; Bias; Binary logistic regression; Binomial distribution; Bisquare; Bivariate Correlate; Bivariate normal distribution; Bivariate normal population; Biweight interval; Biweight M-estimator; Block; BMDP (Biomedical computer programs); Boxplots; Breakdown bound
C: Canonical correlation; Caption; Case-control study; Categorical variable; Catenary; Cauchy distribution; Cause-and-effect relationship; Cell; Censoring; Controlled experiments; Conventional depth; Convolution; Corrected factor; Corrected mean; Correction coefficient; Correctness; Correlation coefficient; Correlation index; Correspondence; Counting; Counts; Covariance; Covariant; Cox Regression; Criteria for fitting; Criteria of least squares; Critical ratio; Critical region; Critical value
PValue - 用于判定拟合效果的重要指标
What is a P-value?I have found that many students are unsure about the interpretation of P-values and other concepts related to tests of significance. These ideas are used repeatedly in various applications so it is important that they be understood. I will explain the concepts in general terms first, then their application in the problem of assessing normality.We wish to test a null hypothesis against an alternative hypothesis using a dataset. The two hypotheses specify two statistical models for the process that produced the data. The alternative hypothesis is what we expect to be true if the null hypothesis is false. We cannot prove that the alternative hypothesis is true but we may be able to demonstrate that the alternative is much more plausible than the null hypothesis given the data. This demonstration is usually expressed in terms of a probability (a P-value) quantifying the strength of the evidence against the null hypothesis in favor of the alternative.We ask whether the data appear to be consistent with the null hypothesis or whether it is unlikely that we would obtain data of this kind if the null hypothesis were true, assuming that at least one of the two hypotheses is true. We address this question by calculating the value of a test statistic, i.e., a particular real-valued function of the data. To decide whether the value of the test statistic is consistent with the null hypothesis, we need to know what sampling variability to expect in our test statistic if the null hypothesis is true. In other words, we need to know the null distribution, the distribution of the test statistic when the null hypothesis is true. In many applications, the test statistic is defined so that its null distribution is a “named” distribution for which tables are widely accessible; e.g., the standard normal distribution, the Binomial distribution with n = 100 and p = 1/2, the t distribution with 4 degrees of freedom, the chi-square distribution with 23 degrees of freedom, the F distribution with 2 and 20 degrees of freedom.Now, given the value of the test statistic (a number), and the null distribution of the test statistic (a theoretical distribution usually represented by a probability density), we want to see whether the test statistic is in the middle of the distribution (consistent with the null hypothesis) or out in a tail of the distribution (making the alternative hypothesis seem more plausible). Sometimes we will want to consider the right-hand tail, sometimes the left-hand tail, and sometimes both tails, depending on how the test statistic and alternative hypothesis are defined. Suppose that large positive values of the test statistic seem more plausible under the alternative hypothesis than under the null hypothesis. Then we want a measure of how far out our test statistic is in the right-hand tail of the null distribution. The P-value provides a measure of this distance. The P-value (in this situation) is the probability to the right of our test statistic calculated using the null distribution. The further out the test statistic is in the tail, the smaller the P-value, and the stronger the evidence against the null hypothesis in favor of the alternative.The P-value can be interpreted in terms of a hypothetical repetition of the study. Suppose the null hypothesis is true and a new dataset is obtained independently of the first dataset but using the same sampling procedure. 
If the new dataset is used to calculate a new value of the test statistic (same formula but new data), what is the probability that the new value will be further out in the tail (assuming a one-tailed test) than the original value? This probability is the P-value.

The P-value is often incorrectly interpreted as the probability that the null hypothesis is true. Try not to make this mistake. In a frequentist interpretation of probability, there is nothing random about whether the hypothesis is true; the randomness is in the process generating the data. One can interpret "the probability that the null hypothesis is true" using subjective probability, a measure of one's belief that the null hypothesis is true. One can then calculate this subjective probability by specifying a prior probability (subjective belief before looking at the data) that the null hypothesis is true, and then using the data and the model to update one's subjective probability. This is called the Bayesian approach because Bayes' Theorem is used to update subjective probabilities to reflect new information.

When reporting a P-value to persons unfamiliar with statistics, it is often necessary to use descriptive language to indicate the strength of the evidence. I tend to use the following sort of language. Obviously the cut-offs are somewhat arbitrary and another person might use different language.

P > 0.10: No evidence against the null hypothesis. The data appear to be consistent with the null hypothesis.
0.05 < P < 0.10: Weak evidence against the null hypothesis in favor of the alternative.
0.01 < P < 0.05: Moderate evidence against the null hypothesis in favor of the alternative.
0.001 < P < 0.01: Strong evidence against the null hypothesis in favor of the alternative.
P < 0.001: Very strong evidence against the null hypothesis in favor of the alternative.

In using this kind of language, one should keep in mind the difference between statistical significance and practical significance. In a large study one may obtain a small P-value even though the magnitude of the effect being tested is too small to be of importance (see the discussion of power below). It is a good idea to support a P-value with a confidence interval for the parameter being tested.

A P-value can also be reported more formally in terms of a fixed level α test. Here α is a number selected independently of the data, usually 0.05 or 0.01, more rarely 0.10. We reject the null hypothesis at level α if the P-value is smaller than α; otherwise we fail to reject the null hypothesis at level α. I am not fond of this kind of language because it suggests a more definite, clear-cut answer than is often available. There is essentially no difference between a P-value of 0.051 and 0.049. In some situations it may be necessary to proceed with some course of action based on our belief in whether the null or alternative hypothesis is true. More often, it seems better to report the P-value as a measure of evidence.

A fixed level α test can be calculated without first calculating a P-value. This is done by comparing the test statistic with a critical value of the null distribution corresponding to the level α. This is usually the easiest approach when doing hand calculations and using statistical tables, which provide percentiles for a relatively small set of probabilities. Most statistical software produces P-values which can be compared directly with α.
There is no need to repeat the calculation by hand.

Fixed level α tests are needed for discussing the power of a test, a useful concept when planning a study. Suppose we are comparing a new medical treatment with a standard treatment, the control. The null hypothesis is that of no treatment effect (no difference between treatment and control). The alternative hypothesis is that the treatment effect (mean difference of treatment minus control using some outcome variable) is positive. We want to have a good chance of reporting a small P-value assuming the alternative hypothesis is true and the magnitude of the effect is large enough to be of practical importance. The power of a level α test is defined to be the probability that the null hypothesis will be rejected at level α (i.e., the P-value will be less than α) assuming the alternative hypothesis is true. The power generally depends on the variability of the data (lower variance, higher power), the sample size (higher n, higher power), and the magnitude of the effect (larger effect, higher power).

Assessing normality using the Ryan-Joiner test.

Null hypothesis: the data {x_1, ..., x_n} are a random sample of size n from a normal distribution.

Alternative hypothesis: the data are a random sample from some other distribution.

Test statistic: r = the correlation between the data and the normal scores. (The normal scores are a function of the ranks rank(x_i); the original handout defines them with a graph, which is omitted here.)

Rationale: If the data are a sample from a normal distribution, then the normal probability plot (plot of normal scores against the data) will be close to a straight line, and the correlation r will be close to 1. If the data are sampled from a non-normal distribution, then the plot may show a marked deviation from a straight line, resulting in a smaller correlation r. Smaller values of r are therefore regarded as stronger evidence against the null hypothesis.

Null distribution of r: I do not know whether this distribution has a name. We might call it the Ryan-Joiner distribution, corresponding to the name of the test. The density is skewed to the left, with most of the probability close to 1. (The original handout shows a picture, omitted here.)

P-value: The probability to the left of the observed correlation r calculated using the null distribution; i.e., the area under the density to the left of r. You do not need to know how to calculate this. Minitab does the calculation for you.

Interpretation: If you want to use simple descriptive language, you can use the table above. The strength of evidence is described directly in terms of the P-value.
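To make the normal-scores correlation concrete, here is a minimal Python sketch. The Blom-type plotting positions (i − 3/8)/(n + 1/4) used to generate the normal scores are an assumption (the handout defines the scores only graphically), and the sketch does not compute the P-value, which requires tables of the null distribution such as those built into Minitab.

```python
import numpy as np
from scipy import stats

def normal_scores_correlation(x):
    """Correlation between sorted data and normal scores (Ryan-Joiner-style).

    The Blom-type plotting positions below are an assumption; the original
    handout defines the normal scores with a graph.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    b = stats.norm.ppf((ranks - 3.0 / 8.0) / (n + 1.0 / 4.0))  # normal scores
    return np.corrcoef(x, b)[0, 1]

rng = np.random.default_rng(0)
print(normal_scores_correlation(rng.normal(size=100)))       # close to 1
print(normal_scores_correlation(rng.exponential(size=100)))  # noticeably smaller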
Hypothesis Testing: Lecture Notes
Should the sample be random?

We make decisions about the population based on the sample.

Population and sample

Sample: a subset of the population whose members share common characteristics. Sample statistics (e.g., X̄) can be computed from it.

Why take a sample?

Population: the statistical population is the full set of data or information that defines all parameters of interest (e.g., μ, σ), whether known or unknown.
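As a concrete illustration of the population/sample distinction above, the following sketch (with invented population parameters) draws a random sample and computes the sample statistics that estimate μ and σ:

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=10.0, scale=2.0, size=100_000)  # population: mu=10, sigma=2

sample = rng.choice(population, size=50, replace=False)  # a random sample
x_bar = sample.mean()    # sample statistic estimating mu
s = sample.std(ddof=1)   # sample statistic estimating sigma
print(f"x_bar = {x_bar:.2f}, s = {s:.2f}")  # close to 10 and 2, but not exact
```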
A Statistical Hypothesis
An assertion or conjecture about one or more parameters of the population. To determine whether it is true or false, we would have to examine the entire population. This is impossible! Instead, we use a random sample to provide evidence that either supports or does not support the hypothesis. The conclusion is then based upon statistical significance. It is important to remember that this conclusion is an inference about the population determined from the sample data.
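A minimal sketch of this idea, using a one-sample t-test: the data, the hypothesized mean of 10, and the use of scipy are illustrative assumptions, not part of the original notes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=10.4, scale=2.0, size=40)  # data whose true mean is 10.4

# H0: mu = 10 vs H1: mu != 10, judged from the sample alone
t_stat, p_value = stats.ttest_1samp(sample, popmean=10.0)
print(f"t = {t_stat:.3f}, P = {p_value:.4f}")
# A small P-value is evidence against H0 -- an inference about the
# population drawn from sample data, not a proof.
```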
2. Once we have identified these factors and made adjustments for improvement, we need to validate actual improvements in our processes.
Elementary Statistics, 10th Edition: Solutions
Introduction and overview: Elementary Statistics 10th Edition is a comprehensive textbook that provides an introduction to the fundamental concepts and techniques of statistics. In this article, we will delve into the various aspects covered in the book, highlighting the key points and explanations provided.

Main content:

1. Descriptive Statistics
1.1 Measures of Central Tendency: The book explains the concept of central tendency, including the mean, median, and mode. It further elaborates on their calculation and interpretation.
1.2 Measures of Dispersion: This section covers the measures of dispersion, such as the range, variance, and standard deviation. The book provides detailed explanations of how to calculate and interpret these measures.

2. Probability
2.1 Basic Probability Concepts: The book introduces the fundamental concepts of probability, including sample spaces, events, and probability rules. It explains the calculation of probabilities using both theoretical and empirical approaches.
2.2 Conditional Probability: This section focuses on conditional probability and introduces concepts like independence and dependence of events. The book provides examples and explanations to help readers understand these concepts.
2.3 Probability Distributions: The book covers various probability distributions, including discrete and continuous distributions. It explains their characteristics, probability density functions, and cumulative distribution functions.

3. Sampling and Sampling Distributions
3.1 Simple Random Sampling: The book explains the process of simple random sampling, including its advantages and disadvantages. It also discusses methods for selecting random samples.
3.2 Sampling Distributions: This section covers the concept of sampling distributions, including the central limit theorem. The book provides examples and explanations to help readers understand the concept of sampling distributions.

4. Estimation and Hypothesis Testing
4.1 Point Estimation: The book explains the concept of point estimation, including methods for estimating population parameters. It covers topics such as the method of moments and maximum likelihood estimation.
4.2 Confidence Intervals: This section focuses on confidence intervals and their interpretation. The book provides step-by-step procedures for constructing confidence intervals for population parameters.
4.3 Hypothesis Testing: The book covers hypothesis testing, including null and alternative hypotheses, Type I and Type II errors, and p-values. It provides examples and explanations to help readers understand the process of hypothesis testing.

5. Regression and Correlation
5.1 Simple Linear Regression: The book introduces simple linear regression and explains the concept of the least squares method. It covers topics such as regression equations, the coefficient of determination, and hypothesis testing in regression.
5.2 Correlation: This section focuses on correlation analysis and explains the calculation and interpretation of correlation coefficients. The book also covers topics such as the coefficient of determination and the significance of correlation.

Summary: In conclusion, Elementary Statistics 10th Edition provides a comprehensive introduction to statistics, covering topics such as descriptive statistics, probability, sampling, estimation, hypothesis testing, and regression analysis. The book offers detailed explanations, examples, and step-by-step procedures to help readers understand and apply statistical concepts.
Whether you are a student or a professional, this textbook serves as a valuable resource for learning and mastering elementary statistics.
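As a small illustration of the descriptive-statistics and confidence-interval recipes summarized above, the sketch below computes the measures of center and spread and a t-based 95% interval for the mean; the data values are invented for the example.

```python
import numpy as np
from scipy import stats

data = np.array([12.1, 9.8, 11.4, 10.2, 13.0, 10.9, 9.5, 11.7])

# Measures of center and spread
mean = data.mean()
median = np.median(data)
sd = data.std(ddof=1)            # sample standard deviation
spread = data.max() - data.min() # range

# 95% t-based confidence interval for the population mean
n = len(data)
se = sd / np.sqrt(n)
lo, hi = stats.t.interval(0.95, df=n - 1, loc=mean, scale=se)
print(f"mean={mean:.2f} median={median:.2f} sd={sd:.2f} range={spread:.2f}")
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```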
Template for an English Paper Opening

English answer:

Introduction
In the realm of scientific inquiry, the hypothesis serves as a crucial cornerstone upon which the edifice of knowledge is built. A well-crafted hypothesis provides a roadmap for research, guiding the formulation of experiments, the interpretation of data, and the ultimate elucidation of scientific phenomena. In this essay, we will delve into the concept of hypothesis testing, exploring its methodology, statistical underpinnings, and its profound implications for the advancement of human understanding.

Definition of Hypothesis Testing
Hypothesis testing is a statistical procedure that enables researchers to make inferences about a population based on a sample. It involves formulating a hypothesis, collecting data, and using statistical analysis to determine whether the data provide evidence to support or reject the hypothesis.

Methodology of Hypothesis Testing
hypothesis-testing paper
hypothesis-testing paperHypothesis testing is the process of making a choice between two conflicting hypotheses. The null hypothesis, H0, is a statistical proposition stating that there is no significant difference between a hypothesized value of a population parameter and its value estimated from a sample drawn from that population. The alternative hypothesis, H1 or Ha, is a statistical proposition stating that there is a significant difference between a hypothesized value of a population parameter and its estimated value. When the null hypothesis is tested, a decision is either correct or incorrect. An incorrect decision can be made in two ways: We can reject the null hypothesis when it is true (Type I error) or we can fail toreject the null hypothesis when it is false (Type II error). The probability of making Type I and Type II errors is designated by alpha and beta, respectively. The smallest observed significance level for which the null hypothesis would be rejected is referred to as the p-value. The p-value only has meaning as a measure of confidence when the decision is to reject the null hypothesis. It has no meaning when the decision isthat the null hypothesis is true.。
Probability Hypothesis Density-Based Multitarget Tracking With Bistatic Range and Doppler Observations

Martin Tobias and Aaron Lanterman
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, Georgia 30332-0250, U.S.A.
E-mail: mtobias@, lanterma@

Abstract

Ronald Mahler's Probability Hypothesis Density (PHD) provides a promising framework for the passive coherent location of targets observed via multiple bistatic radar measurements. We apply a particle filter implementation of the Bayesian PHD filter to target tracking using both range and Doppler measurements from a simple non-directional receiver that exploits non-coöperative FM radio transmitters as its "illuminators of opportunity". Signal-to-noise ratios, probabilities of detection and false alarm, and bistatic range and Doppler variances are incorporated into a realistic two-target scenario. Bistatic range cells are used in calculating the birth particle proposal density. The tracking results are compared to those obtained when the same tracker is used with range-only measurements. This is done for two different probabilities of false alarm. The PHD particle filter handles ghost targets well and has improved tracking performance when incorporating Doppler measurements along with the range measurements. This improved tracking performance, however, comes at the cost of requiring more particles and additional computation.

I. INTRODUCTION

A. The PHD and Passive Radar

A particle filter implementation of a multitarget tracker, based on Mahler's Probability Hypothesis Density (PHD) [1]–[3], was first applied to passive radar in a rudimentary fashion in [4]. Having shown promising results, the implementation was expanded to incorporate a realistic passive radar configuration in [5]. However, only range measurements were considered. This paper presents an improved version of the PHD-based particle filter as applied to passive coherent location, and we incorporate Doppler measurements into the PHD-based particle filter, thus effecting a range and velocity multitarget tracker. We compare its tracking performance to that of the range-only tracker when used in a realistic scenario.

The remainder of this section provides a brief review of multitarget, multisensor tracking, followed by a review of finite-set statistics (FISST), which is used to derive the PHD-based multitarget Bayesian filter. In Section II, the concept of passive coherent location¹ is reviewed. Section III describes the simulation configuration, while Section IV presents the PHD particle filter implementation. A review of the radar parameters used is contained in Section V, and the results of the simulation are in Section VI. A summary of conclusions is found in Section VII.

B. Review of Multitarget, Multisensor Tracking

The theory of single-sensor, single-target tracking is rather well understood. The workhorse of such systems is the ubiquitous extended Kalman filter, along with its Interacting Multiple Model (IMM) extension. The newer unscented Kalman filters [6] and fully nonlinear, non-Gaussian algorithms, such as particle filters [7]–[9], are becoming popular as well [10]. When multiple targets are present,² however, the situation becomes rapidly more complex.
It is not known which reports from a given sensor are created by which targets. The complexity increases when multiple sensors are used, and things become even more problematic in the presence of false alarms and missed detections.

1) Association-Based Multitarget Tracking: Most mainstream tracking algorithms have historically been based on the idea that there is some true report-to-track association that must be estimated. Common techniques may involve "soft" report-to-track assignments, as found in Joint Probabilistic Data Association (JPDA) [13], or "hard" assignments, as performed in Multiple Frame Assignment (MFA) using Lagrangian relaxation techniques [12], [14]–[19].

¹To the best of our knowledge, the term "passive coherent location" was coined by Dick Lodwig of Lockheed Martin (then, IBM) and his colleagues.

²When multiple targets are first mentioned in a paper, authors usually cite Blackman's classic book [11]. Instead, we recommend the more recent, vastly expanded, and incredibly thorough (over 1200 pages) tome by Blackman and Popoli [12].

2) Multitarget Tracking without Explicit Associations: Some alternatives to the association-based multitarget, multisensor tracking algorithms are slowly gaining attention [10]. In these alternative approaches, no explicit associations between tracks and targets are made. Proponents of such techniques contend that estimated associations, like those of hard and soft report-to-track, are both unnecessary and potentially misleading. One novel approach that avoids explicit associations is the Symmetric Measurement Equation [20]–[23] method developed by Kamen and colleagues in the early 1990's. Another approach, which is the focus of this paper, is based on finite-set statistics (FISST).

C. Finite-Set Statistics

Mathematically speaking, a real-valued random variable is a function that maps elements of an underlying probability space into the space of real numbers. In most engineering applications, one can forget about this fundamental definition and deal directly with concepts related to the random variable, such as probability density functions, cumulative distribution functions, moments, entropy, etc. Extending the notion to random vectors, such as the state vectors in single-target tracking systems, is straightforward. Further extension to random "processes", which map elements of an underlying probability space to the space of functions defined on a continuum, is much more complex. However, the final results are readily employed by engineers.

In an extensive series of conference papers (particularly the SPIE sessions on Signal and Data Processing of Small Targets and Signal Processing, Sensor Fusion, and Target Recognition), Mahler has proposed "random sets", which map elements of an underlying probability space to sets, as the most natural framework to address data fusion in general and target tracking systems in particular. Unfortunately, the probability theory associated with random sets is nowhere near as well known as the theory associated with the more mundane random variables and vectors. However, the presentation of the theory in the book by Goodman, Mahler, and Nguyen [24] is thorough. The authors define "set integrals" and "set derivatives" in terms of generalised Radon-Nikodým derivatives. These set generalisations of calculus allow for generalisations of probability densities and distribution functions. Once the random sets are given a solid measure-theoretic foundation, ideas from statistics, such as maximum-likelihood estimation and maximum a-posteriori estimation, and from information theory, such as entropy and Kullback-Leibler
distances, can be extended to these random sets (see Sections 5.2 and 5.3 of [24]). Mahler [25], [26] attempts to distill random set theory to nuts-and-bolts principles that practicing engineers can easily apply.

One of the more elegant aspects of traditional Kalman filtering is the way in which the prior and posterior distributions are characterised by a small set of sufficient statistics that are easily propagated in the Kalman recursion. When target tracking is generalised to the multitarget, multisensor scenario, however, no simple analogous implementation seems to appear. Nevertheless, attempting to replicate the simplicity of the Kalman filter for the multitarget, multisensor case, Mahler and Zajic [27] propose propagating the first moment of a function that maps a set of targets into a continuous function space. This functional mapping is essential since the expectation of a set-valued random variable is not well-defined. They choose a function that places Dirac deltas at the target positions and call its first moment a Probability Hypothesis Density (PHD). The PHD acts much like an intensity of a Poisson point process; in fact, it is the first factorial moment density found in point process theory [2], [28]. Like the mean and variance of the Kalman filter, the PHD is readily propagated forward through the Bayesian prediction and data update steps. It is, of course, a bit more complicated since an entire function is being propagated forward, not just a mean vector and a covariance matrix.

II. PASSIVE COHERENT LOCATION (PCL)

A. Range

Consider a bistatic radar consisting of a passive receiver and an independent transmitting antenna. If the direct path signal is measured along with the reflected path signal, then correlation processing yields the following range measurement observation:

R = \sqrt{(x - x_t)^2 + (y - y_t)^2} + \sqrt{(x - x_r)^2 + (y - y_r)^2}    (1)

where (x_r, y_r) and (x_t, y_t) are the locations of the antennas, and (x, y) is the location of the target. Thus, a target can be located along an ellipse, where the receiver and transmitter are located at the foci of the ellipse.

It is difficult to build highly directional receiver antennas that operate at the low frequencies of interest in a passive radar system that exploits FM broadcasts. Hence, rather than exploit angle-of-arrival information to resolve a target's location, multiple transmitter-receiver pairs are employed instead. The target can thus be located at the intersection of the resulting bistatic range ellipses. This is not to imply that angle information, if available in a PCL system, is of no value; we simply wish to explore the limits of what can be achieved without it. Future work will study the effect of including angle information.

A problem that arises, however, is that of ghost targets. A ghost target appears at the intersection of bistatic range ellipses where no target is present. This is due to the nature of the ellipse geometry and confuses multitarget trackers, which must process ghost targets until they disappear. Noisy measurements exacerbate the problem. The PHD-based particle filter, however, is seen to adequately handle ghost targets with no additional conceptual effort, i.e., no explicit ghost-busting³ logic is needed.

³See the following website for a description of a ghost-busting technique: /nmd/businessweek.html

B. Velocity

By observing the Doppler shift caused by the target in the received signal's frequency, a bistatic radar also provides the rate of change of the range measurement given in (1):

\dot{R} = \frac{(x - x_r)\dot{x} + (y - y_r)\dot{y}}{\sqrt{(x - x_r)^2 + (y - y_r)^2}} + \frac{(x - x_t)\dot{x} + (y - y_t)\dot{y}}{\sqrt{(x - x_t)^2 + (y - y_t)^2}}    (2)

With \dot{R} measurements from multiple transmitter-receiver pairs, a target's velocity components (\dot{x}, \dot{y}) can be found.
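As an illustration of (1) and (2), the sketch below evaluates the bistatic range and range rate for a hypothetical target state and antenna geometry; the flat-Earth coordinates are invented for the example and are not the paper's Table I/II sites.

```python
import numpy as np

def bistatic_range_and_rate(x, y, vx, vy, tx, rx):
    """Bistatic range (1) and range rate (2) for a target at (x, y)
    with velocity (vx, vy); tx and rx are (x, y) antenna positions."""
    dt = np.hypot(x - tx[0], y - tx[1])  # target-to-transmitter distance
    dr = np.hypot(x - rx[0], y - rx[1])  # target-to-receiver distance
    R = dt + dr
    Rdot = ((x - rx[0]) * vx + (y - rx[1]) * vy) / dr \
         + ((x - tx[0]) * vx + (y - tx[1]) * vy) / dt
    return R, Rdot

# Hypothetical geometry in metres, receiver at the origin
R, Rdot = bistatic_range_and_rate(5e3, 12e3, -80.0, 30.0,
                                  tx=(-20e3, 4e3), rx=(0.0, 0.0))
print(f"R = {R/1e3:.2f} km, Rdot = {Rdot:.1f} m/s")
```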
III. SCENARIO CONFIGURATION

Our demonstration scenario is the same as that in [5]. The Field of View (FoV) consists of an 80 km × 80 km stretch of the Washington D.C. area. The receiving antenna is in the middle of the FoV and is assumed to be located on one of the Lockheed Martin Mission Systems buildings. A receiver such as Lockheed Martin's Silent Sentry system, except with a simpler antenna, is assumed. The illuminators of opportunity consist of three non-coöperative FM transmitters. The transmitter specifications are given in Table I, and their locations can be seen in Figure 1(a). The receiver coördinates and system specifications are listed in Table II. All antennas are assumed to be omni-directional, and thus they have unity gain. The noise figure listed, which is meant to account for external interference sources as well as internal receiver noise, is assumed to be a valid approximation for an urban environment such as Washington D.C. [29].

TABLE I. TRANSMITTING ANTENNA SPECIFICATIONS.

Call Letters | Latitude | Longitude | Frequency (f) | Power (P_T) | Bandwidth (β)
WAMU | 38.936°N | 77.093°W | 88.5 MHz | 50.0 kW | 45 kHz
WETA | 38.892°N | 77.132°W | 90.9 MHz | 75.0 kW | 45 kHz
WPGC | 38.864°N | 76.911°W | 95.5 MHz | 50.0 kW | 45 kHz

TABLE II. RECEIVER SYSTEM SPECIFICATIONS.

Latitude | 39.153°N
Longitude | 77.215°W
Coherent Processing Interval (CPI) | 0.5 sec
Reference Temperature (T_0) | 290 K
Noise Figure (NF) | 30 dB
Gain (G_R) | 0 dB

IV. THE PHD-BASED PARTICLE FILTER

Ronald Mahler introduced the concept of a Probability Hypothesis Density (PHD), which is defined as being any function that, when integrated over any given area, specifies the expected number of targets present in that area. More specifically, the PHD is the factorial moment density found in point process theory [2], [28], and it provides a straightforward method of estimating the number of targets in a region under observation. We thus expect the PHD to be a useful tool for tracking multiple targets, especially in handling the many ghost targets that arise from noisy bistatic radar measurements. Using probability generating functionals and set calculus, Mahler derives Bayesian time-update and data-update equations that use the PHD, respectively, to perform motion prediction and incorporate sensor observations [2], [3], [27], [30], [31]. This allows the multitarget tracker to incorporate both range and Doppler observation information, which we expect to produce better tracking results than using range-only information.

We use the particle filter implementation of the update equations [32], whereby the PHD is represented by a collection of particles and their corresponding weights. At time-step k, each particle in the filter is a vector of the form \xi_i = [x_i \; y_i \; \dot{x}_i \; \dot{y}_i]^T and has a weight w_{i,k}, where (x_i, y_i) specify the particle's location and (\dot{x}_i, \dot{y}_i) specify its velocity components. As per the defining property of the PHD,

\tilde{N} = E[\text{no. of targets}] = [N_{k|k}]_{\text{nearest integer}}    (3)

where

N_{k|k} = \sum_i w_{i,k}    (4)

A. Initialisation

The simulation begins by independently and randomly assigning the particles' x and y components to fall within the FoV. The \dot{x} and \dot{y} components are independently and randomly chosen to be between a minimum of −495 km/h and a maximum of 495 km/h (i.e., −137.5 m/s to 137.5 m/s), where North and East are positive. The particle weights are initialised to zero, since we do not expect any targets to be present at time k = 0.

B. Time Update

The time-update step of the particle filter involves multiplying each particle vector by a simple constant-velocity transition matrix and adding Gaussian process noise. This propagates the
particles forward in time, thus modelling the target motion, where each time step k of the simulation represents one second of time.

To model the PHD of new targets that appear in the FoV, birth particles are added to the simulation during the time-update step. They indicate where new targets are likely to appear at the current time step. To economise on the number of particles needed in the simulation and to achieve better target tracking results, we propose a targeted cluster placement of birth particles to be used wherever a bistatic range ellipse intersects with the edge of the FoV. At the location of the intersection, a cluster of birth particles is centred and spread independently in both x and y according to a normal distribution with a standard deviation equal to a bistatic range cell (see Section V-E and Table III). When no bistatic range ellipses intersect the FoV boundaries, the birth particles are placed uniformly at random in a 9 km-wide band around the inside edge of the FoV.

In both placement methods, the velocity components of the particles are initialised independently and randomly with uniform probability over all possible velocities, as given in Section IV-A. However, if a particle is placed in the right-hand quadrant of the FoV, then its initial \dot{x} component is restricted to negative values. If it is placed in the left-hand quadrant, then the \dot{x} component is initialised to positive values only. A similar restriction is enforced on the initial \dot{y} component of a particle that is placed in either the top or bottom quadrants of the FoV.

The simulation assumes that targets will not spontaneously disappear and that they will not spawn new targets. Any particles whose x and y components place them outside of the FoV have their location components adjusted so that they are repositioned in a mirror-image fashion across the nearest FoV edge back into the FoV. This keeps all of the particles inside the region of interest.

We now weight the particles according to the method described in [33] for particle filter representations of the PHD. Since we simply use the prior target-motion model to propagate the particles from the previous time step, these propagated particles maintain the same weights as they had at the end of the previous time step. The birth particles, when the uniform placement method is used, are given equal weighting. When the targeted cluster placement is used, however, the birth particles are given weights

\tilde{w}^{\text{birth}}_{i,k+1} = \frac{1}{J_{k+1}} \cdot Q_x \cdot q_x(x_{i,k+1} | z_{k+1}) \cdot Q_y \cdot q_y(y_{i,k+1} | z_{k+1})    (5)

where J_{k+1} is the number of birth particles used, and q_x(x_{i,k+1} | z_{k+1}) and q_y(y_{i,k+1} | z_{k+1}) are normal density functions with means equal, respectively, to the x and y positions of the ellipse intersection at the FoV edge and with standard deviations equal to the intersection's corresponding bistatic range cell. Q_x is set to 2 if the intersection occurs along the left or right edge of the FoV, and is set to 1 otherwise. Similarly, if the intersection occurs along the top or bottom edges, then Q_y is set to 2; otherwise, it is set to 1. This takes into account the doubling of particle density due to the folding-in of particles found outside the FoV.

In both birth particle placement methods, we normalise the birth particle weights such that \sum_i \tilde{w}^{\text{birth}}_{i,k+1} equals the expected number of new targets per scan. Since we assume that only one target might enter the FoV at each time step, we set this term equal to one. However, one could choose a higher or lower value if an alternative birth model is desired. This step is not explicitly mentioned in [33], but we found this normalisation necessary to have the particle filter accurately represent the PHD.

The results of the time-update step are the propagated and birth particles and their associated weights, indicated by \tilde{w}_{i,k+1}, which represent the predicted PHD for time-step k+1.
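A minimal sketch of the constant-velocity time update described above follows; the process-noise standard deviations and the toy initialisation are illustrative assumptions, since the paper does not list its noise values here.

```python
import numpy as np

dt = 1.0  # one second per time step, as in the simulation
F = np.array([[1, 0, dt, 0],   # constant-velocity transition for [x, y, xdot, ydot]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

def time_update(particles, rng, sigma_pos=5.0, sigma_vel=1.0):
    """Propagate an (N, 4) particle array one step; noise scales are assumptions."""
    noise = rng.normal(size=particles.shape) * np.array([sigma_pos, sigma_pos,
                                                         sigma_vel, sigma_vel])
    return particles @ F.T + noise

rng = np.random.default_rng(4)
particles = rng.uniform(-40e3, 40e3, size=(1000, 4))           # positions in an 80 km FoV
particles[:, 2:] = rng.uniform(-137.5, 137.5, size=(1000, 2))  # velocities in m/s
particles = time_update(particles, rng)
```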
this normalisation necessary to have the particlefilter accurately represent the PHD.The results of the time-update step are the propagated and birth particles and their asso-ciated weights,indicated by˜w i,k+1,which represent the predicted PHD for time-step k+1.C.Data UpdateIn the data-update step,the time-predicted˜w i,k+1are converted to thefinal PHD particle weights,w i,k+1,by incorporating the radar range and Doppler observations at time k+ 1.Given a single sensor with the set of observations Z s={z1,...,z m}made at timek+1,probability of detection p D(ξ),single-target likelihood function f(z|ξ)and Poisson-distributed false alarms with parameterλand density c(z),the data-updated weights arecomputed byw i,k+1=(mn=1u i,n)+˜w i,k+1(1−p D(ξ))(6)whereu i,n=p D(ξi)f(z n|ξi)˜w i,k+1λc(z n)+ N j=1p D(ξj)f(z n|ξj)˜w j,k+1(7)for i=1,...,N,where N is the total number of particles.The set of observations Z s contains both range and Doppler measurements.Thus,either f R(z n|ξi)or f˙R(z n|ξi)must be used as the single-target likelihood function f(z n|ξi),depend-ing on whether z n is a range or a Doppler observation,respectively.The computations of p D(ξ),f(z|ξ),λand c(z)are given in Section V.In the bistatic radar case,each receiver and transmitter pair constitutes a“sensor”.In our example,there are three sensors in the configuration,and three sets of range and Doppler observations are collected at each time step,namely{Z1,Z2,Z3}.Following a procedure suggested by Mahler[34]to determine thefinal weights for this multisensor case,(6)and (7)arefirst applied to Z1.The resulting w i,k+1are then used as the˜w i,k+1to reiterate(6) and(7)over Z2.The latter procedure is repeated for Z3tofind thefinal multisensor particle weights.The order in which the observation sets are processed does affect thefinal result; although,practically,it has little effect.This issue remains available for further investigation. 
Having generated the final particle weights, w_{i,k+1}, the expected number of targets in the FoV is computed via (3). The locations of the \tilde{N} expected targets are found by extracting the \tilde{N} highest peaks from the PHD represented by these weights. We currently use an expectation-maximisation algorithm for this extraction.

D. Peak Extraction

To find the target locations and their velocities, the \tilde{N} highest peaks must be extracted from the PHD. To find these peaks, we assume that the PHD in the neighbourhood of the peaks can be approximated by Gaussian distributions, so we attempt to fit a mixture of Gaussians to the PHD using an expectation-maximisation (EM) algorithm [35], which we modify to account for the particle weights. Thus, the algorithm to find \theta_g = (\alpha_g, \mu_g, \Sigma_g), which are the weight, mean and covariance parameters of the g-th Gaussian distribution in the mix, is given by the following iteration:

Expectation:

P(g | x_i) = \frac{p(x_i | \theta_g) \, \alpha_g}{\sum_{j=1}^{G} p(x_i | \theta_j) \, \alpha_j}    (8)

Maximisation:

\alpha_g^{\text{new}} = \sum_{i=1}^{N} w_i \, P(g | x_i)    (9)

\mu_g^{\text{new}} = \frac{1}{\alpha_g^{\text{new}}} \sum_{i=1}^{N} w_i \, P(g | x_i) \, x_i

\Sigma_g^{\text{new}} = \frac{1}{\alpha_g^{\text{new}}} \sum_{i=1}^{N} w_i \, P(g | x_i) (x_i - \mu_g^{\text{new}})(x_i - \mu_g^{\text{new}})^T

where p(x_i | \theta_j) is a normal density function with mean \mu_j and covariance matrix \Sigma_j, G is the number of Gaussians in the mixture, and G ≥ \tilde{N}, where \tilde{N} is the number of targets estimated by integrating the PHD via (3). The \mu_g are initialised by randomly choosing G particles and selecting their components to be the values for the \mu_g.

To obtain good results from the EM algorithm [36], short runs of the algorithm are performed, and the run that produces the highest likelihood is then used for a longer EM run. The result is the final estimate of the G-Gaussian mixture. When iterating the EM algorithm, a run is terminated upon achieving a given threshold or if a covariance matrix becomes singular. The preceding is performed multiple times for different values of G, and a minimum description length (MDL) criterion is then used to select the best fitting Gaussian mixture by maximising the penalised likelihood [37]:

L(x; \theta_G) - \frac{\rho}{2} \ln N    (10)

where \rho = (G - 1) + G(d + d(d+1)/2) and d is the particle dimensionality (in the current case, d = 4), and where

L(x; \theta_G) = \frac{N}{\sum_{i=1}^{N} w_i} \sum_{i=1}^{N} w_i \ln \left( \sum_{j=1}^{G} \alpha_j \, p(x_i | \theta_j) \right)    (11)

The means of the \tilde{N} highest-weighted Gaussians in the best fitting mixture are then taken to be the expected locations and velocities of the targets. A benefit of using the EM algorithm is that it produces covariance matrices that provide one with a measure of uncertainty in the location and velocity estimates.

E. Resampling

Before iterating the particle filter over the next time step, the particles are resampled via a Monte Carlo method to obtain an initial number (i.e., the amount before birth particles were added) of equally weighted particles where

\sum_i w_i = \sum_i w_{i,k+1}    (12)

V. BISTATIC RADAR VARIABLES

A. Signal-to-Noise Ratio, SNR

To compute f(z|\xi) and p_D(\xi), it is first necessary to compute each sensor's signal-to-noise ratio for each particle. The SNR of a particle is calculated as follows [38]–[40]:

SNR(\xi_i) = \frac{K}{R_T^2 R_R^2}    (13)

where R_T and R_R are the distances between the particle's (x, y) location and the sensor's transmitting and receiving antennas, respectively, and

K = \frac{P_T \, G_T \, G_R \, \lambda_f^2 \, \sigma_{\text{rcs}} \, F_T^2 \, F_R^2}{(4\pi)^3 \, k \, T_0 \, (1/CPI) \, (NF)}    (14)

where \lambda_f = c/f, c is the speed of light, and f is the frequency of the FM signal given in Table I.
The transmitter power P_T is also taken from Table I, and the transmitter gain G_T is assumed to be unity. The receiver gain G_R, reference temperature T_0, coherent processing interval CPI and noise figure NF are taken from Table II. Boltzmann's constant is represented by k, and F_T and F_R are the signal propagation factors. For this study, it is assumed that signal propagation gains and losses are negligible; including such effects is planned for future work. The target's bistatic radar cross section is denoted by \sigma_{\text{rcs}}. All targets in the simulation are assumed to have \sigma_{\text{rcs}} = 10 dBsm.

B. Probability of Detection, p_D

The calculation of the bistatic radar's probability of detection is based on its SNR and the probability of false alarm, p_{FA}. At low frequencies, a target may reasonably be assumed to be slowly fluctuating; hence, a Rician target model is employed. Thus, [41]

p_D(\xi) = Q\left( \sqrt{2 \, SNR(\xi)}, \; \sqrt{2 \ln(1/p_{FA})} \right)    (15)

where Q is the Marcum Q-function, SNR(\xi) is given by (13), and p_{FA} is set to a fixed value. For a fixed p_{FA}, a gain in SNR corresponds to an increase in p_D. In the simulation, the p_{FA} is initially set to 10⁻⁴. This achieves a p_D = 0.9999 with an SNR = 14.94 dB, and a p_D = 0.1 when SNR = 6.19 dB [41]. For reasonable simulation, p_D is restricted to a maximum value of 0.99999.

Note that the p_D(\xi) in (6) does not depend on any specific radar observation, since the (1 − p_D(\xi)) term deals with potential missed targets. Thus, a \sigma_{\text{rcs}} must be chosen that one would expect a potential missed target to have were the radar to detect it. For illustration, we escape this vexing chicken-and-egg situation by choosing \sigma_{\text{rcs}} = 10 dBsm, since this is the value assumed in generating the simulated data.

C. Single-Target Likelihood, f(z|\xi)

1) Range Likelihood, f_R(z|\xi): The single-target range likelihood function of each bistatic radar antenna pair determines how close each particle's (x, y) values are to the observed target location, given that the radar observes the range measurement given by (1). Each particle's corresponding bistatic range measurement is computed (R_{\xi_i}), as well as the difference between it and the observed range. f_R(z_i | \xi_i) is a normal density function with mean R_{\xi_i} and variance \sigma_r^2, where \sigma_r^2 is the variance of the bistatic range:

\sigma_r^2 = \sigma_t^2 \cdot c^2    (16)

and [42]

\sigma_t^2 = \frac{1}{2 \beta^2 \, SNR(\xi_i)}    (17)

where β is the transmitter bandwidth specified in Table I, and SNR(\xi_i) is given by (13).

2) Doppler Likelihood, f_{\dot{R}}(z|\xi): The single-target Doppler likelihood function of each bistatic radar pair determines how close the \dot{R} value of each particle, given by substituting the particle's components into (2), is to the observed \dot{R} measurement. Each particle's corresponding rate of bistatic range change is computed (\dot{R}_{\xi_i}), as well as the difference between it and the observed \dot{R}. f_{\dot{R}}(z_i | \xi_i) is a normal density function with mean \dot{R}_{\xi_i} and variance \sigma_{\dot{r}}^2, where \sigma_{\dot{r}}^2 is the variance of the rate of change in bistatic range:

\sigma_{\dot{r}}^2 = \sigma_f^2 \cdot \lambda_f^2    (18)

where \lambda_f is the wavelength of the transmitted signal, and [39], [40]

\sigma_f^2 = \max\left( \frac{3}{2 \, SNR \cdot (\pi \cdot CPI)^2}, \; \frac{1}{CPI^2} \right)    (19)

On the right-hand side of (19), the first term in the max function is the accuracy with which the bistatic radar is able to measure the received signal. The second term is the resolution obtained from the passive radar's use of the discrete Fourier transform to compute the Doppler shift of the signal. Thus, the variance in the \dot{R} measurement is the worse, i.e. greater, of the two terms.

D. False Alarm Parameters, λ and c(z)

The false alarm Poisson-distribution parameter λ is computed based on the number of range and Doppler cells present in the simulation. These, in turn, are based on the extent of range and Doppler in
the scenario, as well as the bistatic range and Doppler resolutions of the radar. Each transmitter-receiver pair's λ parameter is calculated in the following manner:

\lambda = (\text{total no. cells}) \times p_{FA}    (20)

where

\text{total no. cells} = (\text{no. range cells}) \times (\text{no. Doppler cells})    (21)

and

\text{no. range cells} = \frac{\text{range extent}}{\text{bistatic range resolution}}    (22)

\text{no. Doppler cells} = \frac{\text{Doppler extent}}{\text{Doppler resolution}}    (23)

and

\text{bistatic range resolution} = \frac{c}{\beta}    (24)

\text{Doppler resolution} = \frac{1}{CPI}    (25)

and

\text{range extent} = 1.5 \times \sqrt{(80\,\text{km})^2 + (80\,\text{km})^2}    (26)

\text{Doppler extent} = 2 \left( \frac{2 V_{\max}}{\lambda_f} \right)    (27)

where V_max is the maximum possible target velocity. The range extent value of (26) is for the hypothetical case where the receiver is located at the centre of the FoV, the transmitter is located in a corner of the FoV, and the target is located at the opposite corner. The Doppler extent found in (27) takes into account both positive and negative velocities. Thus, both extents are chosen to be as large as theoretically possible in our scenario.

The false alarms are assumed to be uniformly distributed over the range and Doppler extents, and thus the spatial distribution parameter is determined in the following manner:

c(z) = \frac{1}{(\text{range extent}) \times (\text{Doppler extent})}    (28)

E. Bistatic Range Cells

To place the birth particles correctly in the targeted clustering method described in Section IV-B, the size of the bistatic range cell at the cluster's location must be computed. A bistatic range cell is the resolution at which a bistatic radar can pinpoint a target's location. It is approximated by [38]:

\Delta R_B \approx \frac{c \tau}{2 \cos(\psi / 2)}    (29)

where τ is the compressed pulse width and ψ is the bistatic angle:

\psi = \cos^{-1}\left( \frac{R_T^2 + R_R^2 - L^2}{2 R_R R_T} \right)    (30)

where L is the distance between the transmitter and the receiver. Thus,⁴

\Delta R_B \approx \frac{c \tau \sqrt{R_R R_T}}{\sqrt{(R_T + R_R)^2 - L^2}} = \frac{c \sqrt{R_R R_T}}{\beta \sqrt{(R_T + R_R)^2 - L^2}}    (31)

⁴This is a correction to the derivation found in [5].
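As an illustration of (31), the following sketch computes the bistatic range cell for a hypothetical geometry; the ranges and baseline are invented for the example, while the 45 kHz bandwidth matches Table I.

```python
import numpy as np

def bistatic_range_cell(R_T, R_R, L, beta, c=3e8):
    """Bistatic range resolution, eq. (31), for transmitter/receiver ranges
    R_T, R_R, baseline L (all in metres) and bandwidth beta (Hz)."""
    return c * np.sqrt(R_R * R_T) / (beta * np.sqrt((R_T + R_R)**2 - L**2))

# Hypothetical geometry: 40 km and 25 km ranges, 30 km baseline, 45 kHz bandwidth
print(f"{bistatic_range_cell(40e3, 25e3, 30e3, 45e3):.0f} m")
```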