Chapter 2 (Stochastic Processes)
Stochastic Analysis in Finance II: Stochastic Processes
(ii) b_n^{-1} Σ_{i=1}^{n} E[X_{ni} | F_{n,i-1}] →^P 0, and
(iii) b_n^{-2} Σ_{i=1}^{n} ( E[X_{ni}^2] − E[(E[X_{ni} | F_{n,i-1}])^2] ) → 0.
Ref: Hall, P. and Heyde, C. C., Martingale Limit Theory and Its Application, Academic Press, 1980.
M_k → M a.s. and ∫ |M_k − M| dP → 0 as k → ∞.
Corollary 2.2.9 Let X ∈ L^1(P) and let {N_k}_{k≥1} be an increasing family of σ-algebras with N_k ⊆ F, and define N_∞ to be the σ-algebra generated by ⋃_{k≥1} N_k. Then E[X | N_k] → E[X | N_∞] a.s. and in L^1.
(b) for every bounded Borel-measurable function h: E[h(X_{k+1}) | F_k] = E[h(X_{k+1} − X_k)];
(c), (d) E[e^{u X_{k+1}} | F_k] = E[e^{u(X_{k+1} − X_k)}] for all u
(agreement of Laplace transforms).
2.3 Markov Processes
Definition 2.3.1 Let {F_k} be a filtration on (Ω, F, P), and let {X_k, k = 0, 1, …} be a stochastic process on (Ω, F, P). This process is said to be Markov if P(X_{k+1} ∈ B | F_k) = P(X_{k+1} ∈ B | X_k) for every Borel set B and every k.
Stochastic Process
Discrete Time and Continuous Time Processes
•X(t) is a discrete time process if X(t) is defined only for a set of time instants tn=nT where T is a constant and n is an integer
According to Theorem 1, p(j) = [P(0,1) P(1,2) ⋯ P(j−1,j)]^T p(0). In the stationary case, P(0,1) P(1,2) ⋯ P(j−1,j) = P^j (a short code sketch of this relation is given below).
Example: Random Walk With Barriers
Markov Diagram
• In the stationary form
– the Chapman-Kolmogorov equations have a graphical interpretation in terms of one-step probabilities (a state-transition diagram over states 1, 2, 3 with edges labelled p_{1,1}, p_{1,2}, p_{1,3}, p_{2,1}, p_{2,2}, p_{2,3}, p_{3,1}, p_{3,2}).
• For a continuous-time process, define p_{m,n}(t, t') = P[X(t') = n | X(t) = m].
• For a discrete-time process, define p_{m,n}(i, j) = P[X(j) = n | X(i) = m].
Markov Chain
• A stochastic process is a Markov chain if it is a discrete-value process with the Markov property.
• X(t) is a discrete-value process if the set of all possible values of X(t) at all times t is a countable set S_X.
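As a numerical companion to the stationary Chapman-Kolmogorov relation above, the sketch below (in Python) obtains p(j) by raising a one-step transition matrix to the j-th power; the three-state matrix is an illustrative assumption, not taken from the slides.

```python
import numpy as np

# One-step transition matrix of a hypothetical three-state chain (illustrative
# values only).  Row i gives P[X_{n+1} = j | X_n = i].
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])
assert np.allclose(P.sum(axis=1), 1.0)        # each row must sum to 1

p0 = np.array([1.0, 0.0, 0.0])                # start in state 1 with probability 1

def distribution_after(j):
    # Stationary Chapman-Kolmogorov: as a column vector, p(j) = (P^T)^j p(0),
    # equivalently p(j) = p(0) P^j as a row vector.
    return np.linalg.matrix_power(P.T, j) @ p0

for j in (1, 2, 10):
    print(j, distribution_after(j))
```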
Lecture Notes on Stochastic Processes, Chapter 2 (PDF)
Chapter 2: General Concepts of Stochastic Processes
2.1 Basic concepts and examples
Definition 2.1.1: Let (Ω, F, P) be a probability space and T a parameter set. If for every t ∈ T, X(t, ω) is a random variable on this probability space, then {X(t, ω), t ∈ T} is called a stochastic process.
A stochastic process is thus a family of random variables defined on the same probability space.
A stochastic process X(t, ω) can be viewed as a function of two variables on T × Ω. Fixing ω_0 ∈ Ω, that is, for one particular outcome of the random experiment, X(t, ω_0) is called a sample path (or realization); this is what is actually observed. On the other hand, fixing t_0 ∈ T, X(t_0, ω) is a random variable taking values according to some probability distribution.
More abstractly, let R^T = ∏_{t∈T} R, whose elements are x = (x_t, t ∈ T), and let B(R^T) be the corresponding (product) Borel σ-field. A stochastic process is essentially a measurable map from (Ω, F) to (R^T, B(R^T)); it induces a probability measure P_T on (R^T, B(R^T)) by P_T(B) = P(X_T ∈ B) for every B ∈ B(R^T).
The parameter t usually represents time.
According to the nature of the parameter set T, stochastic processes fall into two classes: 1) T countable, e.g. T = {0, 1, 2, …} or T = {…, −1, 0, 1, …}: discrete-parameter processes, also called random sequences; 2) T uncountable, e.g. T = {t: t ≥ 0} or T = {t: −∞ < t < ∞}: continuous-parameter processes.
The values taken by a stochastic process are called its states, and the set of all states is called the state space, usually denoted by S.
According to the state space, processes are again divided into two classes: 1) discrete state, where X(t) takes only discrete values; 2) continuous state, where X(t) takes values in a continuum.
Discrete parameter, discrete state: Markov chains. Continuous parameter, discrete state: Poisson processes. Discrete parameter, continuous state: Markov sequences. Continuous parameter, continuous state: Gaussian processes, Brownian motion.
Example 2.1.1: A drunkard walks along a road, stepping forward with probability p, backward with probability q, and staying in place with probability r, where p + q + r = 1. Fix an initial time and let X_n denote his position at time n; then {X_n} is a random walk on the line.
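A minimal simulation sketch of the random walk in Example 2.1.1; the step probabilities and path length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(n_steps, p, q, r):
    """One sample path: +1 with prob p, -1 with prob q, 0 with prob r."""
    assert abs(p + q + r - 1.0) < 1e-12
    steps = rng.choice([1, -1, 0], size=n_steps, p=[p, q, r])
    return np.concatenate(([0], np.cumsum(steps)))     # X_0 = 0

path = random_walk(1000, p=0.4, q=0.4, r=0.2)
print("final position of one path:", path[-1])

# With p = q the walk is symmetric, so E[X_n] = 0; averaging many independent
# final positions should therefore give a value close to 0.
finals = [random_walk(1000, 0.4, 0.4, 0.2)[-1] for _ in range(500)]
print("mean final position over 500 paths:", np.mean(finals))
```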
Stochastic Processes, Chapter 2
customer in (0, t] is t − S_i. Adding the revenues generated by all arrivals in (0, t] gives the total
Σ_{i=1}^{N(t)} (t − S_i),
with expectation E[ Σ_{i=1}^{N(t)} (t − S_i) ].
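If the arrival times S_i come from a Poisson process with rate λ, the expectation above equals λt²/2 (by conditioning on N(t) and using the uniform order-statistics property). The following Monte Carlo sketch, with an illustrative rate and horizon, checks that value.

```python
import numpy as np

rng = np.random.default_rng(1)

def total_revenue(lam, t):
    """Simulate one Poisson process on (0, t] and return sum_i (t - S_i)."""
    total, s = 0.0, rng.exponential(1.0 / lam)     # first arrival time
    while s <= t:
        total += t - s
        s += rng.exponential(1.0 / lam)            # add the next interarrival time
    return total

lam, t = 2.0, 5.0                                   # illustrative rate and horizon
samples = [total_revenue(lam, t) for _ in range(20000)]
print("Monte Carlo mean:", np.mean(samples))
print("lam * t^2 / 2   :", lam * t**2 / 2)          # theoretical value, here 25.0
```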
2.2 Properties of Poisson processes
Solution:
(a) E[S_10] = 10/λ = 10 days.
(b) P{X_11 > 2} = e^{−2λ} = e^{−2} ≈ 0.1353.
Arrival time distribution
Proposition 2.2: The arrival time of the nth event, S_n, follows a Γ(n, λ) distribution with density
f_{S_n}(t) = λ e^{−λt} (λt)^{n−1} / (n − 1)!,  t ≥ 0.
Each interarrival time X_n, n ≥ 1, follows an exponential distribution with parameter λ.
Example 1: Suppose that people immigrate into a territory according to a Poisson process at rate λ = 1 per day. (a) What is the expected time of the 10th arrival? (b) What is the probability that the elapsed time between the 10th and 11th arrivals exceeds two days?
… and X_1 and X_2 are independent.
Similarly, we obtain: P{X_n > t | X_{n−1} = s} = e^{−λt}, i.e. P{X_n ≤ t | X_{n−1} = s} = 1 − e^{−λt}.
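The sketch below checks the two facts above numerically: S_n is the sum of n i.i.d. Exp(λ) interarrival times, hence Γ(n, λ), and by memorylessness P{X_{n+1} > t} = e^{−λt} regardless of the past. The rate and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 1.0, 10            # rate 1 per day and the 10th arrival, as in Example 1

# S_n = X_1 + ... + X_n with X_i ~ Exp(lam), so S_n ~ Gamma(n, lam).
S_n = rng.exponential(1.0 / lam, size=(100_000, n)).sum(axis=1)
print("E[S_10] simulated:", S_n.mean(), " theory:", n / lam)            # about 10 days

# Memorylessness: P{X_11 > 2} = exp(-2*lam), independent of X_1, ..., X_10.
X_11 = rng.exponential(1.0 / lam, size=100_000)
print("P{X_11 > 2} simulated:", (X_11 > 2).mean(), " theory:", np.exp(-2 * lam))
```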
Stochastic Process - Introduction
γ(s) = Cov(X_t, X_{t+s}) = E[(X_t − µ)(X_{t+s} − µ)]
• The correlation between X_t and X_{t+s} is
ρ(s) = Cov(X_t, X_{t+s}) / √(Var(X_t) Var(X_{t+s})) = γ(s) / γ(0).
F_{X_{t_1},…,X_{t_k}}(x_1, …, x_k) = P(X_{t_1} ≤ x_1, …, X_{t_k} ≤ x_k)
for any t_1, …, t_k ∈ I and any real numbers x_1, …, x_k.
• The distribution function tells us everything we need to know about the process {X_t}.
Stationary Processes
• A process is said to be strictly stationary if (X_{t_1}, …, X_{t_k}) has the same joint distribution as (X_{t_1+Δ}, …, X_{t_k+Δ}) for every Δ. That is, if
F_{X_{t_1},…,X_{t_k}}(x_1, …, x_k) = F_{X_{t_1+Δ},…,X_{t_k+Δ}}(x_1, …, x_k).
Weak Stationarity
• Strict stationarity is too strong a condition in practice. It is often a difficult assumption to assess based on an observed time series x_1, …, x_k.
• In time series analysis we often use a weaker sense of stationarity in terms of the moments of the process.
• A process is said to be nth-order weakly stationary if all its joint moments up to order n exist and are time invariant, i.e., independent of the time origin.
• For example, a second-order weakly stationary process will have constant mean and variance, with the covariance and the correlation being functions of the time difference alone.
• A strictly stationary process with its first two moments finite is also second-order weakly stationary. But a strictly stationary process may not have finite moments and therefore may not be weakly stationary.
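As an illustration of second-order weak stationarity, the sketch below simulates a stationary AR(1) process (an assumed example, not taken from the notes) and checks that the sample autocovariance depends only on the lag, matching γ(s) = σ²φ^s / (1 − φ²).

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(phi, sigma, n, burn=1000):
    """Simulate X_t = phi*X_{t-1} + e_t; drop a burn-in so the start-up transient is gone."""
    x = np.zeros(n + burn)
    e = rng.normal(0.0, sigma, size=n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

def sample_autocov(x, lag):
    x = x - x.mean()
    return float(np.mean(x[: len(x) - lag] * x[lag:]))

phi, sigma = 0.7, 1.0
x = ar1(phi, sigma, 200_000)
for s in range(4):
    theory = sigma**2 * phi**s / (1 - phi**2)      # gamma(s) of a stationary AR(1)
    print(f"lag {s}: sample {sample_autocov(x, s):.3f}  theory {theory:.3f}")
```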
Stochastic Process Assignment 1
Suppose we know that the number of items produced in a factory during a week is a random variable with mean 500. What can be said about the probability that this week's production will be at least 1000?
SOLUTION
Solution: From Markov's inequality, we have P{Production ≥ 1000} ≤ E[Production]/1000 = 0.5.
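A quick numerical check of the bound. The problem only fixes the mean, so an exponential distribution with mean 500 is assumed here purely for illustration; Markov's inequality must hold for any nonnegative choice.

```python
import numpy as np

rng = np.random.default_rng(4)
# Weekly production: illustrative nonnegative distribution with mean 500.
production = rng.exponential(500.0, size=1_000_000)
print("P{production >= 1000} simulated:", (production >= 1000).mean())
print("Markov bound E[production]/1000 :", production.mean() / 1000)   # about 0.5
```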
EXERCISE 1.4
There are n types of coupons. Each newly obtained coupon is, independently, type i with probability p_i, i = 1, 2, …, n. Find the expected number and the variance of the number of distinct types obtained in a collection of k coupons.
= 1 − ((1 − p_i)^k + (1 − p_j)^k − (1 − p_i − p_j)^k) − (1 − (1 − p_i)^k)(1 − (1 − p_j)^k)
= (1 − p_i − p_j)^k − (1 − p_i)^k (1 − p_j)^k.
Thus the variance of N can be obtained.
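Putting the indicator computation together, E[N] = Σ_i (1 − (1 − p_i)^k) and Var(N) is the sum of the indicator variances plus the covariances derived above. The sketch below evaluates both and compares them with a Monte Carlo estimate; the probability vector and k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def distinct_type_moments(p, k):
    """E[N] and Var(N), where N = number of distinct coupon types among k draws."""
    p = np.asarray(p, dtype=float)
    q = (1.0 - p) ** k                        # q_i = P{type i never obtained}
    mean = float(np.sum(1.0 - q))
    var = float(np.sum(q * (1.0 - q)))        # sum of Var(I_i)
    for i in range(len(p)):                   # add Cov(I_i, I_j) for i != j
        for j in range(len(p)):
            if i != j:
                var += (1.0 - p[i] - p[j]) ** k - q[i] * q[j]
    return mean, var

p, k = [0.5, 0.3, 0.2], 5                     # illustrative type probabilities and draws
mean, var = distinct_type_moments(p, k)

draws = rng.choice(len(p), size=(200_000, k), p=p)        # Monte Carlo check
N = np.array([len(set(row)) for row in draws])
print("E[N]  :", mean, "vs simulated", N.mean())
print("Var(N):", var, "vs simulated", N.var())
```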
Stochastic Processes, Detection, and Estimation
where "∗" denotes convolution, and where the w[n]'s are zero-mean statistically independent Gaussian random variables with variance σ². The two signals, s_0[n] and
(d) Are y and w Gaussian random variables? Are they jointly Gaussian? Explain.
Problem 2.2
Let x_1 and x_2 be zero-mean jointly Gaussian random variables with covariance matrix
Λ_x = [ 34  12 ; 12  41 ].
(a) Verify that Λ_x is a valid covariance matrix.
(b) Find the marginal probability density for x_1. Find the probability density for y = 2x_1 + x_2.
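A small numerical companion to Problem 2.2 (a sketch, using the eigenvalue test as one sufficient way to verify validity): the matrix is symmetric with positive eigenvalues, the marginal of x_1 has variance 34, and Var(y) = a^T Λ_x a for y = 2x_1 + x_2.

```python
import numpy as np

# Covariance matrix from Problem 2.2 (x1, x2 zero-mean jointly Gaussian).
cov = np.array([[34.0, 12.0],
                [12.0, 41.0]])

# (a) A valid covariance matrix must be symmetric and positive semidefinite.
print("symmetric  :", np.allclose(cov, cov.T))
print("eigenvalues:", np.linalg.eigvalsh(cov))      # 25 and 50, both positive

# (b) Marginal of x1: Gaussian, mean 0, variance cov[0, 0] = 34.
#     y = 2*x1 + x2 = a^T x: Gaussian, mean 0, variance a^T cov a.
a = np.array([2.0, 1.0])
print("Var(y)     :", a @ cov @ a)                  # 4*34 + 2*2*12 + 41 = 225
```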
s_1[n] are shown in Figs. 6-2 and 6-3, respectively. The parameter in Fig. 6-3 lies between 0 and 1. The impulse response of the linear time-invariant filter h[n] is shown in Fig. 6-4.
a ⊥ π_i (i.e., a^T π_i = 0) for i = 1, 2, …, k − 1,
for some k ≥ 2. Again, indicate the value(s) of a for which the maximum is achieved.
Problem 2.4
Suppose x and y are random variables. Their joint density, depicted below, is constant in the shaded area and 0 elsewhere.
II. Stochastic Processes (Stochastic Analysis in Finance)
(K1) ν_{t_{σ(1)},…,t_{σ(k)}}(F_1 × … × F_k) = ν_{t_1,…,t_k}(F_{σ^{-1}(1)} × … × F_{σ^{-1}(k)}) for all permutations σ on {1, 2, …, k};
(K2) ν_{t_1,…,t_k}(F_1 × … × F_k) = ν_{t_1,…,t_k,t_{k+1},…,t_{k+m}}(F_1 × … × F_k × R^n × … × R^n) for all m ∈ N, where (of course) the right-hand side has a total of k + m factors.
Then there exists a probability space (Ω, F, P) and a stochastic process {X_t} on Ω, X_t: Ω → R^n, such that
ν_{t_1,…,t_k}(F_1 × … × F_k) = P[X_{t_1} ∈ F_1, …, X_{t_k} ∈ F_k]
for all t_i ∈ T and all Borel sets F_i.

2.2 Martingales
DEFINITION 2.2.1 An n-dimensional stochastic process {M_t}_{t≥0} on (Ω, F, P) is called a martingale (resp. submartingale, supermartingale) with respect to a filtration {F_t}_{t≥0} (and with respect to P) if
(I) {M_t} is F_t-adapted,
(II) E[|M_t|] < ∞ for all t, and
(III) E[M_t | F_s] = M_s (resp. ≥, ≤), a.s., for all s ≤ t.
(Note: if t ∈ T = {0, 1, 2, …}, then {M_t} is a martingale (resp. submartingale, supermartingale) if and only if E[M_{k+1} | F_k] = M_k (resp. ≥, ≤), a.s.)
It is clear that any martingale must be both a sub- and a supermartingale.

Some examples of martingales
Example 2.2.2 Let {ξ_n, n ≥ 1} be a random process on (Ω, F, P) and F_n = σ(ξ_0, …, ξ_n). If E(ξ_{n+1} | F_n) = 0, let X_n = Σ_{k=0}^{n} ξ_k; then {X_n, n ∈ N} is a martingale.
Example 2.2.3 Let {ξ_n} be an independent random process with mean 1 and X_n = Π_{i=1}^{n} ξ_i; then {X_n, n ∈ N} is a martingale.
Example 2.2.4 Let ξ be a random variable on (Ω, F, P) and {F_n} a filtration on (Ω, F); then {X_n = E(ξ | F_n), n ∈ N} is a martingale. If ξ_n is nonnegative, then {X_n, n ∈ N} is a submartingale.

PROPERTIES OF MARTINGALES
THEOREM 2.2.5 Let M_t be a submartingale (resp. martingale). Then E(M_t), as a function of t, is nondecreasing (resp. constant). In particular, if X_t is a martingale and E[|X_t|^p] < ∞ for some p ≥ 1, then {|X_t|^p} is a submartingale.
THEOREM 2.2.6 Let X_t, Y_t be F_t-submartingales (resp. martingales). Then
i) for all a ≥ 0, b ≥ 0, aX_t + bY_t is an F_t-submartingale (resp. martingale);
ii) {X_t ∨ Y_t} is an F_t-submartingale;
iii) let φ: R → R be a nondecreasing convex function (resp. a convex function) such that E[φ(X_t)] exists for all t ≥ 0; then φ(X_t) is a submartingale.

P{τ ≤ t, B_t < x} = P{B_t > x} with probability 1. The expected time to …

Definition*: For a random variable X with distribution function F(x), if the integral ∫_{−∞}^{∞} e^{−αx} dF(x) exists and is finite for α in some interval (α_1, α_2), then the function defined on that interval,
m_X(α) = ∫_{−∞}^{∞} e^{−αx} dF(x) = E[e^{−αX}],
is called the moment generating function of X.
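A simulation sketch of Examples 2.2.2 and 2.2.3, using illustrative distributions that are not prescribed in the notes (Uniform[0, 2] factors have mean 1, Gaussian increments have conditional mean 0). By Theorem 2.2.5 the mean of a martingale is constant in n, which the estimates below should reproduce up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_steps = 100_000, 10

# Example 2.2.3: xi_i independent with E[xi_i] = 1, so X_n = prod_{i<=n} xi_i
# is a martingale; its mean stays equal to 1 for every n.
xi = rng.uniform(0.0, 2.0, size=(n_paths, n_steps))
X = np.cumprod(xi, axis=1)
print("E[X_n] for n = 1, 5, 10:", X[:, [0, 4, 9]].mean(axis=0))

# Example 2.2.2: partial sums of increments with (conditional) mean 0 form a
# martingale; the mean stays equal to 0 for every n.
S = np.cumsum(rng.normal(0.0, 1.0, size=(n_paths, n_steps)), axis=1)
print("E[S_n] for n = 1, 5, 10:", S[:, [0, 4, 9]].mean(axis=0))
```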
Stochastic Processes
French Chinese Class International Master of ScienceinElectronics &Telecommunications Signal & Image ProcessingMicroelectronicsSTOCHASTIC PROCESSES (1)INTRODUCTION (1)MEAN & CORRELATIONS (2)P ROPERTIES (3)E XAMPLE (4)STATIONARY STOCHASTIC PROCESSES (4)SYSTEMS WITH STOCHASTIC INPUTS (6)L INEAR T IME I NVARIANT S YSTEMS (L INEAR F ILTER) (7)Output Statistics (8)LTI S YSTEMS WITH W IDE-S ENSE S TATIONARY P ROCESSES (10)Correlations and Spectra (11)DISCRETE-TIME STOCHASTIC PROCESS (14)RANDOM VARIABLE ESTIMATION (16)MEAN SQUARE ESTIMATION (17)N ONLINEAR M EAN S QUARE E STIMATION (17)L INEAR M EAN S QUARE E STIMATION:O RTHOGONALITY P RINCIPLE (18)Examples (20)DETECTION OF SIGNAL IN RANDOM NOISE (22)B AYES APPROACH FOR HYPOTHESIS TESTING (25) (27)Maximum likelihood criterionM INIMUM PROBABILITY OF ERROR CRITERION (28)Stochastic ProcessesIntroductionLet ξ denote the random outcome of an experiment. To every such outcome suppose a waveform (),X t ξ is assigned.(,)n X t ξ(,)k X t ξThe collection of such waveforms forms a stochastic process (or a random process). The set of {}k ξ and the time index t can be continuous or discrete (countable infinite or finite) as well. For fixed i ξ∈Ω (the set of all experimental outcomes), (),X t ξ is a specific time function. For fixed t , (11,i X X t )ξ= is a random variable. The ensemble of all such realisations over time represents the stochastic process ()X t . For example()()0cos 2X t a f t πϕ=+, where ϕ is a uniformly distributed random variable in the interval[]0,2π represents a stochastic process. Stochastic processes are everywhere: Temperaturevariations, stock market fluctuations, various queuing systems all represent stochastic phenomena.If ()X t is a stochastic process, then for fixed t , ()X t represents a random variable. Its distribution function is given by :()(){}(),X F x t PX t x =< (1)1(,)X t ξ2(,)X t ξNotice that depends on t , since for a different t , we obtain a different random(,X F x t )()X t represents the first-order probability density function of the process .()X t ()1For and , 1t t =2t t =1X X t = represents two different random variables and()22X X t = respectively. Their joint distribution is given by:()()(){}()12121122,,,,X F x x t t PX t x X t x =<< (3)X t represents the second-order density function of the process .Similarly:()1212,,,,,X n n p x x x t t t (5)()X t represents the n th order probability density function of the process . Complete specification of the stochastic process ()X t requires the knowledge offor all ()1212,,,,,X n n p x x x t t t ,1,2,...,i t i n = and for all n . (an almost impossible task inpractice). ()X t ()Y t The joint statistics of two stochastic processes and are determined by means of the joint distribution of the random variables:()()()()()(''11221122, , , ,, , , n n m )'m X X t X X t X X t Y Y t Y Y t Y Y t ======()()()Z t X t iY t =+ A complex stochastic process is characterised by means of the joint statistics of the real processes ()X t ()Y t and .A vector stochastic process (i.e. 
n -dimensional stochastic process) is a family of n stochastic processes.Mean & Correlations()X t The mean (or average, or expected value) of a Stochastic Process is given by:()()(),X X m t X t x p x t dx +∞−∞=⋅⎡⎤⎣⎦∫E (6)In general, the mean of a process can depend on the time index t .()X t The autocorrelation function of a process is defined as:()()()*1212,XX R t t X t X t ⎡⎤⋅⎣⎦ E (7)()11X X t =The autocorrelation represents the interrelationship between the random variables and ()2()X t 2X X t = issued from the random process .()X t The autocovariance of the random process is given by:()()()()*121212,,XX XX X X C t t R t t m t m t =− (8)()X t ()Y t The cross-correlation of two processes and is defined as:()()()*1212,XY R t t X t Y t ⎡⎤⋅⎣⎦ E (9)Similarly,()()()()*121212,,XY XY X Y C t t R t t m t m t =− (10)is their cross-covariance .()X t and are called mutually orthogonal if: ()Y t Two processes()1212,:,XY t t R t t ∀∈= 0 (11)They are said to be uncorrelated if:()()()(*1212121212,:,0,:,XY XY X Y t t C t t t t R t t m t m t ∀∈=⇔∀∈= ) (12)PropertiesThe correlation function satisfies the following properties:()()()()(**212112,,YX XY )R t t Y t X t R t t ⎡⎤=⋅=⎣⎦E (Hermitian symmetry) • ()()()2,XX R t t X t ⎡⎤=⎣⎦E 0> is the mean instantaneous power. •1{}n i i a =• For any sequence of constants the autocorrelation function verifies theinequality 1: . Thus, the autocorrelation function ()*11,XX nni j i j i j a a R t t ==≥∑∑0)(21,XX R t t is anonnegative definite function.Example()()()0cos 2sgn X t a f t t πϕ=+⋅Let us consider , where ϕ is a uniformly distributed random variable in the interval []0,2π.For this random process the mean is given by:()()()()()()020sgn cos 21sgn cos 202X m t X t a t f t a t f t u du ππϕππ==⋅+⎡⎤⎡⎣⎦⎣=⋅+=∫E E ⎤⎦And the autocorrelation function is given by:()()()()()()()()()()()()()()()(){}()()()()212120102212120120122121201201221212012,sgn cos 2cos 2,sgn cos 22cos 22,sgn cos 22cos 22,sgn cos 22XX XX XX XX R t t a t t f t f t a R t t t t f t t f t t a R t t t t f t t f t t a R t t t t f t t πϕπϕπϕππϕππ=⋅+⋅+⎡⎤⎣⎦⎡⎤=⋅+++−⎣⎦⎡⎤⎡=⋅+++−⎣⎦⎣=⋅−E E E E ⎤⎦Stationary Stochastic ProcessesStationary processes exhibit statistical properties that are invariant to shift in the time index. Thus, for example, second-order stationarity implies that the statistical properties of the pairs ()(){}12,X t X t ()(){}12,X t X t ττ++ and are the same for any τ. Similarly first-order stationarity implies that the statistical properties of ()1X t ()1X t τ+ and are the same for anyτ.In strict terms, the statistical properties are governed by the joint probability density function. Hence a process is n th -order Strict-Sense Stationary if:()()12121212,,, ,,,,, ,,X X n n n n p x x x t t t p x x x t t t τττ≡+++ (13)X t 1This can be derived by noticing that()21||0 for ni i i Y Y a =≥=⎡⎤⎣⎦∑Efor any τ, where the left side represents the joint density function of the random variables ()()()1122, , , n n X X t X X t X X t === and the right side corresponds to the joint density function of the random variables ()()(1122', ', , 'n n X X t X X t X X t )τττ=+=+= +. ()X t A process is said to be Strict-Sense Stationary (S.S.S) if the joint probability density function satisfies the equation Eq. 13 above for all 1,2,...n =,1,2,...,i t i n = and for any τ. ;()(,,X X p x t p x t )τ≡+For a first-order strict sense stationary process , we have for any τ. In particular, by setting τ = – t we obtain:()(),,0,X X p x t p x t =∀∈ (14)()X t is independent of t . 
This means that the first-order probability density function ofIn this case, the mean value of the random process is independent of time index t :()(),0X X X t x p x dx +∞−∞=⋅=⎡⎤⎣⎦∫E m (15)Similarly, for a second-order strict-sense stationary process we have()(12121212,,,,,,X X p x x t t p x x t t )ττ≡++ for any τ. In particular, for 2t τ=− we have:()()12121212,,,,,,0X X p x x t t p x x t t ≡− (16)Thus, the second-order probability density function of a strict sense stationary process ()X t depends only on the difference of the time indices 12t t τ−=.In this case the autocorrelation function is only function of the difference of the time indices 12t t τ−= :()()()()()*12121212,,XX XX X R t t X t X t R t t t t ⎡⎤=⋅⎣⎦=Γ−E (17).()X t Notice that Eq. 15 and Eq. 17 are consequences of the stochastic process being first and second-order strict sense stationary. On the other hand, the basic conditions for the first and second order stationarity (i.e. Eq. 14 and Eq. 16) – are usually difficult to verify. Therefore, we often resort to a weaker definition of stationarity, known as Wide-Sense Stationarity , by making use of Eq. 15 and Eq. 17 as the required conditions.()X t is said to be Wide-Sense Stationarity (W.S.S) if: Hence, a Stochastic Process()()()()()(*121212 ,X X XX X m t X t m )R t t X t Xt t t ==⎡⎤⎣⎦⎡⎤=⋅=Γ⎣⎦E E − (18)()X t This means that for wide-sense stationary processes , the mean is a constant and the autocorrelation function depends only on the difference between the time indices. These last equations do not bring any information about the nature of the probability density functions, and instead deal with the average behaviour of the process. Since Eq. 18 follow from Eq. 14 and Eq. 16, strict-sense stationarity always implies wide-sense stationarity. However, the converse is not true in general, the only exception being the Gaussian process. In fact, as ()X t ()()()1122, , , n n X X t X X t X X t === is a Gaussian process, then by definition are jointly Gaussian random variables for any whose joint characteristic function is given by:12,,n t t t ()()()1,/212,,,nnk k XX i k i k k i kXjm t C t t n eωωφωωω=−−∑∑∑= ω (19)()X t which is given here by: Where is the autocovariance of the process XX C()()21212XX XX X C t t t t m −=Γ−− (20).()()()1122, , , n n X X t X X t X X t === Thus, the random variables and the random variables ()()()1122',', , 'n n X X t X X t X X t ττ=+=+=+ τ have the same joint probability density function for any , for all n and for any τ. This demonstrates the strict sense stationarity of Gaussian processes from its wide-sense stationarity. As a result, if 12,,n t t t ()X t is a Gaussian process, then wide-sense stationarity (w.s.s) implies strict-sense stationarity (s.s.s)2.Systems with Stochastic Inputs(),i X t ξA deterministic system transforms each input waveform into an output waveform()(){},k Y t S X ,k ξξ=i by operating only on the time variable t . Thus a set of realizations(){},,1,2,...,iX t i ξ=()X t n , corresponding to a process , at the input generates a new set of realizations (){},,1,2,...,iY t i n ξ= at the output associated with a new process . 
Astochastic system operates on both of the variables t and ()Y t ξ.2Notice that since the joint probability density function of Gaussian random variables depends only on their second order statistics, which is also the basis for wide sense stationarity, we obtain strict sense stationarity as well.1(,)X t ξ2(,X t ξ(,k X t ξ(,n X t ξ(,n Y t ξ(,k Y t ξ(2,Y t ξ(1,Y t ξThe main goal is to investigate the output process statistics in terms of the input process Linear Time Invariant Systems (Linear Filter)statistics and the system function. Here, we focus on deterministic linear time invariant systems.{}S • represents a linear system if:{}{}{}11221122S a X a X a S X a S X +=+ (21)where ()1X t ()2X t and are two input signals. Thus a linear combination of inputs results in e linear combinat the sam ion of outputs.Let {}Y S X = be the output of the system (i.e. the response of the system to the input signal X ). The system represented by {}S • is said to be time-invariant system (or homogeneous) if:000{}{} for any time shift t t Y S X S X Y t =⇒= (22)where ()()()()0000 and t t X t X t t Y t Y t t =−=−. Hence, any time shift in the input leads to e time shift in the output. 0t the sam{}S •If satisfies both Eq. 21 and Eq. 22 then it is said to be a linear time-invariant (LTI) tem For any LTI system, the output to Dirac distribution (i.e. delta function) is called impuls sys or a linear filter .e response of the filter. It allows for determining the output of the filter to any arbitrary input signal as follows:{}{}h S S X h X δ=⇒=∗ (23)here denotes the convolution product: (24)w ∗()()()()()Y h X Y t h t u X u du h u X t u du +∞+∞−∞−∞=∗⇔=−=−∫∫()t δ()h t Dirac distribution Impulse response of the linear filterOutput StatisticsUsing the input-output relationship of a linear filter, an expression of mean value, autocor (25)relation function of output random process and cross-correlation of input and output processes can be derived.()()()()()()()()()()Y X X m t Y t X u h t u duX u h t u du m u h t u dum h t +∞−∞+∞−∞+∞−∞==−⎡⎤⎡⎤⎣⎦⎣⎦=−⎡⎤⎣⎦=−=∗∫∫∫E E E ()()()()()()()()()()()()()()()()222*1212**12**12*12*1212,,,YY XX XX R t t Y t Y t X t u X t v h u h v dudv X t u X t v h u h v dudv R t u t v h u h v dudvR h h t t ⎡⎤=⎣⎦⎡⎤=−−⎢⎥⎢⎥⎣⎦⎡⎤=−−⎣⎦=−−=∗∗∫∫∫∫∫∫E E E(26)()()()()()()()()()()()()()*1212**12**12*12*212,,,XY XX XX R t t X t Y t X t X t u h u du X t X t u h u du R t t u h u duR h t t ⎡⎤=⎣⎦⎡⎤=−⎢⎥⎣⎦⎡⎤=−⎣⎦=−=∗∫∫∫E E E (27) inputY h X X h =∗=∗More generally, let us consider two stochastic processes()1X t and ()2X t and twolinear filters with respectively and ()1h t ()2h t as impulse response. One can show using the eam approach as in Eq. 26 that:s ()()()()()()()()()()()()()()(22*1212,)()1221212*121122**112212**112212*112212,,Y Y X X X X R t t Y t Y t X t u X t v h u h v dudv X t u X t v h u h v dudv t v h t t ⎡⎤=⎣⎦⎡⎤=−−⎢⎥⎢⎥⎣⎦⎡⎤=−−⎣⎦−∗∫∫∫∫ E E E (28) R t u h u h v dudvR h =−=∗∫∫a white noise process if its correlation satisfiWhite Noise ProcessA Stochastic Process ()W t is said to be es: ()()()()()*1212112,WW R t t W t W t c t t t δ⎡⎤==⎣⎦E − (29)Thus, unless . The process ()12,0ww R t t =12t t =()W t is said to be wide-sense stationary white noise if: (X t ()Y t *12Y X XX YY R R h h =∗∗(1X t ()1t (2X t ()2t 1212*1122X X Y Y R R h h =∗∗()()()()12121212;,,,,WW W W t m =⎡⎤⎣⎦E W R t t c t t t t t t t δ=⋅−=Γ−∀∈ (30)Notice is riables. For a linear filter the response to a W.S.S white noise input that if the process ()W also Gaussian, then all of its samples are independent andom va t r()W t is called coloured oise. 
This output process, denoted by ()N t is also W.S.S. and verifies: n ()()()()ˆ0;COR ,,,N W N t m h c h h t ττ=Γ=⋅∀⎡⎤⎣⎦ E ∈ (31)ense Stationary ProcessesIf the input stochastic processLTI Systems with Wide-S()X t is Wide-Sense Stationary then()()X X m t X t m ==⎡⎤⎣⎦E and ()()()()*121212,XX X R t t X t X t t t ⎡⎤=⋅=Γ−⎣⎦E . So, from Eq. 25, we have:(32) ()()()()() ˆ0Y X X X Y t m t m h t u du m h v dv m h+∞+∞−∞−∞==−==⎡⎤⎣⎦∫∫E which means that the expected value of the process ()Y t is independent of t . Similarly, from Eq. 26, we have:()()()()()()()(2**12121,YY X )()()()()()()22*1212COR ,X X Y R t t Y t Y t t u v h u h v dudv τ⎡⎤==Γ−⎣⎦Γ∫∫E t x h u h u x dudxh h t t t t −−=−−=Γ∗−=Γ−∫∫ (33)This means that the autocorrelation of the output cess pro ()Y t is only function of the difference of the time indices 12t t τ−=. White noise()W t ()N tIn summary, the process is such that its mean is constant and its autocorrelation function is shift-invariant. Thus, the output process ()Y t ()Y t is also Wide-Sense Stationary.If the two stochastic processes ()1X t and ()2X t are jointly Wide-Se tationary processes (i.e. their cross-correlation is shift invariant) then the two output stochastic processes are also jointly Wide-Sense Stationary and Eq. 28 becomes:nse S ()()()()()()122*()()()12*12112212,COR ,Y Y X X R t t Y t Y t x h u h u x dudx τ=Γ−−∫∫()1212121212X X Y Y h h t t ⎡⎤=⎣⎦=Γ∗− E(34)t t =Γ−Correlations and Spectra()x t ()ˆxfFor a deterministic signal , the spectrum is well defined. In fact, if represents its Fourier transform, i.e.()()2ˆi ft xf x t e π+∞−−∞=∫dt (35)then ()2ˆxf represents its energy spectrum. This follows from Parseval’s theorem since the ignal energy is given by:s()()22ˆxf df x t dt +∞=∫∫(36)ence, +∞−∞−∞()[],f f df +2ˆH f df represents the signal energy in the frequency band x. for a stochastic process, ()()()2,0XX R t t X t ⎡⎤=⎣⎦E > repre the ensemble average power nstantaneous energy) at the instant t. To obtain the spectral distribution of power versus finite sents (i frequency for stochastic processes, it is best to avoid infinite intervals to begin with, and start with a interval [],,0T T T −+>. Formally, we introduce “local” Fourier transform of a process ()X t by ()()()2ˆi ft T T X f P t X t e π+∞−−∞=∫()[]1 if ,T P t t T T =∈−+dt where and zeroelsewhere. Therefore:()()()2222T TT−∞2ˆ1i ft X f P t X t e dt π+∞−=∫(37)represents the power distribution associated with that a realization of the process over [],,0T T T −+>. Notice that this quantity represents a random variable for every frequency f and its en le average g [semb ives, the average power distribution over the interval ],,0T T −+. Thus: T >()()()()()()()()()()()()()12122*1212122121212121,2XX i f t t i f t t P t P t X t X t e dt dt TP t P t R t t e dt dt Tππ+∞+∞−−−∞−∞+∞+∞−−−∞−∞⎡⎤=⎣⎦=∫∫∫∫E (38) 22+2 ˆ122T i X f E f P t X t e T T π∞−−∞⎡⎤⎡⎤⎢⎥==⎢⎥⎢⎥⎣⎦⎢⎥⎣⎦∫EEts the power distribution of ft dt [],,T T T ()X t represen over the interval 0−+>. For wide sense rocesses, it is possible to further simplify this last equation. In fact, if stationary p ()X t is W.S.S. then Eq. 38 becomes:()()()()()()()()()()222X T i f T T e d πττττ−+∞−⎝⎠=ΛΓ∫12212121222222121221T X XX i f t t T i f TTi f E f P t P t t t e dt dt T T ed Te d ππτπτττττττ+∞+∞−−−∞−∞−−−=Γ−=−Γ⎛⎞=−Γ⎜⎟∫∫∫∫(39)he power spectral density or power spectrum is obtained by taking the limits of Eq. 39:−∞T ()()()()2lim lim TXi f X TT T f Ef e d πτγττ+∞−−∞→+∞→+∞==ΛΓ∫ τ()()2Xi f X f e d πτγττ+∞−−∞=Γ∫ (40)the autocorrelation function and the power spectrum of a W.S.S. 
process form a Fourier transform pair, a relation known as the Wiener-Khinchin Theorem .particular, we get: In This means that the area under power spectral density represents the total power of the process ()X t .The nonnegative-definiteness property of the autocorrelation function translates into the “nonnegative” property for its Fourier transform i.e. the power spectrum: )()(()()()()()()**11112*11112,:0XXXk l X X X n n n nk lk lk lkl k l k l nni f t t k l k l k l k l na a R t t a a tt a a f edff f πγγ====+∞−−−∞==−∞===2*k l n n i f t t f a a e df πγ+∞−−21kX i ft k k f a edf πγ+∞−−∞=Γ−=⎜⎟⎝⎠⇒∀∈≥∑∑∑∑∑∑∫The autocorrelation of a W.S.S. process ⎛⎞=∑∑∫ (42)=≥∑∫()X t , denoted by X Γ, verifies:()()()()*0X X X X τττΓ−=ΓΓ≤Γ (43)The cross-correlation of two jointly W.S.S. processes ()X t ()Y t ,X Y Γ and , denoted by , verifies:()()*XY YX ττΓ−=Γ (44)ensityThe power spectrum , or spectral d , of a process ()X t has been defined above tocorrelation of the process. The ss-power spectrum of two jointly W.S.S. processes cro as the Fourier transform of the au ()X t and ()Y t is similarly defined as the Fourier transform of their cross-correlation:()()2i f XY XYf ed πτττ−∞=Γ∫ (45)+∞−γWe have shown that the LTI output ()Y t ()X t to S. process a W.S.is also W.S.S. quatioE ns 33 and 34 can be transformed into the Fourier domain:Disc Discrete-time random processes constitute the basis of most modern communications systems, digital signal processing systems and many other application areas,including speech and audio modelling for coding/noise-reduction/recognition, radar andsonar, stochastic control systems, Biomedical devices,... Most of the results for continuous time random processes follow through almost directly to the discrete-time domain.A discrete-time stochastic process rete-time Stochastic Process()X n is a sequence of random variables . Theean, autocorrelation and auto-covariance functions of a discrete-time process are gives by:m ()()X m n X n =⎡⎤⎣⎦E ()()()()()()()1212*121212,,X X XX XX XX C n n R n n m n m n ⎣⎦=−*,R n n X n X n ⎡⎤=⋅E (47)As for Continuous-time case, strict sense stationarity and wide-sense stationaritydefinitions apply here also. Thus, ()X n is wide sense stationary (W.S.S.) if its mean is a constan t:t ant its autocorrelation is shift invarian ()()X X m n X n m ()()()(*121212),XX X n n X n Xn n n ⎡⎤=⋅=Γ−⎣⎦E (48)==⎡⎤⎣⎦E RThe positive-definite property of the autocorrelation sequence is valid and can be expressed as follows: let 01[, , , ]Tm a a a a = be an arbitrary vector then . Thiscan als ()*110X mmi j i j i j a a n n ==Γ−≥∑∑o be stated as follows:Theorem : A sequence (){}n n ∈Γ forms an autocorrelation sequence of a wide sense tationary stochastic process ()X n 1,2,m =… s if and only if for every , the Hermitian-Toeplitz matrix given by:m T ()()()()()()()()(*†1 0 1 1 m m m ⎜⎟⎜⎟ΓΓΓΓ−Γ⎝ )()()()***0 12 1 10m m m T T ΓΓΓΓ⎛⎞==⎜⎟⎜⎟⎜⎟Γ−ΓΓ⎠very vectis positive definite i.e. for e or 01[,, , ]T m a a a a = , the quantity †0m a T a ≥.()X n r whose If represents a wide-sense stationary input to a discrete-time LTI system i.e. 
If X(n) represents a wide-sense stationary input to a discrete-time LTI system, i.e. a linear filter whose impulse response is represented by the sequence h(n), and Y(n) stands for the system output, then, as for continuous-time stochastic processes, the discrete-time process Y(n) is W.S.S. and its mean and autocorrelation satisfy:

m_Y = E[Y(n)] = m_X ∑_k h(k) = m_X ĥ(0),   Γ_Y(k) = E[Y(n) · Y*(n - k)] = (Γ_X * COR(h,h))(k)   (49)

The wide-sense stationarity property from input to output is therefore preserved for discrete-time linear filters as well.

As for continuous-time W.S.S. processes, the power spectrum, or spectral density, of a process X(n) is defined as the discrete-time Fourier transform of the autocorrelation of the process:

γ_X(f) = ∑_n Γ_X(n) e^(-2πinf)   (50)

The power spectrum is a real, positive, even and periodic function of period equal to 1. The cross-power spectrum of two jointly W.S.S. processes X(n) and Y(n) is similarly defined as the discrete-time Fourier transform of their cross-correlation:

γ_XY(f) = ∑_n Γ_XY(n) e^(-2πinf)   (51)

The cross-power spectrum is a periodic function of period equal to 1.

White noise is defined in terms of its autocorrelation function. A discrete-time wide-sense stationary process N(n) is termed (discrete-time) white noise if:

E[N(n)] = m_N;   R_NN(n1, n2) = Γ_N(n1 - n2) = σ_N² δ_d(n1 - n2) + m_N²,   ∀ n1, n2 ∈ ℤ   (52)

where δ_d(n) = 1 if n = 0 and 0 otherwise is the impulse sequence, and σ_N² is the variance of the process. The power spectrum of a white noise is given by Eq. 50:

γ_N(f) = ∑_n Γ_N(n) e^(-2πinf) = σ_N² + m_N² ∑_k δ(f - k)   (53)

It is a periodic power spectrum (period equal to one) composed of a flat line across all frequencies superimposed on a Dirac comb of period equal to one.

We have shown that the LTI output Y(n) to a W.S.S. process X(n) is also W.S.S. Equations 49 can be extended and transformed into the Fourier domain, or similarly into the Z domain, giving γ_Y(f) = γ_X(f) |ĥ(f)|². Linear filtering of a (discrete) white noise results in a (discrete) coloured noise.
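The colouring effect can be illustrated numerically. In the sketch below (an added illustration; the FIR filter h and the noise variance are arbitrary choices), zero-mean discrete white noise is filtered and the averaged periodogram of the output is compared with σ² |H(f)|², the relation quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0                                    # white-noise variance (assumed)
h = np.array([1.0, 0.5, 0.25, 0.125])           # arbitrary FIR impulse response
n_samples, n_realizations = 2048, 300

psd_estimate = np.zeros(n_samples)
for _ in range(n_realizations):
    w = rng.normal(0.0, np.sqrt(sigma2), n_samples)
    y = np.convolve(w, h, mode="same")          # coloured noise
    psd_estimate += np.abs(np.fft.fft(y)) ** 2 / n_samples
psd_estimate /= n_realizations

# Theory: gamma_Y(f) = sigma^2 |H(f)|^2, with H the DTFT of h
H = np.fft.fft(h, n_samples)
psd_theory = sigma2 * np.abs(H) ** 2

print("mean absolute deviation:", float(np.mean(np.abs(psd_estimate - psd_theory))))
```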
Random Variable Estimation

Estimating a parameter of a random process is of particular interest in many applications. It relies on the probability theory of estimation. In this section we focus on the problem of estimating a random variable, say Y, in terms of another random variable X (these two random variables may, for instance, result from the observation of two random stochastic processes at two given instants). Of course, this estimate Y_es(X), which depends on X, needs to be the "best" among a set of possible estimates. Therefore, a criterion of optimality has to be provided in order to determine Y_op(X), the "best" estimate of Y. A cost function, also called loss function, C(Y, Y_es) is built as a function of both Y and its estimation Y_es(X), and a class of estimates, called Bayes estimates, is then derived by minimising the so-called Bayes criterion, defined as the expectation value (average value) of the cost function, and given by:

R = E[C(Y, Y_es)] = ∫∫ C(y, y_es) p_{X,Y}(x, y) dx dy   (55)

Thus, Y_op is the value for which the Bayes criterion for a given cost function is minimum. The minimum value of this criterion is called the Bayes risk.

[Figure: examples of different cost functions C(Y, Y_es).]

Mean Square Estimation

For this important class of estimates, the cost function and the Bayes criterion are the following: C(Y, Y_es) = |Y - Y_es|² and mse = E[|Y - Y_es|²]. The optimality criterion then consists in minimising the Bayes criterion, i.e. the mean square value of the error (MS error).

Nonlinear Mean Square Estimation

We would like to produce an estimation Y_es(X) of the random variable Y as a deterministic function, say g(·), of the random variable X. The problem can be formulated as follows: find g(·) such that

mse = E[|Y - g(X)|²] = ∫∫ |y - g(x)|² p_{X,Y}(x, y) dx dy   (57)

is minimum.

To solve this question, let us start by considering the case where g(·) is simply a constant function, i.e. g ≡ c. In this case, Eq. 57 reduces to:

mse = E[(Y - c)²] = ∫ (y - c)² p_Y(y) dy = E[Y²] - 2c E[Y] + c²

which is a parabolic function of the parameter c that reaches its minimum when d(mse)/dc = 0, that is c = E[Y]. Thus the MS estimation of the random variable Y in terms of a constant value is given by the mean value of Y.

Now, let us come back to Eq. 57. The latter can be rewritten:

mse = E[|Y - g(X)|²] = ∫∫ |y - g(x)|² p_{X,Y}(x, y) dx dy
= ∫ [ ∫ |y - g(x)|² p_{Y/X}(y/x) dy ] p_X(x) dx
= ∫ E[|Y - g(X)|² / X = x] p_X(x) dx

As the conditional mean square error E[|Y - g(X)|² / X = x] ≥ 0, the mse is minimum if this latter expression is minimum. For X = x, the conditional mean square error is minimal (as in the case of a constant c) if g(x) = E[Y / X = x] = ∫ y · p_{Y/X}(y/x) dy. We then denote by g_op(X) = E[Y / X] the function leading to the optimal estimation. The best estimation of the random variable Y in terms of a deterministic function of the random variable X is provided by the expected value of Y for given values of X.

We can derive that:

mse_min = E[|Y - g_op(X)|²] = E[|Y|²] - E[|g_op(X)|²],   E[(Y - g_op(X)) · g_op*(X)] = 0

This means that when the optimum is reached, the estimation error is orthogonal to the random variable used for the estimation.

Linear Mean Square Estimation: Orthogonality Principle

Let us now consider the case where we would like to find an optimal estimation of a random variable Y, in the mean square sense, in terms of a linear combination of N random variables X_1, X_2, ..., X_N. That is, we would like to find N constants a_1, ..., a_N such that the mean square value

mse = E[ |Y - ∑_{n=1..N} a_n X_n|² ]

of the resulting error ε = Y - ∑_{n=1..N} a_n X_n is minimum. This problem is encountered in many situations and some of them will be discussed later. The solution is based on the following result, known as the orthogonality principle or Projection Theorem: the MS error mse is minimum if and only if the constants a_1, ..., a_N are such that the error ε = Y - ∑_{n=1..N} a_n X_n is orthogonal to X_1, X_2, ..., X_N:

∀ k ∈ {1, 2, ..., N}:   E[ε · X_k*] = E[ (Y - ∑_{n=1..N} a_n X_n) · X_k* ] = 0   (58)

Let us denote by a_1, a_2, ..., a_N the solution of Eq. 58 and by Y_op = ∑_{n=1..N} a_n X_n the optimal MSE estimate of Y. The above result can be rewritten by adopting a vector formalism. In fact, by setting A = (a_1 a_2 ... a_N)^T and X = (X_1 X_2 ... X_N)^T, the mse is given by mse = E[|Y - A^T X|²] and the orthogonality principle becomes:

E[(Y - A^T X) X*] = 0   (61)

The optimal MSE solution (i.e. the optimal vector A = (a_1, a_2, ..., a_N)^T) is then derived by solving Eq. 61.
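The notes break off before the explicit solution of Eq. 61. Assuming the standard derivation, taking expectations of the orthogonality condition yields the normal equations E[X X^T] A = E[X Y] in the real-valued case. The following sketch solves them on simulated data; the linear model used to generate Y is an arbitrary assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, N = 100_000, 3

# Simulated data (assumed model, for illustration only): Y depends linearly on X plus noise
X = rng.normal(size=(n_samples, N))
true_coeffs = np.array([0.7, -1.2, 0.4])
Y = X @ true_coeffs + rng.normal(scale=0.5, size=n_samples)

# Sample estimates of R_X = E[X X^T] and r_XY = E[X Y]
R_X = X.T @ X / n_samples
r_XY = X.T @ Y / n_samples

# Normal equations R_X A = r_XY (the solution of Eq. 61 in the real-valued case)
A = np.linalg.solve(R_X, r_XY)

# Orthogonality check (Eq. 58): the error must be uncorrelated with every X_k
error = Y - X @ A
print("estimated A:", A)
print("E[error * X_k]:", X.T @ error / n_samples)   # all close to zero
```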
stochastic process
Towards reliable software performance modelling usingstochastic process algebraNigel Thomas and Jeremy BradleyResearch Institute in Software Evolution,University of Durham, UKAbstractThe aim of any modelling exercise is to develop a better understanding ofthe system that is being studied, however it is unfortunately not always easyto understand abstract performance models or the metrics that are derivedfrom them. In this tutorial paper we focus on the facilitation of more reliableperformance prediction through better understanding of models andsolutions in the context of a stochastic process algebra. As well as beingconfident that the model is accurate, the designer must also have confidencein the measures derived. Partly this is a matter of ensuring that the correctmeasures are derived, but also that those measures are sufficiently boundedto be applicable.1. IntroductionIncreasingly, practitioners in mainstream software engineering are recognising the need for qualitative predictions of the services that their software systems provide. This is especially evident in the growing fields of WWW based applications and component based systems where there is a large amount of interaction between system elements and non-functional information, such as timing, which is crucial to usability. Traditional, post-development, code-based metrics are no longer adequate indicators of a products ability to maintain a market position and a wide range of analysis techniques typically need to be applied. Performance modelling provides one such class of evaluation techniques. At the same time it is recognised that future software development must not lead to large, inflexible, legacy type systems. A far greater emphasis has therefore been put on understanding the structure of existing complex software systems, applications in development and the impact of changes.A valuable set of visualisation techniques has been developed over recent years to support comprehension of such systems [31,32,45,46].Because of the properties of compositionality, parsimony and expressiveness, the possibility of including domain information and the ability to automatically derive performance measures, stochastic process algebra (SPA), such as PEPA [23], are an extremely good modelling paradigm for a wide variety of different domains. In addition, it has been successfully demonstrated that performance models can be automatically derived from design specifications in the Unified Modelling Language (UML) [28,29,30,33,35,37,38] and that SPA models can be related directly to code [18,39]. These developments have enabled performance models, and in particular SPA models, to be studied by software engineers with little or no experience of stochastic modelling.Figure 1.1: A typical performance modelling cycleFigure 1.1 shows a typical modelling cycle. The process starts with an elementary design, or system specification, which is then used to specify a performance model of the system. To facilitate easier solution, or in order to include additional features such as reward structures the model may be manipulated into an alternative form, which is then solved to derive performance measures. Since measures are insufficient in themselves to fully analyse the system, they must be applied, or interpreted, to facilitate the evaluation of the design and development of improvements. Clearly, it is vital that the designer has confidence in the measures that are derived in order that any changes made to the design are valid. 
Measures taken from the model must therefore be related to properties of the system rather than the model and must be accurate and predictable. In addition the designer must have confidence that the model which is solved accurately depicts the behaviour of the system which is being designed.By using an established design notation, such as UML, and automatically deriving a performance model, in SPA for example, the designer can have some confidence that there is a strong relationship between the model and the design. The model specification process, as presented by Pooley and King [37], relies on identifying the observable behaviour and interaction between objects in the design specification in order to identify the action sequences and synchronisation that need to be included in the model. In addition, because the objects in the design and the model are the same, the design notation may be used to explicitly define performance measures in terms of these objects. The quality of the measures derived is determined by the solution techniques employed, variance analysis allows the degree of reliability of measures to be predicted and variance reduction improves confidence of those measures [8].In the following section we cover a basic introduction to performance modelling with stochastic process algebra, aimed at experienced performance engineers with little or no previous knowledge of stochastic process algebra. We present several examples covering the informal translation of models from stochastic Petri nets and queueing networks to stochastic process algebra. In Section 3 visual techniques from program comprehension are applied to performance models to aid understanding and improve confidence that the model accurately reflects design. Some visual representations are also used to derive design views in UML to provide meaningful feedback in a modelling cycle.2. Stochastic Process AlgebraStochastic process algebra, extensions to classical process algebra such as CCS [34] and CSP [26], add several attractive features to performance modelling:• Compositionality • Formality • AbstractionFrom a specification point of view these features allow models to be constructed.• Concisely• With their meaning incorporated explicitly • In a way most natural to the modellerThey also allow complex models to be analysed and solved efficiently and automatically.A number of stochastic process algebra have been defined, among them• PEPA [23](University of Edinburgh, UK) • TIPP [19](University of Erlangen, Germany) • EMPA [4](University of Bologna, Italy) • SPADE (Imperial College, London, UK)Although there are semantic differences between these process algebra the modelling differences are not great, the exception being SPADE which allows generally distributed actions and model solution by simulation. In this paper we concentrate on PEPA, although most of our argument is clearly applicable the other algebra with some small modification.2.1. PEPAPEPA is a Markovian process algebra, meaning all events occur in response to exponentially distributed delays. Models are defined as the interaction of components which engage, singly or multiply in activities. Each component may be atomic or composed of other components.Each activity a def(α , r) has an action type α and is exponentially distributed with rate r or passive with distinguished rate . 
There are a small number of combinators which are used to define models.

Constant: A ≝ B, used to assign names to components.
Prefix: (α, r).P, the component engages in action α at rate r and subsequently behaves as P.
Choice: P + Q, the component behaves as either P or Q; the choice is determined by a race condition on the first action to complete.
Hiding: P/L, the actions in the set L are not visible outside P and cannot be shared.
Co-operation: P ⋈_L Q, the components proceed independently with any activities whose types do not occur in the cooperation set L. Activities with action types in the set L are only enabled in P ⋈_L Q when they are enabled in both P and Q. In PEPA the shared activity occurs at the rate of the slowest participant. If an activity has an unspecified rate in a component, the component is passive with respect to that action type.

2.2. Queueing Network Models

PEPA can be used to specify many models that can also be specified using other modelling paradigms. It is a simple matter to describe queues in PEPA using a state-based approach; it is also possible to use a job-based approach. By constructing PEPA components of queues, large queueing network models can be formed relatively easily in PEPA, although some care is needed to ensure correct naming of shared actions.

Figure 2.1 shows a state-based model of an M/M/1/N queue. The model is defined as the interaction of two components: the queue component and an arrival/service component. The queue component represents the number of jobs present in the queue and determines what arrival and service actions are possible. We define the queue to be passive, sharing the actions arrival and service with the other component, S. When behaving as Queue0 no service is possible (obviously there are no jobs to serve): as the service action is not enabled here and it is a shared action, in this instance the component S cannot fire the service action. Similarly the arrival action is blocked when the queue behaves as QueueN (the queue is full).

Figure 2.1: An M/M/1/N queue in PEPA

Clearly it would be simple to model this system as a single component or to split the arrival and service elements of S into separate components. Within this simple set-up it is easy to define interrupted services or Markov-modulated arrival processes. In Figure 2.2 the component S has been replaced by a two-phase process. When behaving as S1, arrivals and services occur as previously; however in S0 the service process is blocked and arrivals occur at a different rate. The behaviour is switched between the two by the actions sleep and wakeup, which are not shared with the queue.

Figure 2.2: Service interruptions and variable arrival rate
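The PEPA Workbench derives the continuous-time Markov chain underlying a specification such as Figure 2.1 automatically. Purely as an illustration of what that chain looks like (this sketch is not part of the paper, and the rates lam and mu are placeholder values), the following code builds the generator matrix of an M/M/1/N queue and computes its steady-state distribution.

```python
import numpy as np

def mm1n_steady_state(lam, mu, N):
    """Steady-state distribution of the CTMC underlying an M/M/1/N queue."""
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:                  # 'arrival' action enabled unless the queue is full
            Q[i, i + 1] = lam
        if i > 0:                  # 'service' action enabled unless the queue is empty
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    # Solve pi Q = 0 subject to sum(pi) = 1
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.append(np.zeros(N + 1), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = mm1n_steady_state(lam=0.8, mu=1.0, N=5)
print("steady-state probabilities:", np.round(pi, 4))
print("mean number in queue:", float(np.arange(6) @ pi))
```

Changing lam, mu or N here corresponds to editing the rates or the number of Queue derivatives in the PEPA script; the Workbench would regenerate the same chain from the specification.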
When considering models of multiple queues, where jobs may progress from one queue to the next, the service action in one component must act as an arrival action in another. Such a situation is illustrated in Figure 2.3. In PEPA this kind of consequence must be defined using the same named action in both components, which can be confusing (a renaming operation would be useful). For example, in Figure 2.3 the action service1 is used to mean both a departure from the first queue and an arrival at the second; they are clearly different ways of viewing the same event. In the second queue we have used two kinds of service event: leave, which causes the job to leave the system, and feedback, which triggers an arrival into the first queue (the counterpart to service1).

A great many queueing behaviours can be modelled using PEPA (see [5,41,42]), although the use of a textual interface in the PEPA Workbench does not always make such models easy to define or interpret. To this end a queueing interface for PEPA, called Dr PEPA, has been built by Jim Newton at the University of Durham. Dr PEPA accepts PEPA scripts and parses them to see if they conform to a known way of specifying a queue (only a state-based representation is handled). If the model parses successfully the tool draws a graphical representation of the model, using colour to depict the different components in both the script and the drawing. The tool also supports simple navigation mechanisms to locate components in the script or drawing and an edit-and-update function from the script to the drawing. As yet Dr PEPA does not support graphical editing of PEPA scripts. A sample screen shot from Dr PEPA is shown in Figure 2.4.

Figure 2.3: Two M/M/1 queues in tandem
Figure 2.4: "Dr PEPA" queueing model interface for PEPA

2.3. Stochastic Petri Nets

Of course, a graphically based formalism for performance modelling has been available for some time in the form of stochastic Petri nets [1]. In general it is easy to informally translate models between stochastic process algebra and stochastic Petri nets, although there are no immediate actions in PEPA. A comparison between PEPA and GSPN has been made by Donatelli et al [14,15], who defined a set of Petri net equivalents to the basic PEPA constructs, shown in Figure 2.5.

Figure 2.5: Basic constructs in PEPA and GSPN (from Donatelli et al [15])

This equivalence equates GSPN places with PEPA constants, and GSPN transitions with PEPA actions. A more formal approach has been taken by Ribaudo [40], and by Bernardo et al in integrating tool support for EMPA and GSPN by defining EMPA in terms of GSPN semantics [2,3].

Figure 2.6 shows a simple model of a processor using a resource. The processor does an initial think action before requiring to use the resource. Following completion of the use action the processor returns to thinking, and the resource must go through an update phase before being available for use once more. If the think action completes before the corresponding update then the processor must wait. The PEPA and GSPN specifications shown give rise to models with identical Markov chains.

Figure 2.6: A simple PEPA model of resource usage with an equivalent GSPN representation

A slightly more complex example is given in Figure 2.7. Here the processor must get and subsequently release a lock on two resources, a Bus and a Memory; following release the two resources are immediately available once again. Of course, there is little point in locking a resource if only one process has access to it. Figure 2.8 shows an extension to the previous model with two competing processes.

Figure 2.7: PEPA model of resource locking with an equivalent GSPN representation
Figure 2.8: Multiple instances vs. multiple tokens

On first inspection the two methods for extending the model, adding an extra token in GSPN and adding a second instance of the processor component, appear to be equivalent. However, the two methods do not give rise to identical models. The distinction can be seen if we consider the firing of a think action/transition.
In the GSPN model a token moves from P1 to P2; whichever token moves, the subsequent markings are identical. In the PEPA model, however, either the first instance of the processor performs the action or the second instance does, leading to two distinct successors:

(P2 ‖ P1) ⋈_L (Bus ‖ Memory)   and   (P1 ‖ P2) ⋈_L (Bus ‖ Memory)

where L is the set of shared (locking) action types. In practice most practical performance measures will not distinguish between these two cases; however, it should be noted that they are not identical.

3. Understanding performance models

There are several reasons for a system designer not to trust the results from a performance model:
• Don't understand the model behaviour.
• Measures don't relate to the design.
• Wrong metrics.
• Wrong method of solution (not always obvious!).
• Incomplete information.

These problems generally arise because the modelling and design tasks are divorced and the system designer only sees the results of the modelling study and does not have in-depth knowledge of stochastic modelling. An over-reliance on automated methods can mean that a non-expert modeller solves a model that has been over-simplified or built with inappropriate assumptions.

In the analysis of a PEPA model a derivation graph is formed in order to compute the states of the underlying Markov chain. This graph is not presented explicitly to the modeller by the PEPA Workbench tools [11,17]; rather it is an important mechanism used to study the PEPA model and derive a numerical solution. This type of graph structure is extremely similar to the call graphs used in program comprehension and as such the same set of tools can be used to visualise them. Figure 3.1 shows the PEPA specification for a shared resource system, adapted from Hillston [23], and Figure 3.2 shows its derivation graph.

P ≝ (use, r1).(task, r2).P
R ≝ (use, ⊤).(update, r3).R
System ≝ (P ‖ P) ⋈_{use} R

Figure 3.1: PEPA specification of a shared resource model.
Figure 3.2: A coloured derivation graph

The graph is enhanced using colour to show which agents are participating in the actions and are subsequently evolved. In the case of shared activities multiple coloured arrows indicate the participating agents; filled-head arrows indicate active actions and open-headed arrows indicate passive actions. Clearly this is a small model and it is possible to view the entire model and derivation graph without difficulty; however, the inclusion of the graph and the addition of colour add to the ease with which non-experts may understand the evolution of this model. Given a larger state space the size of the full derivation graph in this form quickly becomes unmanageable. Several possibilities exist for handling problems of scale in static 2D representations; briefly these may be summarised as:

• Elimination. Nodes with only one subsequent action may be removed and actions combined.
• Aggregation. Similar nodes, or nodes relating only to internal activities of agents, are combined.
• Decomposition. Each individual component is viewed in isolation, although it behaves in the same way as in the full model.
• Highlighting. Although the entire graph is displayed, certain arcs and nodes are emphasised (such as those pertaining to the evolution of a particular component). Highlighting can be used to show sequences of independent behaviour.
• Layering. The entire graph is viewed in simple form with gross aggregation of nodes. Aggregated nodes may be selected for expansion within the full graph or as a separate graph.
• Abstraction.
The entire graph is viewed in simple form with gross aggregation of nodes;the detailed behaviour within the aggregated nodes may be shown in miniature, as if from a distance.• Windowing. The entire graph is displayed within a sliding window, a miniature representation of the graph may be used to show the position of the window to aid navigation.Using the above example there is little scope for elimination, except for removing the node P||P R' and compounding the subsequent update preceding task actions to give two task+update arcs to P||P R from P'||P R' and P||P' R' respectively. There is scope for aggregation by exploiting the symmetry of the graph.Figure 3.3: Decomposed view and highlighted derivation graph for resource component, R.Figure 3.3 shows the resource component in isolation and highlighted within the derivation graph. Within this simple example there is only a limited benefit from such representations over the coloured derivation graph. The alternate actions, use followed by update, are clearly evident in the highlighted derivation graph, showing that the resource component in the model behaves as expected. In larger models, with more complex components, sequences of independent actions can be highlightedThus far we have presented information in a solution oriented manner, that is, with nodes being equivalent to states in the underlying Markov process. Nodes are the primary conceptual foci of graph representations and so in viewing a derivation graph, the user places state as the principal model object. However, designers generally have little or no concept of state, but rather tend to consider software systems as collections of functions or activities performed according to certain ordering constraints. The translation from the state based derivation graph to a functional, or activity oriented, view is a simple matter of exchanging nodes and arcs, so that the nodes represent actions and the arcs represent pre- and post-conditions. Alternatively we could present such a view using call stack representations which have been used successfully in program visualisation [46].In the discussion above we have proposed representations from performance and behavioural viewpoints that aid understanding of the evolution of the entire model. We now concentrate on views of the high level objects and the interfaces that are formed between them by synchronised actions. Visualisations such as these allow the designer to observe that the objects in the model correspond to objects in the design and that the way in which those objects interact is similarly represented. At this level of abstraction it is not necessary for us to present the detailed sequences of behaviour that lead to these interactions, rather how the interaction is facilitated (the shared action) and possibly any pre-conditions. Much of thenecessary information for this is contained in statements containing the cooperationFigure 3.4: PEPA model element of a workcell specificationRobot, Belt, Table, Dbelt, Crane and Press are the initial agents of each of the five components representing a robot arm, a feed belt, an elevating table, a deposit belt, a lifting crane and a press respectively. Parsing this statement it is easy to deduce that Belt (the feed belt) interfaces with Table (elevating table) and Dbelt (deposit belt) interfaces with Crane (the crane). Furthermore, each of these subsystems operates without direct interface to each other or with Press, but all components possibly interface with the robot. 
Working only from this model component it is not possible to deduce what actions the robot shares with each of the other individual components. This problem may be overcome by also parsing the component specifications to determine what components participate in which actions. The resultant representation is shown in Figure 3.5; the complexity of the components indicated by the size of the circles representing them. We have also animated this view to show model evolution along different paths.Figure 3.5: Component Interface viewAs well as providing a useful high level representation in its own right, the visualisation of component interfaces also provides a useful navigation mechanism to support more detailed behaviour. Figure 3.6 shows a prototype navigation tool written in HTML using Holton's production cell example [27].Figure 3.6: A prototype navigation toolThere are three frames, one contains the visual representation of the component interfaces, one contains the entire PEPA specification and the third contains a derivation graph view of a single component. The different representations are colour coordinated so that the user can see what component is in view by the colour it is displayed in. The interface diagram acts as a map; clicking on a component changes the PEPA script to the corresponding position. This prototype tool demonstrates how even very simple graphical representations can collectively give a far greater aid to comprehension.4. Deriving UML from SPA modelsBecause stochastic process algebra, such as PEPA, are formally defined and have formal notions of equivalence it is possible to define transformations which automatically translate one specification to one or more others in a provably correct manner. However, the majority of the motivations for performing such transformations and some of the notions of equivalence used relate more to the state space of the solution than the behaviour of the model as related to the design [24]. It is therefore necessary for the designer to be able to have confidence that the altered model still relates to the design in an understandable way from a design perspective. An obvious solution to this problem might be to derive a UML specification directly from the altered SPA specification, the reverse of the process described by Pooley and King [37].We consider three of the many modelling mechanisms used in UML, state charts, collaboration diagrams and sequence diagrams. State charts show the states and transitions of components. They record dependencies between the state of a component and its reaction to messages. Clearly there is a close relation to decomposed derivation graphs. Figure 3.8 shows UML state charts for the components of a resource use model specified in Figure 3.1.Figure 3.7: Statechart of a simple resource use modelThe dependencies between the components over the shared use action are depicted by introducing an extra Boolean condition, avail. This condition provides the simple blocking mechanism on the shared action when it is not defined. The concurrent nature of the components is not explicitly stated and it is not clear whether the competition between instances of P is represented.Collaboration diagrams record the links between components and their interactions. Clearly there is a close relation to the individual component interface view. 
Figure 3.9 shows a collaboration diagram for the model of resource use specified in Figure 3.1 and Figure 3.10shows a collaboration diagram of Holton's model of a production workcell.Figure 3.9: Collaboration diagram of a simple resource use modelIt is worth comparing Figure 3.10 with the interface view of the same model shown in Figure3.5. The interface view incorporates some additional features which are not available in UML, principally the use of colour, which facilitates comprehension, and the size of the circles used to indicate the complexity of components. The advantage of the UML diagram is that it is a standard which is more likely to be familiar to designers.Figure 3.10: Collaboration diagram of Holton's workcell modelSequence diagrams record necessary sequences of interactions between components and offer a similar functionality to animated component views. Figure 3.11 shows a sequence diagram of Holton's workcell model and Figure 3.12 shows a sequence diagram for the resource use model. Both diagrams show only partial evolution of the models.Figure 3.11: Sequence diagram of Holton's workcell modelThe lines below each of the component names show time evolution, the boxes on those lines show when the component is activated. The colour in the box indicates that the component is(or at least has the potential of) actively participating in actions, whereas a clear box indicates that the component may only be passively participating in actions. Where there is no box (only the time line) the component is in a blocked state.Figure 3.12: Sequence diagram of a resource use modelThe difficulty in using sequence diagrams comes in adequately representing the synchronisation of shared actions. The approach we have used is to show an initialisation of a shared action as a two-way communication between the participating components (double headed arrow with dot-dashed line) and the end of a shared action as a one-way communication from the active to the passive partner. In the resource use model the is competition between the two instances of P, both of these are allowed to initialise their use actions, but obviously one will complete first causing the resource to become unavailable. This will block the other instance of P, so clearly there needs to be some communication from R to P to cease the use (this is shown as a single headed dot-dashed arrow).The models for synchronisation in PEPA and the other stochastic process algebra have been subject for some debate during their development [25,7], the ultimate choice being made principally according to meaning in a stochastic model rather than any particular real world interpretation. Other models of synchronisation, such as synchronisation on points, as appear in message passing systems, rather than on shared actions, would have an easier interpretation in sequence diagrams.4. Concluding remarksThis paper aims to present a position rather than a solution. Presented here are a set of ideas that head towards the goal of making performance models, in particular SPA models, more usable for the non-modelling expert. We are still a long way from our goal and much work remains to be done to develop the notions we have presented here, however we are creating a direction that is reducing the concept gap between modellers and designers. In particular we are aiming at presenting models and model solutions from a component perspective that the designer can identify with, rather than a global state space view so often favoured by modellers.。
Chapter 2
Conditional Expectation
2.1 A Binomial Model for Stock Price Dynamics
• Figure: a three coin period (three-period) binomial model.
• Note the following notation:
1. Ω denotes the sample space, the set of all possible outcomes of the coin tosses;
2. S_k is the stock price at time k.
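The slides' numerical values are not reproduced here, so the sketch below uses assumed parameters (S_0 = 4, up factor u = 2, down factor d = 1/2) purely to illustrate how S_k is built from the first k coin tosses; note that the function reads only the first k tosses, which is the F_k-measurability discussed in the next section.

```python
from itertools import product

# Assumed illustrative parameters (not taken from the slides)
S0, u, d = 4.0, 2.0, 0.5

def stock_price(omega, k):
    """S_k(omega): price after the first k tosses of the outcome omega ('H'/'T' string)."""
    heads = omega[:k].count("H")
    tails = k - heads
    return S0 * (u ** heads) * (d ** tails)

for w in product("HT", repeat=3):
    path = "".join(w)
    print(path, [stock_price(path, k) for k in range(4)])
```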
2.2 Information
• Definition 2.1 (Sets determined by the first k tosses.) We say that a set A is determined by the first k coin tosses if, knowing only the outcome of the first k tosses, we can decide whether the outcome of all tosses is in A.
• Note that:
1. F_k denotes the collection of sets determined by the first k tosses;
2. …
3. the random variable S_k is F_k-measurable, for each k = 1, 2, …, n.
• Example 2.1 …
• Definition 2.2 (Information carried by a random variable.) Let X be a random variable on Ω. We say that a set A ⊆ Ω is determined by the random variable X if, knowing only the value X(ω) of the random variable, we can decide whether or not ω ∈ A. Another way of saying this is that for every value y, either X⁻¹(y) ⊆ A or X⁻¹(y) ∩ A = ∅.
• Note that:
1. The collection of subsets of Ω determined by X is a σ-algebra, denoted by σ(X).
2. If the random variable X takes finitely many different values, then σ(X) is generated by the collection of sets {ω ∈ Ω : X(ω) = x}, one for each possible value x of X; these sets are called the atoms of the σ-algebra σ(X).
3. If X is a random variable then σ(X) is given by σ(X) = {X⁻¹(B) : B a Borel subset of ℝ}.
• Example 2.2 …
2.3 Conditional Expectation
• Definition 2.3 (Expectation.) E[X] = ∫_Ω X dP, and, for a set A, E[X; A] = ∫_A X dP.
• We can think of E[X; A] as a partial average of X over the set A.
2.3.1 An example
• Let us estimate …, given …. Denote the estimate by ….
• The estimate is a random variable Y whose value at ω is defined by …, where ….
• Properties of the estimate: …
• We then take a weighted average: …
• Furthermore, …
• In conclusion, we can write …, where ….
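The formulas of the worked example are elided above. As a stand-in illustration (with assumed parameters S_0 = 4, u = 2, d = 1/2 and coin probabilities p = q = 1/2, none of which are taken from the slides), the sketch below computes E[S_2 | F_1] by partial averaging over the atoms of F_1, i.e. over the sets of outcomes that agree on the first toss.

```python
from itertools import product

# Assumed illustrative parameters (not taken from the slides)
S0, u, d, p = 4.0, 2.0, 0.5, 0.5

omegas = ["".join(w) for w in product("HT", repeat=3)]
prob = {w: p ** w.count("H") * (1 - p) ** w.count("T") for w in omegas}
S2 = {w: S0 * u ** w[:2].count("H") * d ** w[:2].count("T") for w in omegas}

# Partial averaging over the atoms of F_1: {first toss = H} and {first toss = T}
for first in "HT":
    atom = [w for w in omegas if w[0] == first]
    p_atom = sum(prob[w] for w in atom)
    cond_exp = sum(S2[w] * prob[w] for w in atom) / p_atom   # E[S_2 | first toss]
    print(f"E[S_2 | first toss = {first}] = {cond_exp}")
```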
2.3.2 Definition of Conditional Expectation
• Existence. There is always a random variable Y satisfying the above properties (provided that E|X| < ∞); i.e., conditional expectations always exist.
• Uniqueness. There can be more than one random variable Y satisfying the above properties, but if Y′ is another one, then Y = Y′ almost surely, i.e. P(Y = Y′) = 1.
• Notation 2.1 For random variables X, Y, it is standard notation to write E[X | Y] for E[X | σ(Y)].
n2.3.3 Further discussion of Partial Averaging
n.
n2.3.4 Properties of Conditional Expectation
n We compute
n We can also write
n A similar argument shows that
n We can verify the Tower Property,
2.4 Martingales
n The ingredients are:
n A super martingale
n A sub martingale
n A Martingale
n Example 2.3 (Example from the binomial model.)
n For k = 1;2 we already showed that
n The right hand side is , and so we have。