Signal Detection and Estimation: 2017 Midterm Exam Questions and Solutions
f(X|N = n) = \frac{x^n e^{-(1+a)x}}{\int_0^\infty x^n e^{-(1+a)x}\,dx}    (13)

Then, let us calculate the denominator:

\int_0^\infty x^n e^{-(1+a)x}\,dx = -\frac{1}{1+a}\int_0^\infty x^n\,d\!\left(e^{-(1+a)x}\right)
= -\frac{1}{1+a}\left[\,x^n e^{-(1+a)x}\Big|_0^\infty - \int_0^\infty e^{-(1+a)x}\,n x^{n-1}\,dx\right]
= \frac{n}{1+a}\int_0^\infty e^{-(1+a)x}\,x^{n-1}\,dx = \cdots
First, we get the log-likelihood function as

\ln f(X|N) = (N+1)\ln(1+a) + N\ln x - (1+a)x - \ln(N!)    (18)

Setting \partial \ln f(X|N)/\partial x = 0, we can derive the estimate:

\frac{\partial \ln f(X|N)}{\partial x} = \frac{N}{x} - (1+a) = 0    (19)

\hat{X} = \frac{N}{1+a}
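If you want to double-check this result numerically (a sanity check, not part of the original solution), you can evaluate the log-posterior (18) on a grid and confirm that the maximizer matches N/(1+a); the values a = 1 and N = 5 below are arbitrary choices for illustration:

    import numpy as np

    # Hypothetical example values, only for illustration.
    a, N = 1.0, 5
    x = np.linspace(1e-6, 20, 200_001)
    # Log-posterior from (18), dropping terms constant in x.
    log_post = N * np.log(x) - (1 + a) * x
    print(x[np.argmax(log_post)])   # ~2.5
    print(N / (1 + a))              # 2.5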
Problem 3: (a): We know that the probability mass function of Z is

P(Z = k; p) = \binom{n}{k} p^k (1-p)^{n-k} = \frac{n!}{k!\,(n-k)!}\,p^k (1-p)^{n-k}    (20)
E[X|Z] = \frac{1}{\min(1,z) - \max(z-1,0)} \int_{\max(z-1,0)}^{\min(1,z)} x\,dx
= \frac{1}{\min(1,z) - \max(z-1,0)} \cdot \frac{1}{2}\,x^2 \Big|_{\max(z-1,0)}^{\min(1,z)}
= \frac{\min(1,z) + \max(z-1,0)}{2}
= \begin{cases} \frac{1+z-1}{2} = \frac{z}{2}, & 1 \le z \le 2 \\ \frac{z}{2}, & 0 \le z \le 1 \end{cases}    (12)
E[Z^2 - Z] = n(n-1)\,p^2\,(p + (1-p))^{n-2} = n(n-1)\,p^2

Then, E[Z^2] = n(n-1)p^2 + np.
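These moments are easy to confirm by direct summation over the binomial pmf; a minimal Python check, with n = 10 and p = 0.3 as arbitrary example values:

    from math import comb

    n, p = 10, 0.3  # arbitrary example values
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    EZ = sum(k * w for k, w in enumerate(pmf))
    EZ2 = sum(k * k * w for k, w in enumerate(pmf))
    print(EZ, n * p)                          # both 3.0
    print(EZ2, n * (n - 1) * p**2 + n * p)    # both 11.1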
Problem 4: (a): Since the random variables Y_k are independent, we can derive the joint probability function of the N observations, given in (27).
\hat{X}_{LMMSE} = E[X] + \left(C_X^{-1} + H^T C_w^{-1} H\right)^{-1} H^T C_w^{-1} \left(Z - \frac{1}{2} - H\,E[X]\right)    (2)
= \frac{1}{2} + \frac{1}{12 + 12} \times 12 \times \left(Z - \frac{1}{2} - \frac{1}{2}\right) = \frac{Z}{2}
And its corresponding mean-square error is given by (3) below.
x[n] = Z^{-1}\left[\frac{p^N}{(1-qz^{-1})^N}\right] = p^N\,Z^{-1}\left[\frac{1}{(1-qz^{-1})^N}\right]    (31)
We denote X_k(z) = \frac{1}{(1-qz^{-1})^k}, and the corresponding inverse Z-transform is x_k[n]. For simplicity, we leave out u[n] in the following process.
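As an aside (an assumption of this sketch, not a step of the original solution): if each Y_k is geometric on {0, 1, 2, ...} with P(Y_k = y) = p q^y, as (27) implies, then S is a sum of N i.i.d. geometric variables and should follow the negative binomial law P(S = n) = \binom{n+N-1}{N-1} p^N q^n. This known closed form can be used to spot-check the series coefficients of p^N/(1-qz^{-1})^N with sympy, writing w for z^{-1}:

    import sympy as sp

    w = sp.symbols('w')            # stands for z**(-1)
    N, p = 3, sp.Rational(1, 4)    # arbitrary example values
    q = 1 - p
    expansion = sp.series(p**N / (1 - q*w)**N, w, 0, 6).removeO()
    coeffs = [expansion.coeff(w, k) for k in range(6)]
    pmf = [sp.binomial(k + N - 1, N - 1) * p**N * q**k for k in range(6)]
    print(coeffs == pmf)   # True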
Midterm Solution EE251: Signal Detection and Parameter Estimation
2016 Fall
Problem 1: (a):
• Method 1: If we want to use LMMSE to derive the estimate X̂_LMMSE, we need to treat Y as the noise in this model. According to the Bayesian Gauss-Markov theorem, the noise needs to be zero-mean.
Therefore, X̂_MMSE = z/2 and its corresponding MSE is 1/24.
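Both numbers are easy to confirm by simulation (a sanity check, not part of the original solution): draw X, Y ~ U(0, 1) independently, set Z = X + Y, and compare the conditional mean of X near a chosen z with z/2:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(size=1_000_000)
    y = rng.uniform(size=1_000_000)
    z = x + y
    z0 = 1.3                               # arbitrary test point in (0, 2)
    near = np.abs(z - z0) < 0.01
    print(x[near].mean(), z0 / 2)          # both ~0.65
    print(np.mean((x - z / 2)**2), 1/24)   # MSE of Z/2: both ~0.0417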
Problem 2: (a): We can calculate the posterior density as
f(X|N = n) = \frac{P(N = n|X)\,f_X(x)}{\int_0^\infty P(N = n|X)\,f_X(x)\,dx}
Figure 1: The pdf of Z.

Depending on the value of z, f(X|Z) splits into two cases, shown in Figures 2 and 3:

Figure 2: The pdf of X given Z for 0 ≤ z ≤ 1.
\hat{X} = E[X|N] = \int_0^\infty x\,\frac{(1+a)^{N+1} x^N e^{-(1+a)x}}{N!}\,dx
= \frac{(1+a)^{N+1}}{N!} \int_0^\infty x^{N+1} e^{-(1+a)x}\,dx
= \frac{(1+a)^{N+1}}{N!} \times \frac{(N+1)!}{(1+a)^{N+2}}
= \frac{N+1}{a+1}    (16)
(c): The MAP estimate of X is given by

\hat{X} = \arg\max_X f(X|N)    (17)
Then, we need to derive f(X|Z) first. The prior and the conditional density are

f(X) = u(x)\,u(1-x)    (10)
f(Z|X) = u(z-x)\,u(x+1-z)

Now, we can calculate f(X|Z) as

f(X|Z) = \frac{f(Z|X)\,f(X)}{\int f(Z|X)\,f(X)\,dX}, which is evaluated in (11).
E[(X - \hat{X})^2] = \left(C_X^{-1} + H^T C_w^{-1} H\right)^{-1} = \frac{1}{12 + 12} = \frac{1}{24}    (3)
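To make the plug-in arithmetic in (2)-(3) explicit, here is the scalar computation with H = 1, C_X = C_w = 1/12, and E[X] = 1/2, all taken from the model above (a worked check, not an extra step of the solution):

    # Scalar Bayesian Gauss-Markov arithmetic for Z - 1/2 = X + w.
    H, C_x, C_w, EX = 1.0, 1/12, 1/12, 0.5
    gain = (1/C_x + H * (1/C_w) * H)**-1 * H * (1/C_w)
    mse = (1/C_x + H * (1/C_w) * H)**-1
    print(gain)  # 0.5, so Xhat = 0.5 + 0.5*(Z - 1) = Z/2
    print(mse)   # 0.041666... = 1/24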
• Method 2:

\hat{X}_{LMMSE} = E[X] + C_{XZ} C_{ZZ}^{-1} (Z - E[Z])    (4)

We need to calculate C_{XZ} = E[(X - E[X])(Z - E[Z])] and C_{ZZ}; this is done in (5)-(6) below.
E[\hat{p}_{ML}] = \frac{E[Z]}{n} = \frac{np}{n} = p    (23)

So, the ML estimate is unbiased. We then calculate the MSE:

E[(\hat{p}_{ML} - p)^2] = \frac{E[Z^2]}{n^2} - p^2 = \frac{var(Z) + E^2[Z]}{n^2} - p^2 = \frac{np(1-p) + n^2p^2}{n^2} - p^2 = \frac{p(1-p)}{n}    (24)
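A quick Monte Carlo check of (23) and (24), with n and p chosen arbitrarily (for reassurance only, not part of the original solution):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 20, 0.3
    p_hat = rng.binomial(n, p, size=1_000_000) / n
    print(p_hat.mean(), p)                         # both ~0.3 (unbiased)
    print(np.mean((p_hat - p)**2), p*(1 - p)/n)    # both ~0.0105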
\hat{X}_{LMMSE} = E[X] + C_{XZ} C_{ZZ}^{-1} (Z - E[Z]) = \frac{1}{2} + \frac{1}{12} \times 6 \times (Z - 1) = \frac{Z}{2}    (7)
And its corresponding mean-square error is

E[(X - \hat{X})^2] = C_{XX} - C_{XZ} C_{ZZ}^{-1} C_{ZX} = \frac{1}{24}    (8)
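The moment values C_XZ = 1/12 and C_ZZ = 1/6 used in (7)-(8) can likewise be spot-checked by simulation (again a sanity check, not part of the original solution):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(size=1_000_000)
    y = rng.uniform(size=1_000_000)
    z = x + y
    print(np.cov(x, z))   # C_XX ~ 1/12, C_XZ ~ 1/12, C_ZZ ~ 1/6
    xhat = x.mean() + np.cov(x, z)[0, 1] / z.var() * (z - z.mean())
    print(np.mean((x - xhat)**2))   # ~1/24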
(b):
X̂_MMSE = E[X|Z]    (9)
Therefore, we set up a new model as

Z - \frac{1}{2} = X + \left(Y - \frac{1}{2}\right) = X + w    (1)

Since X and Y follow the uniform distribution on [0, 1], their mean is 1/2 and their variance is 1/12.
f(X|Z) = \frac{u(x)\,u(1-x)\,u(z-x)\,u(x+1-z)}{\int_{\max(z-1,0)}^{\min(1,z)} 1\,dx}
= \frac{u(x)\,u(1-x)\,u(z-x)\,u(x+1-z)}{\min(1,z) - \max(z-1,0)}    (11)
The denominator is the pdf of Z.
Iterating the integration by parts yields

\int_0^\infty x^n e^{-(1+a)x}\,dx = \frac{n!}{(1+a)^{n+1}}    (14)
Therefore, the posterior density is

f(X|N = n) = \frac{(1+a)^{n+1} x^n e^{-(1+a)x}}{n!}    (15)
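Note that (15) is exactly the Gamma(n+1, 1+a) density (written with the rate parameter), which is consistent with the posterior mean (N+1)/(a+1) in (16) and the posterior mode N/(1+a) in (19). The integral (14) behind the normalization can also be checked symbolically (an optional verification, not part of the original solution):

    import sympy as sp

    x, a = sp.symbols('x a', positive=True)
    for n in range(5):  # spot-check small n
        I = sp.integrate(x**n * sp.exp(-(1 + a)*x), (x, 0, sp.oo))
        print(sp.simplify(I - sp.factorial(n) / (1 + a)**(n + 1)))  # 0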
(b):

\hat{X} = E[X|N]
The log-likelihood function L(Z; p) is given as

L(Z; p) = \ln P(Z; p) = \ln \frac{n!}{Z!\,(n-Z)!} + Z \ln p + (n-Z) \ln(1-p)    (21)
Since we want to derive the maximum likelihood estimate of p, we solve the equation ∂L(Z; p)/∂p = 0 to get the estimate p̂_ML; see (22) below.
According to the definition of the generating function, we can write G_S(z) as

G_S(z) = E\left[e^{zS}\right] = E\left[z^{-S}\right] = \sum_{n=0}^{\infty} P(S = n)\,z^{-n}    (30)

If we regard P(S = n) as a signal series x[n] in the time domain, then \frac{p^N}{(1-qz^{-1})^N} is its Z-transform.
E[Z^2 - Z] = \sum_{k=0}^{n} k(k-1)\,\frac{n!}{k!\,(n-k)!}\,p^k (1-p)^{n-k}
= \sum_{k=2}^{n} \frac{n!}{(k-2)!\,(n-k)!}\,p^k (1-p)^{n-k}
\overset{y=k-2}{=} \sum_{y=0}^{n-2} n(n-1)\,\frac{(n-2)!}{y!\,(n-2-y)!}\,p^{y+2} (1-p)^{n-2-y}    (26)
G_S(z) = E\left[e^{zS}\right] = E\left[\prod_{k=1}^{N} e^{zY_k}\right] \overset{(*)}{=} \prod_{k=1}^{N} E\left[e^{zY_k}\right] = (G_Y(z))^N = \frac{p^N}{(1-qe^z)^N}    (28)
The equality (∗) relies on the independence of the Y_k. If you are not sure about this conclusion, you can also use the property of conditional expectation E[A] = E[E[A|B]], where B can be any conditioning variable, as in (29).
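Assuming each Y_k is geometric on {0, 1, 2, ...} with P(Y_k = y) = p q^y (the pmf implied by (27)), the closed form (28) can be verified by simulation at any z with q e^z < 1; the values below are arbitrary (this check is not part of the original solution):

    import numpy as np

    rng = np.random.default_rng(3)
    p, N, z = 0.4, 3, -0.5
    q = 1 - p
    # rng.geometric counts trials, so subtract 1 to count failures.
    S = (rng.geometric(p, size=(1_000_000, N)) - 1).sum(axis=1)
    print(np.mean(np.exp(z * S)))          # Monte Carlo E[e^{zS}]
    print((p / (1 - q * np.exp(z)))**N)    # closed form from (28), ~0.2487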
Figure 3: The pdf of X given Z for 1 ≤ z ≤ 2.
Lastly, we derive the estimate X̂_MMSE:

E[X|Z] = \int x f(x|z)\,dx = \int \frac{x\,u(x)\,u(1-x)\,u(z-x)\,u(x+1-z)}{\min(1,z) - \max(z-1,0)}\,dx
E\left[\prod_{k=1}^{N} e^{zY_k}\right] = E\left[E\left[\prod_{k=1}^{N} e^{zY_k} \,\middle|\, Y_1, Y_2, \cdots, Y_{N-1}\right]\right]
= E\left[\prod_{k=1}^{N-1} e^{zY_k}\,E\left[e^{zY_N}\right]\right]
= E\left[e^{zY_N}\right] \times E\left[\prod_{k=1}^{N-1} e^{zY_k}\right]
= \cdots
= \prod_{k=1}^{N} E\left[e^{zY_k}\right]    (29)
We let e^z → z^{-1}; then G_S(z) changes to \frac{p^N}{(1-qz^{-1})^N}.
\frac{\partial L(Z; p)}{\partial p} = \frac{Z}{p} - \frac{n-Z}{1-p} = 0    (22)

\hat{p}_{ML} = \frac{Z}{n}
(b): If we use the known binomial-distribution results E[Z] = np and var(Z) = np(1-p), we can solve the problem easily; see (23)-(24) above.
However, if you do not know these conclusions about the binomial distribution, you can also derive the result by yourself.
E[Z] = \sum_{k=0}^{n} k\,\frac{n!}{k!\,(n-k)!}\,p^k (1-p)^{n-k}
P(Y_1, Y_2, \cdots, Y_N; p) = \prod_{k=1}^{N} P(Y_k; p) = p^N (1-p)^{\sum_{k=1}^{N} Y_k}    (27)

Then, S = \sum_{k=1}^{N} Y_k is a sufficient statistic for estimating p. The generating function of S can be calculated as shown in (28).
C_{XZ} = E\left[\left(X - \frac{1}{2}\right)(X + Y - 1)\right]    (5)
= E\left[X^2 + XY - X - \frac{1}{2}X - \frac{1}{2}Y + \frac{1}{2}\right]
= \frac{1}{12}
C_{ZZ} = E\left[(Z - E[Z])^2\right] = E\left[X^2 + 2XY + Y^2 - 2Z + 1\right] = \frac{1}{6}    (6)
Then, we can derive the LMMSE estimate as

X̂_LMMSE = E[X|Z]
Continuing the computation of E[Z] from above:

= \sum_{k=1}^{n} \frac{n!}{(k-1)!\,(n-k)!}\,p^k (1-p)^{n-k}
\overset{y=k-1}{=} \sum_{y=0}^{n-1} \frac{n\,(n-1)!}{y!\,(n-1-y)!}\,p^{y+1} (1-p)^{n-1-y}    (25)
= np \times \sum_{y=0}^{n-1} \frac{(n-1)!}{y!\,(n-1-y)!}\,p^{y} (1-p)^{n-1-y}
= np \times (p + (1-p))^{n-1} = np