1 Duality Theory

Recall from Section 1 that the dual to an LP in standard form,

(P)   maximize c^T x
      subject to Ax <= b, 0 <= x,

is the LP

(D)   minimize b^T y
      subject to A^T y >= c, 0 <= y.

Since the problem D is a linear program, it too has a dual. The duality terminology suggests that the problems P and D come as a pair, implying that the dual to D should be P. This is indeed the case, as we now show:

   minimize b^T y                     -maximize (-b)^T y
   subject to A^T y >= c, 0 <= y  =   subject to (-A^T) y <= (-c), 0 <= y.

The problem on the right is in standard form, so we can take its dual to get the LP

   minimize (-c)^T x                         maximize c^T x
   subject to (-A^T)^T x >= (-b), 0 <= x  =  subject to Ax <= b, 0 <= x.

The primal-dual pair of LPs P-D are related via the Weak Duality Theorem.

Theorem 1.1 (Weak Duality Theorem) If x in R^n is feasible for P and y in R^m is feasible for D, then

   c^T x <= y^T Ax <= b^T y.

Thus, if P is unbounded, then D is necessarily infeasible, and if D is unbounded, then P is necessarily infeasible. Moreover, if c^T x̄ = b^T ȳ with x̄ feasible for P and ȳ feasible for D, then x̄ must solve P and ȳ must solve D.

We now use the Weak Duality Theorem in conjunction with the Fundamental Theorem of Linear Programming to prove the Strong Duality Theorem. The key ingredient in this proof is the general form for simplex tableaus derived at the end of Section 2.

Theorem 1.2 (The Strong Duality Theorem) If either P or D has a finite optimal value, then so does the other, the optimal values coincide, and optimal solutions to both P and D exist.

Remark: This result states that the finiteness of the optimal value implies the existence of a solution. This is not always the case for nonlinear optimization problems. Indeed, consider the problem

   min_{x in R} e^x.

This problem has a finite optimal value, namely zero; however, this value is not attained by any point x in R. That is, it has a finite optimal value, but a solution does not exist. The existence of solutions when the optimal value is finite is one of the many special properties of linear programs.

Proof: Since the dual of the dual is the primal, we may as well assume that the primal has a finite optimal value. In this case, the Fundamental Theorem of Linear Programming says that an optimal basic feasible solution exists. By our formula for the general form of simplex tableaus, we know that there exists a nonsingular record matrix R in R^{m x m} and a vector y in R^m such that the optimal tableau has the form

   [ R      0 ] [ A    I  b ]   [ RA             R     Rb     ]
   [ -y^T   1 ] [ c^T  0  0 ] = [ c^T - y^T A   -y^T  -y^T b  ].

Since this is an optimal tableau, we know that

   c - A^T y <= 0   and   -y^T <= 0,

with y^T b equal to the optimal value in the primal problem. But then A^T y >= c and 0 <= y, so that y is feasible for the dual problem D. In addition, since

   b^T y = max { c^T x : Ax <= b, 0 <= x },

the Weak Duality Theorem implies that b^T y <= b^T ỹ for every vector ỹ that is feasible for D. Therefore, y solves D!

This is an amazing fact! Our method for solving the primal problem P, the simplex algorithm, simultaneously solves the dual problem D! This fact will be of enormous practical value when we study sensitivity analysis.

1.1 Complementary Slackness

The Strong Duality Theorem tells us that optimality is equivalent to equality in the Weak Duality Theorem. That is, x solves P and y solves D if and only if (x, y) is a P-D feasible pair and

   c^T x = y^T Ax = b^T y.

We now carefully examine the consequences of this equivalence. Note that the equation c^T x = y^T Ax implies that

(1.1)   0 = x^T (A^T y - c) = sum_{j=1}^n x_j ( sum_{i=1}^m a_{ij} y_i - c_j ).

In addition, feasibility implies that

   0 <= x_j   and   0 <= sum_{i=1}^m a_{ij} y_i - c_j   for j = 1, ..., n,

and so

   x_j ( sum_{i=1}^m a_{ij} y_i - c_j ) >= 0   for j = 1, ..., n.

Hence, the only way (1.1) can hold is if

   x_j ( sum_{i=1}^m a_{ij} y_i - c_j ) = 0   for j = 1, ..., n,

or equivalently,

(1.2)   x_j = 0   or   sum_{i=1}^m a_{ij} y_i = c_j   or both, for j = 1, ..., n.

Similarly, the equation y^T Ax = b^T y implies that

   0 = y^T (b - Ax) = sum_{i=1}^m y_i ( b_i - sum_{j=1}^n a_{ij} x_j ).

Again, feasibility implies that

   0 <= y_i   and   0 <= b_i - sum_{j=1}^n a_{ij} x_j   for i = 1, ..., m.

Thus, we must have

   y_i ( b_i - sum_{j=1}^n a_{ij} x_j ) = 0   for i = 1, ..., m,

or equivalently,

(1.3)   y_i = 0   or   sum_{j=1}^n a_{ij} x_j = b_i   or both, for i = 1, ..., m.

The two observations (1.2) and (1.3) combine to yield the following theorem.

Theorem 1.8 (The Complementary Slackness Theorem) The vector x in R^n solves P and the vector y in R^m solves D if and only if x is feasible for P, y is feasible for D, and

(i) either 0 = x_j or sum_{i=1}^m a_{ij} y_i = c_j or both, for j = 1, ..., n, and
(ii) either 0 = y_i or sum_{j=1}^n a_{ij} x_j = b_i or both, for i = 1, ..., m.

Proof: If x solves P and y solves D, then by the Strong Duality Theorem we have equality in the Weak Duality Theorem. But we have just observed that this implies (1.2) and (1.3), which are equivalent to (i) and (ii) above. Conversely, if (i) and (ii) are satisfied, then we get equality in the Weak Duality Theorem. Therefore, by Theorem 1.2, x solves P and y solves D.

The Complementary Slackness Theorem can be used to develop a test of optimality for a putative solution to P (or D). We state this test as a corollary.

Corollary 1.1 The vector x in R^n solves P if and only if x is feasible for P and there exists a vector y in R^m feasible for D such that conditions (i) and (ii) above hold.
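The duality theorems above can be checked numerically on a small instance. The sketch below is illustrative only: the LP data and the helper `solve_lp_2d` are not from the text. It brute-forces a 2-variable LP by enumerating vertices (every bounded LP attains its optimum at a vertex, and in the plane every vertex is the intersection of two boundary lines), then solves the primal and its dual and compares the two optimal values, as the Strong Duality Theorem predicts.

```python
from itertools import combinations

def solve_lp_2d(obj, A, b, sense="max"):
    """Brute-force a 2-variable LP over {z : A z <= b, z >= 0} by
    enumerating all vertices and taking the best objective value."""
    # Boundary lines (a1, a2, r) meaning a1*z1 + a2*z2 = r, plus the axes.
    lines = [(row[0], row[1], rhs) for row, rhs in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # the axes z1 = 0, z2 = 0
    best = None
    for (a1, a2, r1), (b1, b2, r2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no vertex here
        # Cramer's rule for the 2x2 intersection.
        z = ((r1 * b2 - a2 * r2) / det, (a1 * r2 - r1 * b1) / det)
        if min(z) < -1e-9:
            continue  # violates z >= 0
        if any(row[0] * z[0] + row[1] * z[1] > rhs + 1e-9
               for row, rhs in zip(A, b)):
            continue  # violates A z <= b
        val = obj[0] * z[0] + obj[1] * z[1]
        if best is None or (val > best if sense == "max" else val < best):
            best = val
    return best

# Primal P: maximize 3x1 + 2x2  s.t.  x1 + 2x2 <= 4,  3x1 + x2 <= 6,  x >= 0.
c = [3.0, 2.0]
A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
p_opt = solve_lp_2d(c, A, b, sense="max")

# Dual D: minimize b^T y  s.t.  A^T y >= c,  y >= 0.
# Rewrite A^T y >= c as (-A^T) y <= -c so the same enumerator applies.
neg_At = [[-A[i][j] for i in range(2)] for j in range(2)]
neg_c = [-cj for cj in c]
d_opt = solve_lp_2d(b, neg_At, neg_c, sense="min")

print(round(p_opt, 6), round(d_opt, 6))  # 7.2 7.2
```

Both problems attain the same optimal value, 7.2, illustrating that the optimal values of P and D coincide whenever either is finite.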
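The optimality test of Corollary 1.1 translates directly into a finite check: feasibility of x for P, feasibility of y for D, and conditions (i) and (ii). The sketch below assumes nothing beyond the text except the illustrative LP data and the hypothetical helper name `is_optimal_pair`.

```python
def is_optimal_pair(c, A, b, x, y, tol=1e-9):
    """Return True iff x is feasible for P, y is feasible for D, and the
    complementary slackness conditions (i) and (ii) hold, which by
    Corollary 1.1 certifies that x solves P (and y solves D)."""
    m, n = len(A), len(c)
    # Feasibility of x for P: Ax <= b, x >= 0.
    if any(xj < -tol for xj in x):
        return False
    if any(sum(A[i][j] * x[j] for j in range(n)) > b[i] + tol
           for i in range(m)):
        return False
    # Feasibility of y for D: A^T y >= c, y >= 0.
    if any(yi < -tol for yi in y):
        return False
    if any(sum(A[i][j] * y[i] for i in range(m)) < c[j] - tol
           for j in range(n)):
        return False
    # (i): x_j * (sum_i a_ij y_i - c_j) = 0 for every j.
    for j in range(n):
        if abs(x[j] * (sum(A[i][j] * y[i] for i in range(m)) - c[j])) > tol:
            return False
    # (ii): y_i * (b_i - sum_j a_ij x_j) = 0 for every i.
    for i in range(m):
        if abs(y[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n)))) > tol:
            return False
    return True

# P: maximize 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0.
c = [3.0, 2.0]
A = [[1.0, 1.0], [1.0, 3.0]]
b = [4.0, 6.0]

# x = (4, 0), y = (3, 0): x2 = 0 complements the slack dual constraint,
# and y2 = 0 complements the slack primal constraint x1 + 3x2 < 6.
print(is_optimal_pair(c, A, b, x=[4.0, 0.0], y=[3.0, 0.0]))  # True
# x = (0, 2) is feasible but not optimal: x2 > 0 while its dual
# constraint is slack, so condition (i) fails.
print(is_optimal_pair(c, A, b, x=[0.0, 2.0], y=[3.0, 0.0]))  # False
```

Note how both "either/or" conditions can hold with genuine slack on one side: here neither x2 = 0 nor y2 = 0 forces the corresponding constraint to be tight, which is exactly what the theorem permits.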