Cryptography and Network Security - Appendix F


F.2 ENTROPY
Assume that S is a finite probability sample space (see Appendix D). The entropy or uncertainty of S, written H(S), is defined as the sum over all possible outcomes s of P(s) × [log2 1/P(s)], measured in bits.
Example F.6
Assume that we roll a six-sided fair die, so each face has probability 1/6. The entropy of the experiment is

H(S) = 6 × (1/6) × [log2 6] = log2 6 ≈ 2.58 bits
Minimum Entropy
It can be proven that for a particular probability sample space with n possible outcomes, minimum entropy is obtained when one of the outcomes occurs all the time (its probability is 1 and every other probability is 0). In this case, the minimum entropy is 0 bits.
Example F.2
Imagine a person has bought a lottery ticket. If a friend calls to tell her that she has won first prize, does this message (event) contain any information? It does. The message contains a lot of information, because the probability of winning first prize is very small. The receiver of the message is totally surprised.
Interpretation of Entropy
Entropy can be thought of as the number of bits needed to represent each outcome of a probability sample space when the outcomes are equally probable. For example, when a probability sample space has eight possible outcomes, each outcome can be represented as three bits (000 to 111). When we receive the result of the experiment, we can say that we have received 3 bits of information. The entropy of this probability sample space is also 3 bits (log2 8 = 3).
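As a quick check of this interpretation, the short Python sketch below labels eight equally likely outcomes of a hypothetical experiment with 3-bit codes and confirms that the entropy of the uniform distribution is log2 8 = 3 bits. The outcome names and the helper code are illustrative, not part of the text.

import math

# Eight equally likely outcomes of a hypothetical experiment.
outcomes = [f"outcome{i}" for i in range(8)]
p = 1 / len(outcomes)                      # each probability is 1/8

# Each outcome gets a distinct 3-bit label, 000 through 111.
labels = {name: format(i, "03b") for i, name in enumerate(outcomes)}

# Entropy of the uniform distribution: sum of p * log2(1/p) over the outcomes.
entropy = sum(p * math.log2(1 / p) for _ in outcomes)

print(labels["outcome5"])   # '101'
print(entropy)              # 3.0, i.e., log2(8) bits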
Example F.3
Assume that we toss a fair coin. The outcomes are heads and tails, each with a probability of 1/2. This means

H(S) = P(heads) × [log2 1/P(heads)] + P(tails) × [log2 1/P(tails)]
H(S) = (1/2) × [log2 1/(1/2)] + (1/2) × [log2 1/(1/2)] = 1 bit

This example shows that the result of flipping a fair coin gives us 1 bit of information (uncertainty). In each flip, we don't know what the outcome will be; the two possibilities are equally likely.
Example F.4
Assume that we toss a nonfair coin. The outcomes are heads and tails, with P(heads) = 3/4 and P(tails) = 1/4. This means

H(S) = (3/4) × [log2 1/(3/4)] + (1/4) × [log2 1/(1/4)] ≈ 0.8 bit

This example shows that the result of flipping a nonfair coin gives us only about 0.8 bit of information (uncertainty). The amount of information here is less than the amount in Example F.3, because we expect to get heads most of the time; we are surprised only when we get tails.
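The two coin calculations in Examples F.3 and F.4 are easy to reproduce numerically. The Python sketch below (the function name is mine) evaluates the same two-term sums; the exact value in the second case is about 0.811 bit, which the text rounds to 0.8 bit.

import math

def coin_entropy(p_heads):
    """Entropy, in bits, of a coin whose probability of heads is p_heads (0 < p_heads < 1)."""
    p_tails = 1 - p_heads
    return (p_heads * math.log2(1 / p_heads)
            + p_tails * math.log2(1 / p_tails))

print(coin_entropy(1 / 2))   # 1.0 bit (fair coin, Example F.3)
print(coin_entropy(3 / 4))   # about 0.811 bit (nonfair coin, Example F.4)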
Joint Entropy
When we have two probability sample spaces, S1 and S2, we can define the joint entropy H(S1, S2) as
H(S1, S2) = ΣΣ P(x, y) × [log2 1/P(x, y)] bits

where the double sum is taken over all x ∈ S1 and y ∈ S2.
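As an illustration of the double sum, the Python sketch below computes H(S1, S2) for a small made-up joint distribution over two binary sample spaces; the probability table is an assumption chosen only so that it sums to 1.

import math

# Hypothetical joint distribution P(x, y) with S1 = {a, b} and S2 = {c, d}.
joint = {
    ("a", "c"): 0.4,
    ("a", "d"): 0.1,
    ("b", "c"): 0.2,
    ("b", "d"): 0.3,
}

# Joint entropy: sum over all pairs (x, y) of P(x, y) * log2(1 / P(x, y)).
h_joint = sum(p * math.log2(1 / p) for p in joint.values() if p > 0)

print(h_joint)   # about 1.846 bits for this particular table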
Maximum Entropy
It can be proven that for a particular probability sample space with n possible outcomes, maximum entropy can be achieved only if all the probabilities are the same (all outcomes are equally likely). In this case, the maximum entropy is
Hmax(S) = log2 n bits
In other words, the entropy of every probability sample space has an upper limit defined by this formula.
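A quick numerical check of this upper bound: the Python sketch below evaluates the entropy of a few arbitrarily chosen distributions over n = 4 outcomes. Only the uniform one reaches log2 4 = 2 bits; the others fall below it.

import math

def entropy(probs):
    """H(S) in bits; terms with probability 0 are treated as 0."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

n = 4
candidates = [
    [0.25, 0.25, 0.25, 0.25],   # uniform: reaches the maximum
    [0.70, 0.10, 0.10, 0.10],   # skewed (arbitrary example)
    [0.50, 0.50, 0.00, 0.00],   # only two outcomes ever occur
]

for probs in candidates:
    print(entropy(probs), "<=", math.log2(n))
# prints 2.0, then about 1.357, then 1.0, all at most 2.0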
F.1 MEASURING INFORMATION
How can we measure the information in an event? How much information does an event carry? Let us answer these questions through examples.
Example F.5
Now assume that we toss a totally nonfair coin, in which the outcome is always heads: P(heads) = 1 and P(tails) = 0. The entropy in this case is

H(S) = (1) × [log2 1/(1)] + (0) × [log2 1/(0)] = (1) × (0) + 0 = 0

where the second term is taken as 0, following the convention for outcomes with probability 0. There is no information (uncertainty) in this experiment. We know that the outcome will always be heads; the entropy is 0 bits.
Example F.1
Imagine a person sitting in a room. Looking out the window, she can clearly see that the sun is shining. If at this moment she receives a call (an event) from a neighbor saying, “It is now daytime,” does this message contain any information? It does not. She is already certain that it is daytime. The message does not remove any uncertainty in her mind.
H(S) = Σ P(s) × [log2 1/P(s)] bits
where s ∈ S ranges over the possible outcomes of the experiment. Note that if P(s) = 0, we let the corresponding term P(s) × [log2 1/P(s)] be 0 to avoid division by zero.
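A minimal Python sketch of this definition, including the convention that terms with P(s) = 0 contribute nothing (the function and variable names are my own):

import math

def entropy(dist):
    """Entropy H(S), in bits, of a probability sample space.

    dist maps each outcome s to its probability P(s).
    Terms with P(s) = 0 are skipped, following the convention above.
    """
    return sum(p * math.log2(1 / p) for p in dist.values() if p > 0)

# A fair six-sided die: every face has probability 1/6.
die = {face: 1 / 6 for face in range(1, 7)}
print(entropy(die))                            # about 2.585 bits, i.e., log2 6

# A totally nonfair coin: the P(tails) = 0 term is simply skipped.
print(entropy({"heads": 1.0, "tails": 0.0}))   # 0.0 bits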
Conditional Entropy
Hmin(S) = 0 bits
In other words, the entropy of every probability sample space has a lower limit defined by the above formula.
The entropy of a probability sample space is between 0 bits and log2 n bits, where n is the number of possible outcomes.
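This statement can also be checked experimentally. The Python sketch below draws a few random distributions over n = 5 outcomes (the choice of 5 and the normalization trick are arbitrary) and verifies that every entropy value lands between 0 and log2 n.

import math
import random

def entropy(probs):
    """H(S) in bits; outcomes with probability 0 contribute nothing."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

n = 5
for _ in range(3):
    weights = [random.random() for _ in range(n)]
    probs = [w / sum(weights) for w in weights]   # normalize so the probabilities sum to 1
    h = entropy(probs)
    assert 0 <= h <= math.log2(n) + 1e-9          # the bounds stated above
    print(round(h, 3), "is between 0 and", round(math.log2(n), 3))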
APPENDIX F
Information Theory
In this appendix, we discuss several concepts from information theory that are related to topics discussed in this book.
The above two examples show that there is a relationship between the usefulness of an event and the expectation of the receiver. If the receiver is surprised when the event happens, the message contains a lot of information; otherwise, it does not. In other words, the information content of a message is inversely related to the probability of the occurrence of that message. If the event is very probable, it does not contain any information (Example F.1); if it is very improbable, it contains a lot of information (Example F.2).
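One simple way to put a number on this inverse relationship is to reuse the per-outcome quantity log2 1/P from the entropy formula as a measure of surprise. The Python sketch below does this for the two events of Examples F.1 and F.2; the lottery probability is an invented figure used only for illustration.

import math

def surprise_bits(p):
    """log2(1/p): 0 for an event that is certain, large for a very improbable one."""
    return math.log2(1 / p)

print(surprise_bits(1.0))    # 0.0: "it is daytime" tells the receiver nothing
print(surprise_bits(1e-7))   # about 23.25: winning first prize (assumed odds of 1 in 10 million)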