Harvard University Game Theory Lecture Materials


Game Theory (Original Harvard Course Materials)

Note that a pure Nash equilibrium is a (degenerate) mixed equilibrium, too.
If the goal is to describe the behavior of real-world people rather than to come up with an explanation of how they should play a game, then the notion of NE and even IDSDS can be too restrictive. Behavioral game theory has tried to weaken the joint assumptions of rationality and common knowledge in order to come up with better theories of how real people play real games. Anyone interested should take David Laibson's course next year. Despite these reservations about Nash equilibrium, it is still a very useful benchmark and a starting point for any game analysis. In the following we will go through three proofs of the Existence Theorem using various levels of mathematical sophistication:

• existence in 2 × 2 games using elementary techniques
• existence in 2 × 2 games using a fixed-point approach
• a general existence theorem in finite games

You are only required to understand the simplest approach. The rest is for the intellectually curious.
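As a companion to the elementary 2 × 2 argument, here is a small Python sketch (my own illustration, not part of the original notes) that finds an equilibrium of a 2 × 2 game: it first scans the four pure profiles and, if none is an equilibrium, builds the mixed equilibrium from the two indifference conditions. The matching-pennies payoffs are assumed demo values, and the formula presumes the game is nondegenerate.

```python
# Sketch: find a Nash equilibrium of a 2x2 game (pure or mixed).
# A[i][j] is the row player's payoff, B[i][j] the column player's.

def nash_2x2(A, B):
    # 1) check the four pure-strategy profiles
    for i in range(2):
        for j in range(2):
            if A[i][j] >= A[1 - i][j] and B[i][j] >= B[i][1 - j]:
                return ("pure", i, j)
    # 2) otherwise use the indifference conditions: p makes the column player
    #    indifferent, q makes the row player indifferent (assumes nondegeneracy).
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return ("mixed", p, q)

# Matching pennies: no pure equilibrium, unique mixed equilibrium at (1/2, 1/2).
A = [[1, -1], [-1, 1]]   # row player
B = [[-1, 1], [1, -1]]   # column player
print(nash_2x2(A, B))    # ('mixed', 0.5, 0.5)
```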

Game Theory (Original Harvard Course Materials)


Lecture XVII: Dynamic Games with Incomplete Information
Markus M. Möbius
May 6, 2004

• Gibbons, sections 4.1 and 4.2
• Osborne, chapter 10

1 Introduction

In the last two lectures I introduced the idea of incomplete information. We analyzed some important simultaneous-move games such as sealed-bid auctions and public goods. In practice, almost all of the interesting models with incomplete information are dynamic games also. Before we talk about these games we'll need a new solution concept called Perfect Bayesian Equilibrium. Intuitively, PBE is to extensive-form games with incomplete information what SPE is to extensive-form games with complete information. The concept we did last time, BNE, is simply the familiar Nash equilibrium under the Harsanyi representation of incomplete information. In principle, we could use the Harsanyi representation and SPE in dynamic games of incomplete information. However, dynamic games with incomplete information typically don't have enough subgames to do SPE. Therefore, many 'non-credible' threats are possible again and we get too many unreasonable SPEs. PBE allows subgame reasoning at information sets which are not single nodes, whereas SPE only applies at single-node information sets of players (because only those can be part of a proper subgame). The following example illustrates some problems with SPE.

1.1 Example I - SPE

[The game trees for this example are omitted in this extract.] Its unique SPE is (R, B). The next game looks formally the same; however, in it SPE coincides with NE. The old SPE survives - all (pR + (1−p)R′, B) for any p are SPE. But there are suddenly strange SPEs such as (L, qA + (1−q)B) for q ≥ 1/2. Player 2's strategy looks like a non-credible threat again - but our notion of SPE can't rule it out! Remember: SPE can fail to rule out actions which are not optimal given any 'beliefs' about uncertainty.

Remark 1 This problem becomes severe with incomplete information: moves of Nature are not observed by one or both players. Hence the resulting extensive-form game will have no or few subgames. This and the above example illustrate the need to replace the concept of a 'subgame' with the concept of a 'continuation game'.

1.2 Example II: Spence's Job-Market Signalling

The most famous example of a dynamic game with incomplete information is Spence's signalling game. There are two players - a firm and a worker. The worker has some private information about his ability and has the option of acquiring some education. Education is always costly, but less so for more able workers. However, education does not improve the worker's productivity at all! In Spence's model education merely serves as a signal to firms. His model allows equilibria where able workers will acquire education and less able workers won't. Hence firms will pay high wages only to those who acquired education - however, they do this because education has revealed the type of the player rather than improved his productivity. Clearly this is an extreme assumption - in reality education presumably has dual roles: there is some signalling and some productivity enhancement.
But it is an intriguing insight that education might be nothing more than a costly signal which allows more able workers to differentiate themselves from less able ones. Let's look at the formal set-up of the game:

• Stage 0: Nature chooses the ability θ of a worker. Suppose Θ = {2, 3} and that Prob(θ = 2) = p and Prob(θ = 3) = 1 − p.
• Stage I: Player 1 (the worker) observes his type and chooses education level e ∈ {0, 1}. Education has cost ce/θ. Note that higher-ability workers have lower cost and that getting no education is costless.
• Stage II: Player 2 (the competitive labor market) chooses the wage rate w(e) of workers after observing the education level.

Suppose that u1(e, w; θ) = w − ce/θ and that u2(e, w; θ) = −(w − θ)². Note that education does not enter the firm's utility function. Also note that the BR of the firm is to set wages equal to the expected ability of the worker under this utility function. This is exactly what the competitive labor market would do if θ is equal to the productivity of a worker (the dollar amount of output he produces). If a firm pays above the expected productivity it will run a loss, and if it pays below, some other firm would come in and offer more to the worker. So the market should offer exactly the expected productivity. The particular (rather strange-looking) utility function we have chosen implements the market outcome with a single firm - it's a simple shortcut.

Spence's game is a signalling game. Each signalling game has the same three-part structure: nature chooses types, the sender (worker) observes his type and takes an action, the receiver (firm) sees that action but not the worker's type. Hence the firm tries to deduce the worker's type using his action. His action therefore serves as a signal. Spence's game is extreme because the signal (education) has no value to the firm except for its signalling function. This is not the case for all signalling models: think of a car manufacturer who can be of low or high quality and wants to send a signal to the consumer that he is a high-quality producer. He can offer a short or long warranty for the car. The extended warranty will not only signal his type but also benefit the consumer.

[The (Harsanyi) extensive-form representation of Spence's game, and of any other signalling game, is shown in a figure omitted from this extract.]

1.3 Why does the SPE concept together with the Harsanyi representation not work?

We could find the set of SPE in the Harsanyi representation of the game. The problem is that the game has no proper subgame in the second round when the firm makes its decision. Therefore, the firm can make unreasonable threats such as the following: both workers buy education and the firm pays the educated worker w = 3 − p (his expected productivity), and the uneducated worker gets w = −235.11 (or something else). Clearly, every worker would get education, and the firm plays a BR to a worker getting education (check for yourself using the Harsanyi representation). However, the threat of paying a negative wage is unreasonable. Once the firm sees a worker who has no education it should realize that the worker has at least ability level 2 and should therefore get a wage of at least w = 2.

2 Perfect Bayesian Equilibrium

Let G be a multistage game with incomplete information and observed actions in the Harsanyi representation. Write Θi for the set of possible types for player i and Hi for the set of possible information sets of player i. For each information set hi ∈ Hi denote the set of nodes in the information set by Xi(hi), and let Xi be the union of the Xi(hi) over all hi ∈ Hi. A strategy in G is a function si: Hi → Δ(Ai). Beliefs are a function μi: Hi → Δ(Xi) such that the support of the belief μi(hi) lies within Xi(hi).
Definition 1 A PBE is a strategy profile σ∗ together with a belief system μ such that

1. At every information set, strategies are optimal given beliefs and opponents' strategies (sequential rationality): σ∗i(hi) maximizes the expectation, under μi(x | hi), of ui(σi, σ∗−i | h, θi, θ−i).
2. Beliefs are always updated according to Bayes' rule when applicable.

The first requirement replaces subgame perfection. The second requirement makes sure that beliefs are derived in a rational manner: assuming that you know the other players' strategies, you try to derive as many beliefs as possible. Branches of the game tree which are reached with zero probability cannot be derived using Bayes' rule: here you can choose arbitrary beliefs. However, the precise specification will typically matter for deciding whether an equilibrium is a PBE or not.

Remark 2 In the case of complete information and observed actions, PBE reduces to SPE because beliefs are trivial: each information set is a singleton and the belief you attach to being there (given that you are in the corresponding information set) is simply 1.

2.1 What's Bayes' Rule?

There is a close connection between agents' actions and their beliefs. Think of the job signalling game. We have to specify the beliefs of the firm in the second stage when it does not know for sure the current node, but only the information set. Let's go through various strategies of the worker:

• The high-ability worker gets education and the low-ability worker does not: e(θ = 2) = 0 and e(θ = 3) = 1. In this case my beliefs at the information set e = 1 should be Prob(High | e = 1) = 1 and similarly Prob(High | e = 0) = 0.

• Both workers get education. In this case, we should have

  Prob(High | e = 1) = 1 − p.     (1)

  The beliefs after observing e = 0 cannot be determined by Bayes' rule because it is a probability-zero event - we should never see it if players follow their strategies. This means that we can choose beliefs freely at this information set.

• The high-ability worker gets education and the low-ability worker gets education with probability q. This case is less trivial. What's the probability of seeing a worker get education? It's 1 − p + pq. What's the probability of a worker being high ability and getting education? It's 1 − p. Hence the probability that the worker is high ability after we have observed him getting education is (1 − p)/(1 − p + pq). This is the non-trivial part of Bayes' rule.
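To see the three cases in one place, here is a short sketch (my addition, not from the lecture) that applies Bayes' rule mechanically; the prior p and the low type's mixing probability q are illustrative numbers.

```python
# Sketch: firm's posterior belief that the worker is the high type after observing e,
# given the prior and each type's probability of choosing education.
def posterior_high(e, p_low, educ_prob_low, educ_prob_high):
    """p_low = Prob(theta = 2); educ_prob_* = Prob(e = 1 | type)."""
    prob_e_low  = educ_prob_low  if e == 1 else 1 - educ_prob_low
    prob_e_high = educ_prob_high if e == 1 else 1 - educ_prob_high
    denom = p_low * prob_e_low + (1 - p_low) * prob_e_high
    if denom == 0:
        return None   # probability-zero event: Bayes' rule does not apply, beliefs are free
    return (1 - p_low) * prob_e_high / denom

p, q = 0.4, 0.3   # illustrative prior and low-type mixing probability
print(posterior_high(1, p, 0.0, 1.0))   # separating: 1.0
print(posterior_high(1, p, 1.0, 1.0))   # pooling: 1 - p = 0.6
print(posterior_high(0, p, 1.0, 1.0))   # off-path event: None (choose beliefs freely)
print(posterior_high(1, p, q,   1.0))   # semi-pooling: (1-p)/(1-p+p*q)
```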
Formally, we can derive the beliefs at some information set hi of player i as follows. There is a probability p(θj) that the other player is of type θj. These probabilities are determined by nature. Player j (i.e. the worker) has taken some action aj such that the information set hi was reached. Each type of player j takes action aj with some probability σ∗j(aj | θj) according to his equilibrium strategy. Applying Bayes' rule we can then derive the belief of player i that player j has type θj at information set hi:

  μi(θj | aj) = p(θj) σ∗j(aj | θj) / Σθ̃j∈Θj p(θ̃j) σ∗j(aj | θ̃j)     (2)

1. In the job signalling game with separating strategies, Bayes' rule gives us exactly what we expect - we believe that a worker who gets education is the high type.
2. In the pooling case Bayes' rule gives us Prob(High | e = 1) = (1 − p)/[p × 1 + (1 − p) × 1] = 1 − p. Note that Bayes' rule does NOT apply for finding the beliefs after observing e = 0 because the denominator is zero.
3. In the semi-pooling case we get Prob(High | e = 1) = (1 − p) × 1 / [p × q + (1 − p) × 1]. Similarly, Prob(High | e = 0) = (1 − p) × 0 / [p × (1 − q) + (1 − p) × 0] = 0.

3 Signalling Games and PBE

It turns out that signalling games are a very important class of dynamic games with incomplete information in applications. Because the PBE concept is much easier to state for the signalling-game environment, we define it once again in this section for signalling games. You should convince yourself that the more general definition from the previous section reduces to the definition below in the case of signalling games.

3.1 Definitions and Examples

Every signalling game has a sender, a receiver and two periods. The sender has private information about his type and can take an action in the first period. The receiver observes the action (signal) but not the type of the sender, and takes his action in return.

• Stage 0: Nature chooses the type θ1 ∈ Θ1 of player 1 from probability distribution p.
• Stage 1: Player 1 observes θ1 and chooses a1 ∈ A1.
• Stage 2: Player 2 observes a1 and chooses a2 ∈ A2.

The players' utilities are:

  u1 = u1(a1, a2; θ1)     (3)
  u2 = u2(a1, a2; θ1)     (4)

3.1.1 Example 1: Spence's Job Signalling Game
• worker is sender; firm is receiver
• θ is the ability of the worker (private information to him)
• A1 = {educ, no educ}
• A2 = {wage rate}

3.1.2 Example 2: Initial Public Offering
• player 1 - owner of a private firm
• player 2 - potential investor
• Θ - future profitability
• A1 - fraction of the company retained
• A2 - price paid by the investor for the stock

3.1.3 Example 3: Monetary Policy
• player 1 - the Fed
• player 2 - firms
• Θ - Fed's preference for inflation/unemployment
• A1 - first-period inflation
• A2 - expectation of second-period inflation

3.1.4 Example 4: Pretrial Negotiation
• player 1 - defendant
• player 2 - plaintiff
• Θ - extent of the defendant's negligence
• A1 - settlement offer
• A2 - accept/reject

3.2 PBE in Signalling Games

A PBE in the signalling game is a strategy profile (s∗1(θ1), s∗2(a1)) together with beliefs μ2(θ1 | a1) for player 2 such that

1. Players' strategies are optimal given their beliefs and the opponents' strategies, i.e.

  s∗1(θ1) maximizes u1(a1, s∗2(a1); θ1) for all θ1 ∈ Θ1     (5)
  s∗2(a1) maximizes Σθ1∈Θ1 u2(a1, a2; θ1) μ2(θ1 | a1) for all a1 ∈ A1     (6)

2. Player 2's beliefs are compatible with Bayes' rule. If any type of player 1 plays a1 with positive probability, then for all θ1 ∈ Θ1

  μ2(θ1 | a1) = p(θ1) Prob(s∗1(θ1) = a1) / Σθ̃1∈Θ1 p(θ̃1) Prob(s∗1(θ̃1) = a1).

3.3 Types of PBE in Signalling Games

To help solve for PBEs it helps to think of all PBEs as taking one of the following three forms:

1. Separating - different types take different actions and player 2 learns the type from observing the action.
2. Pooling - all types of player 1 take the same action; no information is revealed.
3. Semi-separating - one or more types mixes; partial learning (often the only type of equilibrium).
Remark 3 In the second stage of the education game the "market" must have an expectation that player 1 is type θ = 2 and attach probability μ(2 | a1) to the player being type 2. The wage in the second period must be between 2 and 3. This rules out the unreasonable threat of the NE I gave you in the education game (with negative wages).¹

Remark 4 In the education game suppose the equilibrium strategies are s∗1(θ = 2) = 0 and s∗1(θ = 3) = 1, i.e. only high types get education. Then for any prior (p, 1 − p) at the start of the game beliefs must be:

  μ2(θ = 2 | e = 0) = 1,  μ2(θ = 3 | e = 0) = 0,
  μ2(θ = 2 | e = 1) = 0,  μ2(θ = 3 | e = 1) = 1.

If player 1's strategy is s∗1(θ = 2) = ½ × 0 + ½ × 1 and s∗1(θ = 3) = 1:

  μ2(θ = 2 | e = 0) = 1,  μ2(θ = 3 | e = 0) = 0,
  μ2(θ = 2 | e = 1) = (p/2)/(p/2 + 1 − p) = p/(2 − p),  μ2(θ = 3 | e = 1) = (2 − 2p)/(2 − p).

Also note that Bayes' rule does NOT apply after an action which should not occur in equilibrium. Suppose s∗1(θ = 2) = s∗1(θ = 3) = 1; then it is OK to assume

  μ2(θ = 2 | e = 0) = 57/64,  μ2(θ = 3 | e = 0) = 7/64,
  μ2(θ = 2 | e = 1) = p,      μ2(θ = 3 | e = 1) = 1 − p.

The first pair is arbitrary.

¹ It also rules out the unreasonable SPE in Example I which I gave initially. Under any beliefs player 2 should strictly prefer B.

Remark 5 Examples SPE II and SPE III from the introduction now make sense - if players update according to Bayes' rule we get the 'reasonable' beliefs of being with equal probability in one of the two nodes.

4 Solving the Job Signalling Game

Finally, after 11 tough pages, we can solve our signalling game. The solution depends mainly on the cost parameter c.

4.1 Intermediate Costs 2 ≤ c ≤ 3

A separating equilibrium of the model is one in which only the able worker buys education and the firm pays wage 2 to the worker without education and wage 3 to the worker with education. The firm believes that the worker is able iff he gets educated.

• The beliefs are consistent with the equilibrium strategy profile.
• Now look at optimality. Player 2 sets the wage to the expected ability, so he is maximizing.
• Player 1 of type θ = 2 gets 3 − c/2 ≤ 2 for a1 = 1 and 2 for a1 = 0. Hence he should not buy education.
• Player 1 of type θ = 3 gets 3 − c/3 ≥ 2 for a1 = 1 and 2 for a1 = 0. Hence he should get educated.

1. Note that for too small or too big costs there is no separating PBE.
2. There is no separating PBE where the θ = 2 type gets an education and the θ = 3 type does not.

4.2 Small Costs c ≤ 1

A pooling equilibrium of the model is that both workers buy education and the firm pays wage w = 3 − p if it observes education, and wage 2 otherwise. The firm believes that the worker is able with probability 1 − p if it observes education, and that the worker is of low ability if it observes no education.

• The beliefs are consistent with Bayes' rule for e = 1. If e = 0 has been observed, Bayes' rule does not apply because e = 0 should never occur - hence any belief is fine. The belief that the worker is the low type if he does not get education makes sure that the worker gets punished for not getting educated.
• The firm pays the expected ability, hence it plays an optimal response. Taking p = 1/2 (as in the next subsection), the low-ability type won't deviate as long as 2.5 − c/2 ≥ 2 and the high-ability type won't deviate as long as 2.5 − c/3 ≥ 2. For c ≤ 1 both conditions are true.

1. While this pooling equilibrium only works for small c, there is always another pooling equilibrium where no worker gets education and the firm thinks that any worker who gets education is of the low type.
4.3 1 < c < 2

Assume that p = 1/2 for this section. In the parameter range 1 < c < 2 there is a semi-separating PBE of the model. The high-ability worker buys education and the low-ability worker buys education with positive probability q. The wage is w = 2 if the firm observes no education and w = 2 + 1/(1 + q) if it observes education. The belief that the worker is the high type is zero if he gets no education and 1/(1 + q) if he does.

• Beliefs are consistent (check!).
• The firm plays a BR.
• Player 1 of the low type won't deviate as long as 2 + 1/(1 + q) − c/2 ≤ 2.
• Player 1 of the high type won't deviate as long as 2 + 1/(1 + q) − c/3 ≥ 2.

Set 1 + q = 2/c. It is easy to check that the first condition is binding and the second condition is strictly satisfied. So we are done if we choose q = 2/c − 1. Note that as c → 2 we get back the separating equilibrium and as c → 1 we get the pooling one.
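The following sketch (my own check, with p = 1/2 as assumed in Section 4.3) verifies the worker's no-deviation conditions for the three candidate equilibria at a few illustrative cost levels; the wages and beliefs are the ones derived above, and the low type's binding indifference is tested as an equality up to rounding error.

```python
# Sketch: check worker incentive conditions in the three PBE of the job-signalling game (p = 1/2).
def check(c, p=0.5):
    results = {}
    # separating (2 <= c <= 3): w(e=0) = 2, w(e=1) = 3
    results["separating"] = (3 - c / 2 <= 2) and (3 - c / 3 >= 2)
    # pooling on education (c <= 1): w(e=1) = 3 - p, w(e=0) = 2
    results["pooling"] = (3 - p - c / 2 >= 2) and (3 - p - c / 3 >= 2)
    # semi-separating (1 < c < 2): low type mixes with q = 2/c - 1, w(e=1) = 2 + 1/(1+q)
    q = 2 / c - 1
    if 0 <= q <= 1:
        w1 = 2 + 1 / (1 + q)
        results["semi-separating"] = (abs(w1 - c / 2 - 2) < 1e-9) and (w1 - c / 3 >= 2)
    else:
        results["semi-separating"] = False
    return results

for c in (0.8, 1.5, 2.5):
    print(c, check(c))   # only the pooling / semi-separating / separating entry is True, respectively
```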

The Harvard Game Theory Course (哈佛博弈课)

Reading-Notes Template
01 Mind Map
03 Reading Notes  05 Contents Analysis
Contents
02 Content Summary  04 Highlights  06 About the Author
Mind Map
Keyword-Analysis Mind Map
Life
Intentions
Analysis  Course
Techniques
Information
Cost

Players
Game Theory  Games
Principles
Harvard
Strategy
Direction
Choice
Art
Rationality
Importance
Content Summary
This is a business/management book based on the Harvard game theory course. It dissects game theory layer by layer, draws on examples from everyday life and current social issues, and explains the games we are likely to encounter so that readers can use them in practice. Reading it helps you "know yourself and know the other side": understand what the other party is thinking, what they know, what motivates them, and even how they see you. It helps you "see opportunities first": your choice of strategy should not rest purely on your own wishes but should also take into account how the other participants will act. And it helps you "influence the other side": turn their thinking around, stay able to attack or to defend as needed, sharpen your decisiveness, and guide and use other people's emotions to steer the outcome toward your own victory.
Lesson 2: This Is a Game
Lesson 1: A Different Kind of Game
Lesson 3: What Games Bring Us
Tying your own hands · The strategies of ants and lions · Contests between the whole and coalitions · The high-salary problem troubling the NBA · Non-cooperative choices
Strategic interactive decisions · The elements of a game · What rationality means · Games are everywhere
Reaching equilibrium · Zero-sum games · Non-zero-sum games · Negative-sum games · Positive-sum games · Repeated versus one-shot games
Lesson 5: The Centipede Game
The evolution of toughness and moderation · Combining threats and commitments
Will win-win last? · Finding your own position and direction
Free riding · Free-riding behavior is everywhere
Lesson 11: You Are Not You · Lesson 12: Optimal Payoffs
Lesson 13: The Symmetry of Information
Lesson 14: Don't Be That Kind of Fool
Lesson 16: The Art of Communication
Lesson 15: What Do You Really Want?

Section 2 (Game Theory Lecture Notes, Harvard University)

GOV 2005: Game Theory
Section 2: Externalities
Alexis Diamond adiamond@
Agenda
• Key terms and definitions
• Complementarity and cross-partial derivatives
• πi = 2(i + j + cij) − i²
• πj = 2(i + j + cij) − j²
• Best response functions:
– dπi/di = 2 + 2cj − 2i = 0
– i* = 1 + cj … this is our BR function for agent i
Nash Eqm (TR & DC)
strategy profiles
Pareto Optima
Set of Rationalizable Strategies: {T,D} × {C,R}
Set of Weakly Congruent Strategies: {T,D} × {C,R}
Best-Response-Complete Sets: {T,D} × {C,R} or {T,D} × {L,C,R}
Set of Congruent Strategies: {T,D} × {C,R}
Cournot Oligopoly: Explanation
Positive Cross-Partial Derivatives = Complementarity
π1 = (1000 − q1 − q2)q1 − 100q1
dπ1/dq1 = 1000 − 2q1 − q2 − 100
d(dπ1/dq1)/dq2 = −1 < 0
• Why? What does this mean?
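A small sympy sketch (my own check, not part of the slides) reproduces both calculations: the externality payoff above has cross-partial 2c, so the actions are complements whenever c > 0, while the Cournot profit has cross-partial −1, i.e. strategic substitutes.

```python
# Sketch: cross-partial derivatives as a test for complements vs. substitutes.
import sympy as sp

i, j, c = sp.symbols("i j c")
pi_i = 2 * (i + j + c * i * j) - i**2          # externality example from the slide
print(sp.diff(pi_i, i, j))                      # 2*c  -> positive iff c > 0 (complements)
print(sp.solve(sp.diff(pi_i, i), i))            # [c*j + 1] -> best response i* = 1 + c*j

q1, q2 = sp.symbols("q1 q2")
pi_1 = (1000 - q1 - q2) * q1 - 100 * q1         # Cournot profit
print(sp.diff(pi_1, q1, q2))                    # -1 -> strategic substitutes
```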

Harvard Game Theory Course Slides - Section 5


[Game tree omitted from this extract: at the initial node ø the Challenger chooses Out or In; if Out, the payoffs are (1, 2); if In, the Incumbent chooses Fight, with payoffs (0, 0), or Acquiesce.]

What's the solution? (Use backward induction.)
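Here is a minimal backward-induction sketch (my addition). The Challenger's payoffs follow the slide further below (Out = 1, In & Acquiesce = 2, In & Fight = 0); the Incumbent's payoffs (2 after Out, 1 after Acquiesce, 0 after Fight) are the standard textbook values and are assumptions here.

```python
# Sketch: backward induction in the entry game.
def backward_induction():
    # The Incumbent moves last: compare its payoffs after "In".
    incumbent_payoff = {"Acquiesce": 1, "Fight": 0}     # assumed values
    best_reply = max(incumbent_payoff, key=incumbent_payoff.get)
    # The Challenger anticipates the Incumbent's best reply.
    challenger_payoff = {"Out": 1, "In": {"Acquiesce": 2, "Fight": 0}[best_reply]}
    entry_choice = max(challenger_payoff, key=challenger_payoff.get)
    return entry_choice, best_reply

print(backward_induction())   # ('In', 'Acquiesce')
```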
When Backward Induction Fails
A Variant of the Entry Game (Empty Threat Game)
– Challenger’s preferences represented by payoff function u1
• u1(In, Acquiesce) = 2, u1(Out) = 1, u1(In, Fight) = 0
– Incumbent’s preferences represented by payoff function u2
Key Terms
• Sequential Rationality: players should be rational at every decision-making opportunity (information set)
• Backward Induction: the process of analyzing a game from back to front, eliminating actions which are dominated given the terminal nodes that would be reached
– The strategic setting
– Two-round, 3-candidate closed-rule game
– Two-round, n-candidate closed-rule game
– Infinite-round, 3-candidate closed-rule game
– Open-rule games
– Endogenizing the rules

Game Theory (Original Harvard Course Materials)


1.2 Example II

        L      M      R
U      2,2    1,1    4,0
D      1,2    4,1    3,5
In this game the unique Nash equilibrium is (U,L). It is easy to see that (U,L) is a NE because both players would lose from deviating to any other strategy. To show that there are no other Nash equilibria we could check each other strategy profile, or note that S∞₁ = {U} and S∞₂ = {L} and use:

Proposition 1 If s∗ is a pure-strategy Nash equilibrium of G then s∗ ∈ S∞.
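A brute-force check (my addition, not part of the notes) confirms that (U, L) is the only pure-strategy Nash equilibrium of this game:

```python
# Sketch: enumerate all pure-strategy profiles of Example II and test the Nash conditions.
payoffs = {  # (row, col): (u1, u2)
    ("U", "L"): (2, 2), ("U", "M"): (1, 1), ("U", "R"): (4, 0),
    ("D", "L"): (1, 2), ("D", "M"): (4, 1), ("D", "R"): (3, 5),
}
rows, cols = ("U", "D"), ("L", "M", "R")

def is_nash(r, c):
    u1, u2 = payoffs[(r, c)]
    best_row = all(u1 >= payoffs[(r2, c)][0] for r2 in rows)   # player 1 cannot improve
    best_col = all(u2 >= payoffs[(r, c2)][1] for c2 in cols)   # player 2 cannot improve
    return best_row and best_col

print([(r, c) for r in rows for c in cols if is_nash(r, c)])   # [('U', 'L')]
```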
The best-response functions are

  BRi(qj) = (α − c)/(2β) − qj/2   if qj ≤ (α − c)/β
  BRi(qj) = 0                     otherwise

The NE is the solution to q1 = BR1(q2) and q2 = BR2(q1). This system has exactly one solution. This can be shown algebraically or simply by looking at the intersections of the BR graphs in figure 1. Because of symmetry we know that q1 = q2 = q∗. Hence we obtain:

  q∗ = (α − c)/(2β) − q∗/2,

so that q∗ = (α − c)/(3β).
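As a quick numerical illustration (my addition), iterating the best responses from any starting point converges to this quantity; the parameter values below are arbitrary.

```python
# Sketch: iterate the Cournot best responses and compare with q* = (alpha - c) / (3 * beta).
alpha, beta, c = 10.0, 1.0, 4.0      # illustrative parameter values

def br(q_other):
    q = (alpha - c) / (2 * beta) - q_other / 2
    return max(q, 0.0)               # never produce a negative quantity

q1 = q2 = 0.0
for _ in range(50):                  # simultaneous best-response updating
    q1, q2 = br(q2), br(q1)

print(q1, q2, (alpha - c) / (3 * beta))   # all three are 2.0
```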

Section 10 (Game Theory Lecture Notes, Harvard University)


Principal-Agent Models & Incentive Compatibility / Participation Constraints

Suppose Pat manages a large computer software company and Allen is a talented program designer. Pat would like to hire Allen to develop a new software package. If Allen works for Pat, Allen must choose whether to expend high effort or low effort on the job. At the end of the work cycle, Pat will learn whether the project is successful or not. A successful project yields revenue of 6 for the firm, whereas the revenue is 2 if the project is unsuccessful. Success depends on Allen's high effort as well as on a random factor. Specifically, if Allen expends high effort, then success will be achieved with probability 1/2; if Allen expends low effort, then the project is unsuccessful for sure. Assume that, to expend high effort, Allen must endure a personal cost of 1. The parties can write a contract that specifies compensation for Allen conditional on whether the project is successful (but it cannot be conditioned directly on Allen's effort). Assume that Allen values money according to the utility function vA(x) = xᵃ, with 0 < a < 1. Finally, suppose Pat's utility function is vP(x) = x.

Imagine that the parties interact as depicted in the (omitted) game tree. At the beginning of the game, Pat offers Allen a wage and bonus package. The wage w is to be paid regardless of the project outcome, whereas the bonus b would be paid only if the project is successful. Then Allen decides whether or not to accept the contract. If he declines (N) the game ends, with Pat getting 0 and Allen obtaining utility of 1 (corresponding to his outside opportunities). If Allen accepts the contract (Y), then he decides whether to exert high (H) or low (L) effort. Low effort leads to an unsuccessful project, whereby Pat gets revenue of 2 minus the wage w and Allen gets his utility of the wage, wᵃ.
High effort leads to a chance node, where nature picks whether the project is successful (with probability 1/2) or not (probability 1/2). An unsuccessful project implies the same payoffs as with low effort, except that, in this case, Allen also pays his effort cost of 1. A successful project raises Pat's revenue to 6 and triggers the bonus b, paid to Allen in addition to the wage. Calculating the expected payoffs from the chance node, we can rewrite the game as shown above.

To solve the game and learn about the interaction of risk and incentives, we use backward induction. Start by observing that Pat would certainly like Allen to exert high effort. In fact, it is inefficient for Allen to expend low effort, because Pat can compensate Allen for exerting high effort by increasing his pay by 1 (offsetting Allen's effort cost). By doing so, Pat's expected revenue increases by 2. Another way of looking at this is that if Allen's effort were verifiable, the parties would write a contract that induces high effort. Given that high effort is desired, we must ask whether there is a contract that induces it. Can Pat find a wage and bonus package that motivates Allen to exert high effort and gives Pat a large payoff?

What can be accomplished without a bonus? b = 0 implies Allen has no incentive to exert high effort; he gets wᵃ when he chooses L and less, (w − 1)ᵃ, when he chooses H. The best that Pat can do without a bonus is to offer w = 1, which Allen is willing to accept (he will accept no less, given his outside option). Thus, the best no-bonus contract (w = 1 and b = 0) yields the payoff vector (1, 1).

Next, consider a bonus contract designed to induce high effort. In order for Allen to be motivated to exert high effort, his expected payoff from H must be at least as great as his payoff from L:

  (1/2)(w + b − 1)ᵃ + (1/2)(w − 1)ᵃ ≥ wᵃ.

In principal-agent models, this kind of inequality is usually called the effort constraint or incentive compatibility constraint. In addition to the effort constraint, the contract must give Allen an expected payoff at least as great as the value of his outside opportunity:

  (1/2)(w + b − 1)ᵃ + (1/2)(w − 1)ᵃ ≥ 1.

Theorists call this the participation constraint. Assuming she wants to motivate high effort, Pat will offer Allen the bonus contract satisfying the two inequalities above.
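As a rough numerical companion (my addition, not part of the original notes), the sketch below fixes a = 0.5 for concreteness and grid-searches for the cheapest wage-bonus pair satisfying the effort and participation constraints; the grid bounds and step size are arbitrary choices, so the answer is only approximate.

```python
# Sketch: find the cheapest (w, b) contract inducing high effort, assuming a = 0.5.
a = 0.5

def allen_high(w, b):      # Allen's expected utility from accepting and exerting high effort
    return 0.5 * (w + b - 1) ** a + 0.5 * (w - 1) ** a

def allen_low(w, b):       # Allen's utility from accepting and exerting low effort
    return w ** a

best = None
for w_cents in range(100, 400):            # wages from 1.00 to 3.99
    for b_cents in range(0, 600):          # bonuses from 0.00 to 5.99
        w, b = w_cents / 100, b_cents / 100
        if allen_high(w, b) >= allen_low(w, b) and allen_high(w, b) >= 1:
            cost = w + 0.5 * b             # Pat's expected wage bill under high effort
            if best is None or cost < best[0]:
                best = (cost, w, b)

cost, w, b = best
print(w, b, 0.5 * 6 + 0.5 * 2 - cost)      # Pat's expected payoff = expected revenue - cost
```

The printed payoff exceeds 1, the value of the best no-bonus contract, so inducing high effort is worthwhile for these parameter values.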

Game Theory Teaching Slides (complete set)


II. The Economic Origins of Game Theory

Several ideas from economics provide the foundations of game theory; the most important is the so-called "rational agent."

The tool for describing a rational agent is the rational preference relation. For convenience, we represent preferences with a utility function (called a payoff function in game theory).

An important economic theorem underlying game theory is the principle of rational choice: if a decision-maker's preferences are rational, then a (finite) choice set always contains an optimal choice; this choice may be unique or there may be several.

Definition 2.1 The normal (or strategic) form of a game consists of three elements: the set of players N, the strategy space S, and the payoff functions u, i.e. G = {N, S, u}.

Note that in most static games of complete information a strategy coincides with an action, so G = {A, u}. Strictly speaking, however, a strategy is not the same thing as an action.

We can illustrate this with an example.

[Example 1] Attack and Defense

Symmetric games and symmetric equilibria can save a great deal of work, which is why the examples used in game theory are usually symmetric games.

Loosely speaking, in a symmetric game the subscripts identifying the players can be dropped in the analysis without affecting the results.

IV. Mixed Strategies

The most fundamental question in game theory is the existence of equilibrium. If equilibrium does not exist, all of the analysis is wasted effort. This is exactly why mixed strategies are introduced: if we restrict attention to pure strategies, an equilibrium may fail to exist.

Two sides contest a stronghold. There are two attack routes, X and Y. The attacker has two armies and the defender also has two armies. The defender repels the attack only if its force on a route is at least as large as the attacker's there; otherwise the stronghold falls.

The defender's plans (strategies) are therefore (0,2), (1,1), (2,0), i.e. the number of armies stationed on routes X and Y; likewise the attacker's plans are (0,2), (1,1), (2,0). It is easy to see that an action is not a strategy: a strategy is a plan of action.
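A small enumeration (my addition) of this attack-and-defense example shows that it has no pure-strategy Nash equilibrium, which is exactly why mixed strategies are needed; scoring a successful defense as +1 and a loss as −1 is an assumed convention.

```python
# Sketch: enumerate the attack/defense game. A route holds if the defenders stationed
# there are at least as many as the attackers; the stronghold falls if either route breaks.
allocations = [(0, 2), (1, 1), (2, 0)]          # armies on routes X and Y

def defender_payoff(defense, attack):
    holds = all(d >= a for d, a in zip(defense, attack))
    return 1 if holds else -1                   # zero-sum convention (assumed)

matrix = {(d, a): defender_payoff(d, a) for d in allocations for a in allocations}

# no pure-strategy Nash equilibrium: at every profile someone wants to deviate
pure_ne = [(d, a) for d in allocations for a in allocations
           if all(matrix[(d, a)] >= matrix[(d2, a)] for d2 in allocations)
           and all(-matrix[(d, a)] >= -matrix[(d, a2)] for a2 in allocations)]
print(pure_ne)   # [] -> only a mixed-strategy equilibrium exists
```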
A few points deserve attention:

(1) The payoff function representing a given preference is not unique, but it is unique up to a monotone transformation.

Game Theory - Harvard University - Section 5


[Slide diagrams omitted from this extract: extensive-form game trees for the entry game - the Challenger chooses In or Out at node ø (Out gives payoffs (1, 2)), and after In the Incumbent chooses Acquiesce or Fight - together with variants in which the Incumbent chooses Spend or Save and the Challenger can Hit Back or Back out.]

Extensive Games w/ Perfect Info
• In a strategic game, NE rationale is that in steady state, each player’s experience playing the game leads her belief about the other players’ actions to be correct.
– The strategic setting
– Two-round, 3-candidate closed-rule game
– Two-round, n-candidate closed-rule game
– Infinite-round, 3-candidate closed-rule game
– Open-rule games
– Endogenizing the rules
• Terminal History: (In, Acquiesce), (In, Fight), and (Out)
• Player Function: P(ø) = challenger , P(In) = incumbent
• Preferences for the Players:

Section 3 (Game Theory Lecture Notes, Harvard University)


Modus Operandi

Are there systems which (1) encourage honest information-sharing, and (2) use that information in a reasonable way?
• First: Define the setting
       1st   2nd   3rd
X       B     A     C
Y       B     C     A
Z       C     B     A
Is social order in Condorcet world going to match the social order in the Alternative world?
B vs. C? A vs. B? A vs. C?
Arrow’s Theorem: Step II
• Positive responsiveness
– Departures from a tie must break a tie
So, what is still admissible?
May’s Theorem: Step III
May's Theorem
A social welfare functional F(p1, p2, … , pN) corresponds to majority voting iff it satisfies: (1) symmetry among agents; (2) neutrality among alternatives; and (3) positive responsiveness
– Only my A vs. B is relevant to society's A vs. B
– For its A vs. B, society only needs my A vs. B

Game Theory (Original Harvard Course Materials)

In all our auctions there are n participants; each participant has a valuation vi and submits a bid bi (his action). The rules of the auction determine the probability qi(b1, .., bn) that agent i wins the auction and the expected price pi(b1, .., bn) which he pays. His utility is simply ui = qi vi − pi.

The probability of player i winning the auction by bidding b is

  Prob(fj(vj) ≤ b) = Prob(vj ≤ fj⁻¹(b)) = F(fj⁻¹(b)) = fj⁻¹(b)     (1)

The last equality follows because F is the uniform distribution. The expected utility from bidding b is therefore (vi − b) fj⁻¹(b); in a symmetric equilibrium with b = f(vi) the first-order condition becomes

  (vi − f(vi)) − vi f′(vi) = 0     (4)

This is a differential equation and can be rewritten as follows:

  vi = vi f′(vi) + f(vi)     (5)

We can integrate both sides and get:

  (1/2) vi² = vi f(vi),

so that (using f(0) = 0) the equilibrium bidding function is f(vi) = vi/2.
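A quick numerical check (my addition): with two bidders, values uniform on [0, 1], and an opponent who bids vj/2, a grid search over my own bid confirms that the best reply is approximately v/2, consistent with the bidding function just derived.

```python
# Sketch: expected payoff from bidding b with value v, against an opponent who bids v_j / 2.
def expected_payoff(v, b):
    win_prob = min(2 * b, 1.0)          # Prob(v_j / 2 <= b) = Prob(v_j <= 2b) under U[0, 1]
    return (v - b) * win_prob

for v in (0.3, 0.6, 0.9):
    grid = [k / 1000 for k in range(0, 1001)]
    best_bid = max(grid, key=lambda b: expected_payoff(v, b))
    print(v, best_bid)                  # roughly v / 2 each time
```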

Harvard Game Theory Lecture Notes - Section 1


Example: Alliance Formation
• Weakly dominant versus strongly (strictly dominant) strategies
Mixed Strategy
• Moves are chosen randomly from the set of pure strategies
• Every simultaneous move game has a Nash equilibrium in mixed strategies
Game Theory is just one of several formal modeling approaches
General Equilibrium: interactions among many (infinite) agents, where any one agent's actions have no effect on other agents.
                     Player 2
                Left         Right
Player 1  Up    -8, 10       -10, 9
          Down   7, 6          6, 5
Expected Value:
Avg payoff for playing with prob distribution p, given others play strategies with prob distribution q
Three or Four Important Concepts
Beliefs: Probability distribution over strategies of other players
Dominant Strategy: a strategy that is best no matter what your opponent does

Section 7 (Game Theory Lecture Notes, Harvard University)


Payoffs if 1’s building costs are high State of the World SH
Payoffs if 1’s building costs are low State of the World SL
What is Player 1’s optimal strategy?
Let p1 denote the prior probability player 2 assigns to p(SH); this implies p(SL) = 1 − p1.
Let x = player 1's probability of building when her cost is low; let y = player 2's probability of entry.
Already shown: the best response for the low-cost Player 1 is Build (x = 1) if 1.5y + 3.5(1 − y) > 2y + 3(1 − y), i.e. when y < ½.
Modifying the Example
Payoffs if Player 1's building costs are high (State of the World SH):

                              PLAYER 2
                        Enter        Don't Enter
PLAYER 1  Build         0, -1        2, 0
          Don't Build   2, 1         3, 0

Payoffs if Player 1's building costs are low (State of the World SL):

                              PLAYER 2
                        Enter        Don't Enter
PLAYER 1  Build         1.5, -1      3.5, 0
          Don't Build   2, 1         3, 0
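A tiny check (my addition) of the claim above, using the low-cost (SL) payoffs: building beats not building exactly when the entry probability y is below ½.

```python
# Sketch: low-cost Player 1's expected payoffs when Player 2 enters with probability y.
def build_payoff(y):       return 1.5 * y + 3.5 * (1 - y)
def dont_build_payoff(y):  return 2.0 * y + 3.0 * (1 - y)

for y in (0.25, 0.5, 0.75):
    print(y, build_payoff(y) > dont_build_payoff(y))   # True, False, False
```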

Game Theory (Original Harvard Course Materials)


Lecture VIII: Learning
Markus M. Möbius
March 10, 2004

Learning and evolution are the second set of topics which are not discussed in the two main texts. I tried to make the lecture notes self-contained.

• Fudenberg and Levine (1998), The Theory of Learning in Games, Chapters 1 and 2

1 Introduction

What are the problems with Nash equilibrium? It has been argued that Nash equilibria are a reasonable minimum requirement for how people should play games (although this is debatable, as some of our experiments have shown). It has been suggested that players should be able to figure out Nash equilibria starting from the assumption that the rules of the game, the players' rationality and the payoff functions are all common knowledge.¹ As Fudenberg and Levine (1998) have pointed out, there are some important conceptual and empirical problems associated with this line of reasoning:

1. If there are multiple equilibria it is not clear how agents can coordinate their beliefs about each other's play by pure introspection.
2. Common knowledge of rationality and about the game itself can be difficult to establish.
3. Equilibrium theory does a poor job explaining play in early rounds of most experiments, although it does much better in later rounds. This shift from non-equilibrium to equilibrium play is difficult to reconcile with a purely introspective theory.

¹ We haven't discussed the connection between knowledge and Nash equilibrium. Assume that there is a Nash equilibrium σ∗ in a two-player game and that each player's best response is unique. In this case player 1 knows that player 2 will play σ∗₂ in response to σ∗₁, player 2 knows that player 1 knows this, and so on. Common knowledge is important for the same reason that it matters in the coordinated-attack game we discussed earlier. Each player might be unwilling to play her prescribed strategy if she is not absolutely certain that the other player will do the same.

1.1 Learning or Evolution?

There are two main ways to model the processes according to which players change the strategies they are using to play a game. A learning model is any model that specifies the learning rules used by individual players and examines their interaction when the game (or games) is played repeatedly.
These types of models will be the subject of today's lecture. Learning models quickly become very complex when there are many players involved. Evolutionary models do not specifically model the learning process at the individual level. The basic assumption there is that some unspecified process at the individual level leads the population as a whole to adopt strategies that yield improved payoffs. These types of models will be the subject of the next few lectures.

1.2 Population Size and Matching

The natural starting point for any learning (or evolutionary) model is the case of fixed players. Typically, we will only look at 2-by-2 games which are played repeatedly between these two fixed players. Each player faces the task of inferring future play from the past behavior of agents. There is a serious drawback from working with fixed agents. Due to the repeated interaction in every game, players might have an incentive to influence the future play of their opponent. For example, in most learning models players will defect in a Prisoner's Dilemma because cooperation is strictly dominated for any beliefs I might hold about my opponent. However, if I interact frequently with the same opponent, I might try to cooperate in order to 'teach' the opponent that I am a cooperator. We will see in a future lecture that such behavior can indeed be a Nash equilibrium in a repeated game. There are several ways in which repeated-play considerations can be assumed away.

1. We can imagine that players are locked into their actions for quite a while (they invest infrequently, can't build a new factory overnight, etc.) and that their discount factors (the factor by which they weight the future) are small compared to that lock-in length. It then makes sense to treat agents as approximately myopic when making their decisions.
2. An alternative is to dispense with the fixed-player assumption, and instead assume that agents are drawn from a large population and are randomly matched against each other to play games. In this case, it is very unlikely that I encounter a recent opponent in a round in the near future. This breaks the strategic links between the rounds and allows us to treat agents as approximately myopic again (i.e. they maximize their short-term payoffs).

2 Cournot Adjustment

In the Cournot adjustment model two fixed players move sequentially and choose a best response to the play of their opponent in the last period. The model was originally developed to explain learning in the Cournot model. Firms start from some initial output combination (q⁰₁, q⁰₂). In the first round both firms adapt their output to be the best response to the opponent's initial output. They therefore play (BR₁(q⁰₂), BR₂(q⁰₁)). This process is repeated, and it can easily be seen that in the case of linear demand and constant marginal costs the process converges to the unique Nash equilibrium. If there are several Nash equilibria, the initial conditions will determine which equilibrium is selected.

2.1 Problems with Cournot Learning

There are two main problems:

• Firms are pretty dim-witted. They adjust their strategies today as if they expect firms to play the same strategy as yesterday.
• In each period play can actually change quite a lot. Intelligent firms should anticipate their opponents' play in the future and react accordingly. Intuitively, this should speed up the adjustment process.

Cournot adjustment can be made more realistic by assuming that firms are 'locked in' for some time and that they move alternately. Firm 1 moves in periods 1, 3, 5, ... and firm 2 moves in periods 2, 4, 6, ... Starting from some initial play (q⁰₁, q⁰₂), firms will play (q¹₁, q⁰₂) in round 1 and (q¹₁, q²₂) in round 2.
Clearly, the Cournot dynamics with alternating moves has the same long-run behavior as the Cournot dynamics with simultaneous moves. Cournot adjustment will be approximately optimal for firms if the lock-in period is large compared to the discount rate of firms. The less locked-in firms are, the smaller the discount rate has to be (the discount rate is the weight on next period's profits). Of course, the problem with the lock-in interpretation is the fact that it is not really a model of learning anymore. Learning is irrelevant because firms choose their optimal action in each period.

3 Fictitious Play

In the process of fictitious play players assume that their opponents' strategies are drawn from some stationary but unknown distribution. As in the Cournot adjustment model we restrict attention to a fixed two-player setting. We also assume that the strategy sets of both players are finite.

In fictitious play players choose the best response to their assessment of their opponent's strategy. Each player has some exogenously given weighting function κ⁰ᵢ: S₋ᵢ → ℝ₊. After each period the weights are updated by adding 1 to each opponent strategy each time it has been played:

  κᵗᵢ(s₋ᵢ) = κᵗ⁻¹ᵢ(s₋ᵢ)       if s₋ᵢ ≠ sᵗ⁻¹₋ᵢ
  κᵗᵢ(s₋ᵢ) = κᵗ⁻¹ᵢ(s₋ᵢ) + 1   if s₋ᵢ = sᵗ⁻¹₋ᵢ

Player i assigns probability γᵗᵢ(s₋ᵢ) to strategy profile s₋ᵢ:

  γᵗᵢ(s₋ᵢ) = κᵗᵢ(s₋ᵢ) / Σ over s̃₋ᵢ ∈ S₋ᵢ of κᵗᵢ(s̃₋ᵢ)

The player then chooses a pure strategy which is a best response to his assessment of other players' strategy profiles. Note that there is not necessarily a unique best response to every assessment - hence fictitious play is not always unique. We also define the empirical distribution dᵗᵢ(sᵢ) of each player's strategies as

  dᵗᵢ(sᵢ) = (Σ from t̃ = 0 to t of I_t̃(sᵢ)) / t

The indicator function I_t̃(sᵢ) is set to 1 if the strategy has been played in period t̃ and 0 otherwise. Note that as t → ∞ the empirical distribution dᵗⱼ of player j's strategies approximates the weighting function κᵗᵢ (since in a two-player game we have j = −i).

Remark 1 The updating of the weighting function looks intuitive but also somewhat arbitrary. It can be made more rigorous in the following way. Assume that there are n strategy profiles in S₋ᵢ and that each profile is played by player i's opponents with probability p(s₋ᵢ). Agent i has a prior belief according to which these probabilities are distributed. This prior is a Dirichlet distribution whose parameters depend on the weighting function. After each round agents update their prior: it can be shown that the posterior belief is again Dirichlet, and the parameters of the posterior depend now on the updated weighting function.

3.1 Asymptotic Behavior

Will fictitious play converge to a Nash equilibrium? The next proposition gives a partial answer.

Proposition 1 If s is a strict Nash equilibrium, and s is played at date t in the process of fictitious play, then s is played at all subsequent dates. That is, strict Nash equilibria are absorbing for the process of fictitious play. Furthermore, any pure-strategy steady state of fictitious play must be a Nash equilibrium.
Proof: Assume that s = (sᵢ, sⱼ) is played at time t. This implies that sᵢ is a best response to player i's assessment at time t. But his next-period assessment will put higher relative weight on strategy sⱼ. Because sᵢ is a BR to sⱼ and to the old assessment, it will also be a best response to the updated assessment. Conversely, if fictitious play gets stuck in some pure steady state, then players' assessments converge to the empirical distribution. If the steady state is not Nash, players would eventually deviate.

A corollary of the above result is that fictitious play cannot converge to a pure steady state in a game which has only mixed Nash equilibria, such as matching pennies. [In matching pennies] the empirical distributions over each player's strategies converge to (1/2, 1/2) - this is precisely the unique mixed Nash equilibrium. This observation leads to a general result.

Proposition 2 Under fictitious play, if the empirical distributions over each player's choices converge, the strategy profile corresponding to the product of these distributions is a Nash equilibrium.

Proof: Assume that there is a profitable deviation. Then in the limit at least one player should deviate - but this contradicts the assumption that strategies converge.

These results don't tell us when fictitious play converges. The next theorem does precisely that.

Theorem 1 Under fictitious play the empirical distributions converge if the stage game has generic payoffs and is 2 × 2, or zero-sum, or is solvable by iterated strict dominance.

We won't prove this theorem in this lecture. However, it is intuitively clear why fictitious play respects IDSDS. A strictly dominated strategy can never be a best response. Therefore, in the limit fictitious play should put zero relative weight on it. But then all strategies deleted in the second step can never be best responses and should have zero weight as well, etc.

3.2 Non-Convergence is Possible

Fictitious play does not have to converge at all. An example of that is due to Shapley. [Shapley's 3 × 3 example game is omitted in this extract.] Play cycles through the profiles (M,R), (M,L), (D,L), (D,M), (T,M). One can show that the number of times each profile is played increases at a fast enough rate that the play never converges. Also note that the diagonal entries are never played.

3.3 Pathological Convergence

Convergence in the empirical distribution of strategy choices can be misleading even though the process converges to a Nash equilibrium. Take the following game: [payoff matrix and the opening of the example are omitted in this extract]. In the next period the weights are (2, √2), and both play B; the outcome is the alternating sequence (B,B), (A,A), (B,B), and so on. The empirical frequencies of each player's choices converge to (1/2, 1/2), which is the Nash equilibrium. The realized play is always on the diagonal, however, and both players receive payoff 0 each period. Another way of putting this is that the empirical joint distribution on pairs of actions does not equal the product of the two marginal distributions, so the empirical joint distribution corresponds to correlated as opposed to independent play. This type of correlation is not very appealing. In particular, agents don't seem to be smart enough to recognize cycles which they could exploit. Hence the attractive property of convergence to a Nash equilibrium can be misleading if the equilibrium is mixed.
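To make fictitious play concrete, here is a small simulation sketch (my addition, not from the lecture) for matching pennies; the initial weights of 1 on each action and the deterministic tie-breaking rule are assumed choices. The empirical frequency of Heads approaches 1/2, the unique mixed equilibrium, even though realized play follows deterministic runs rather than i.i.d. randomization.

```python
# Sketch: fictitious play in matching pennies. Each player best-responds to the
# empirical mixture of the opponent's past play.
ACTIONS = ("H", "T")

def u1(a1, a2):  return 1 if a1 == a2 else -1     # player 1 wants to match
def u2(a1, a2):  return -u1(a1, a2)               # player 2 wants to mismatch

def best_reply(weights, utility):
    """weights: empirical weights on the opponent's actions; utility(own, other)."""
    total = sum(weights.values())
    expected = {a: sum(utility(a, b) * w / total for b, w in weights.items())
                for a in ACTIONS}
    return max(expected, key=expected.get)        # ties broken in favor of "H"

w1 = {a: 1.0 for a in ACTIONS}    # player 1's weights over player 2's past actions
w2 = {a: 1.0 for a in ACTIONS}    # player 2's weights over player 1's past actions
count_H = 0
for t in range(10000):
    a1 = best_reply(w1, u1)
    a2 = best_reply(w2, lambda own, other: u2(other, own))
    w1[a2] += 1
    w2[a1] += 1
    count_H += (a1 == "H")

print(count_H / 10000)            # close to 1/2: empirical frequencies approach the mixed NE
```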

Some Basic Principles of Game Theory - The Harvard Game Theory Course


TL;DR

Zero-sum game: a you-lose-I-win situation - the harm done to others' interests is exactly your own gain.

Non-zero-sum games: 1. Negative-sum game: both sides lose; every participant's interests are harmed.

2. Positive-sum game: a win-win; both sides' interests are protected.

(The author favors turning competition into a positive-sum game through cooperation.) The full text runs about 1,284 characters and takes roughly four minutes to read.

Main text

1. Nash equilibrium: each party to the game chooses the strategy that is best for itself, without regard for the others' interests.

A Nash equilibrium is a profile of every participant's best strategies. It does not necessarily maximize each player's own interest, but it is a stable state in which no player can gain by unilaterally changing strategy.

(The McDonald's-and-KFC site-selection example: when similar businesses cluster in one location, fiercer competition is unavoidable. As a result each business has to raise its competitiveness - clarify its market position, study consumer demand in depth, improve its products, service and promotion - and build a brand image that distinguishes it from the neighbouring stores.

If every competitor does this, they can play to complementary strengths and create a magnet effect,

retaining their existing customers while attracting new ones.)

A Nash equilibrium requires every participant to be rational.

Depending on whether the players can reach a binding agreement, games are divided into cooperative and non-cooperative games.

Cooperative games are positive-sum games; non-cooperative games divide into zero-sum and negative-sum games.

Cooperative game: the two sides cooperate, or compromise with each other, so that both sides' interests increase,

or at the very least one side's interests increase while the other side's are not harmed.

Zero-sum game: in a zero-sum game the participants' profits and losses sum to exactly zero; the winners' profit comes from the losers' losses, and there is no possibility of cooperation between the two sides.

(The simplest example is gambling: what the winners at the table - including, I would say, the house - win is exactly what the losers lose.) To put it in a single idiom: you lose, I win.

A non-zero-sum game may be either a positive-sum game or a negative-sum game.

(These are still the same three concepts as above, only classified by a different criterion - don't get confused.) A positive-sum game is a win-win, while a negative-sum game leaves both sides worse off.

Negative-sum game: game theory accepts that everyone has self-interested motives and that all behavior aims at maximizing one's own interest.

At the same time, however, the essence of a game lies in the mutual influence and interdependence of the participants' strategies.

Helping others can sometimes turn out to help yourself and maximize your own payoff.

Formalization
• States are identified as (A, B, C)
• Their capabilities are (a,b,c)
– If B helps A, probability that A wins is (a + b)/(a + b + c) = PBA
Three or Four Important Concepts
Beliefs: Probability distribution over strategies of other players
Dominant Strategy
• One best strategy no matter what your opponent does
The world of our constructs is therefore the
desired island that is exempt from the flux
of blind and aimless causation.
Applying Formal Theory
• Goal (research question)
• Story (model)
• Formalization (abstraction)
• Analysis (manipulation)
• Solution (definition of equilibrium)
• Translation (external and internal validity)
GOV 2005: Game Theory
Section 1
Alexis Diamond adiamond@
Agenda
• What is game theory?
• Exercise: convergence to equilibrium
• Key terms and definitions
• Formal theory: what Leo has to say about it
• How do you do it (using formal theory)
• Application: Alliance Formation
• Final thoughts
– If B helps C, probability that C wins is (b + c)/(a + b + c) = PBC
• Amount that B values a victory by A is UBA
• Cost to B of helping A is KBA
– Cost of neutrality …
• Calibration, comparative statics, and case studies are all ways of assessing model validity
Final thoughts
• Given B is indifferent to allying when [b/(a + b + c)] (UBA -UBC) = (KBA -KBC), and
What is the point at which B is indifferent?
PBA(UBA) + (1 − PBA)(UBC) − KBA = PBC(UBC) + (1 − PBC)(UBA) − KBC
Simplifies to...
(PBA + PBC -1)(UBA -UBC) = (KBA -KBC) Simplifies to...
What Defines Game Theory?
Formal Analysis of Strategic Settings
• Mathematically precise
• List of players
• Complete description of allowable moves
• Description of information
• Specification of how actions cause outcomes
• Specification of players' preferences
positively associated with probability of winning – Each is rational and has its own interests at heart
Example taken from Altfeld and Bueno de Mesquita 1979
[b/(a + b + c)] (UBA − UBC) = (KBA − KBC)
Translation: Validity
• What is the point at which B is indifferent? [b/(a + b + c)] (UBA -UBC) = (KBA -KBC)
Nash equilibrium: When each player is playing a best response to
the strategies of the others
Key Terms and Definitions
• Normal form and extensive form
• B vs. nature: B moves, then either A or C wins
Analysis: Whither Alliances?
• Adopt the perspective of a player: B
• B’s utility from an alliance with A = PBA(UBA) + (1-PBA)(UBC) - KBA (1)
• [b/(a + b + c)] = resources B can contribute
• (UBA − UBC) = B's motivation for A vs. C
• (KBA − KBC) = B's costs for A vs. C
Equilibrium, perturbed: • If the left-hand side is > the right-hand side
• Pareto optimality: no free lunch
Formal Theory: What’s it good for?
“We understand only what we make” --Leo Strauss
We have absolutely certain or scientific knowledge only of those subjects of which we are the causes… The construction must be conscious construction; it is impossible to know a scientific truth without knowing at the same time that we have made it.
Example: Alliance Formation
• Goal:
– Gain insight into dynamics of power and motivation
– What motivates countries to form alliances?
• Story:
– Setting: 3 countries fighting a war – Each has power (military capability), which is
then B would rather partner with A
• What real-world events might prompt this?
• [b/(a + b + c)] = resources B can contribute • (UBA -UBC) = B’s motivation for A vs. C • (KBA -KBC) = B’s costs for A vs. C
The decision to help in a dispute depends on one’s ability to influence the outcome, one’s level of motivation, and the costliness of getting involved
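To illustrate (my addition, with made-up numbers), the sketch below evaluates B's expected utility from helping A versus helping C together with the indifference condition from the slides; the difference between the two utilities equals the gap between the left- and right-hand sides of that condition.

```python
# Sketch: B's expected utility from joining A versus joining C, using
# P_BA = (a + b)/(a + b + c) and P_BC = (b + c)/(a + b + c). All numbers are illustrative.
a, b, c = 5.0, 3.0, 4.0            # capabilities of A, B, C
U_BA, U_BC = 10.0, 4.0             # how much B values a victory by A / by C
K_BA, K_BC = 2.0, 1.0              # B's cost of helping A / helping C

total = a + b + c
P_BA = (a + b) / total             # prob. A wins if B helps A
P_BC = (b + c) / total             # prob. C wins if B helps C

utility_help_A = P_BA * U_BA + (1 - P_BA) * U_BC - K_BA
utility_help_C = P_BC * U_BC + (1 - P_BC) * U_BA - K_BC

# indifference condition from the slides: [b/(a+b+c)] (U_BA - U_BC) = (K_BA - K_BC)
lhs = (b / total) * (U_BA - U_BC)
rhs = K_BA - K_BC
print(utility_help_A, utility_help_C, lhs, rhs)   # B helps A whenever lhs > rhs
```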