On the Performance of Dijkstra’s Third Self-Stabilizing Algorithm for Mutual Exclusion


Viacheslav Chernoy¹, Mordechai Shalom², Shmuel Zaks¹

¹ Department of Computer Science, Technion, Haifa, Israel
  vchernoy@tx.technion.ac.il, zaks@cs.technion.ac.il
² Tel Hai Academic College, Upper Galilee, 12210, Israel
  cmshalom@telhai.ac.il
Abstract
In [7] Dijkstra introduced the notion of self-stabilizing algorithms, and presented three such algorithms for the problem of mutual exclusion on a ring of processors. The third algorithm is the most interesting of these three, but is rather non-intuitive. In [8] a proof of its correctness was presented, but the question of determining its worst case complexity (that is, providing an upper bound on the number of moves of this algorithm until it stabilizes) remained open. In this paper we solve this question, and prove an upper bound of O(n^2) (n being the size of the ring) for this algorithm's complexity. This complexity applies to a centralized as well as to a distributed scheduler.
1 Introduction
The notion of self-stabilization was introduced by Dijkstra in [7]. He considers a system consisting of a set of processors, each running a program of the form: if condition then statement. A processor is termed privileged if its condition is satisfied. A scheduler chooses any privileged processor, which then executes its statement (i.e., makes a move); if there are several privileged processors, the scheduler chooses any of them. Such a scheduler is termed centralized. A scheduler that chooses any subset of the privileged processors, which then make their moves simultaneously, is termed distributed. Thus, starting from any initial configuration, we get sequences of moves (termed executions). The scheduler thus determines all possible executions of the system.
A specific subset of the configurations is termed legitimate. The system is self-stabilizing if any possible execution will eventually reach (that is, after a finite number of moves) only legitimate configurations. The number of moves from any initial configuration until the system stabilizes is often referred to as stabilization time (see, e.g., [2, 6, 12, 15]).
Dijkstra studied in [7] the fundamental problem of mutual exclusion, for which the subset of legitimate configurations includes the configurations in which exactly one processor is privileged. In [7] the processors are arranged in a ring, so that each processor can communicate with its two neighbors using a shared memory, and not all processors use the same program. Three algorithms were presented (without correctness or complexity proofs) in which each processor could be in one of k > n, four, and three states, respectively (n being the number of processors). A centralized scheduler was assumed.
The analysis (correctness and complexity) of Dijkstra's first algorithm is rather straightforward. The algorithm is correct under a centralized scheduler for any k ≥ n−1, and under a distributed scheduler for any k ≥ n. The stabilization time under a centralized scheduler is Θ(n^2) (following [4] this is also the expected number of moves). There is little in the literature regarding the second algorithm, probably since it was extended in [11] to general trees, or since more attention was devoted to the third algorithm, which is rather non-intuitive. For this latter algorithm Dijkstra presented in [8] a proof of correctness (another proof was given in [10], and a proof of correctness under a distributed scheduler was presented in [3]). Actually, it is only after [8] that an extensive study of the area of self-stabilization began, and expanded in a variety of directions (see, e.g., [9, 13]).
Though while dealing with proofs of correctness one can sometimes also obtain complexity results, this was not the case with the proof of [8]. Referring to this third algorithm of Dijkstra, the authors of [1] state that "The complexity study of this algorithm has never been made"; to the best of our knowledge, this statement is also true today. Moreover, the authors claim that "Surprisingly, no exact result on worst case stabilization time has been published. The reason for this is perhaps that Dijkstra's algorithm does not monotonically converge towards a stabilized state. Some punctual bursts can momentarily lead it far from its goal". The authors of [1] then proceed to present an algorithm, similar to that of Dijkstra, and prove an upper bound of (23/4)n^2 for the stabilization time of their algorithm. A lower bound of Ω(n^2) for this algorithm is known (see [14], and also Note 1 in Section 2).
In this paper we provide an upper bound on the stabilization time of Dijkstra's third algorithm; specifically, we prove that the number of moves from any initial configuration until the system stabilizes is O(n^2). We do so by extending the proof of [8]. The result applies to a centralized scheduler as well as to a distributed one.
In Section 2 we present Dijkstra's algorithm, and outline the details of the proof of [8] needed for our discussion. In Section 3 we present observations regarding the proof of [8], and then present our proof of the upper bound.
2 Dijkstra's algorithm
In this section we present Dijkstra's third algorithm of [7] (to which we refer throughout this paper as Dijkstra's algorithm, or just the algorithm), and its proof of correctness from [8]. Following [8], our discussion assumes a centralized scheduler (we will get back to a distributed scheduler after Theorem 2).
In [7] there are n processors p_0, p_1, ..., p_{n−1} arranged in a ring; that is, the processors adjacent to p_i are p_{(i−1) mod n} and p_{(i+1) mod n}, for i = 0, 1, ..., n−1. Processor p_i has a local state x_i ∈ {0, 1, 2}. Two processors, namely p_0 and p_{n−1}, run special programs, while all intermediate processors p_i, 1 ≤ i ≤ n−2, run the same program. The programs of the processors are as follows (all arithmetic on states is modulo 3):
Program for processor p_0:
    if x_0 + 1 = x_1 then x_0 := x_0 − 1 end.
Program for processor p_i, 1 ≤ i ≤ n−2:
    if (x_i + 1 = x_{i−1}) or (x_i + 1 = x_{i+1}) then x_i := x_i + 1 end.
Program for processor p_{n−1}:
    if (x_{n−2} = x_0) and (x_{n−1} ≠ x_0 + 1) then x_{n−1} := x_0 + 1 end.
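To make the three programs concrete, the following is a minimal simulation sketch (ours, not from the paper). The scheduler here always picks the lowest-indexed privileged processor, which is one legal centralized scheduler among many; all state arithmetic is modulo 3.

```python
# Sketch (ours) of Dijkstra's third algorithm under one particular
# centralized scheduler. States are integers modulo 3.

def privileged(x, i):
    """True iff processor p_i may move in configuration x."""
    n = len(x)
    if i == 0:                                    # program for p_0
        return (x[0] + 1) % 3 == x[1]
    if i == n - 1:                                # program for p_{n-1}
        return x[n - 2] == x[0] and x[n - 1] != (x[0] + 1) % 3
    # program for p_i, 1 <= i <= n-2
    return (x[i] + 1) % 3 in (x[i - 1], x[i + 1])

def move(x, i):
    """Execute p_i's statement; return the new configuration."""
    x, n = list(x), len(x)
    if i == 0:
        x[0] = (x[0] - 1) % 3
    elif i == n - 1:
        x[n - 1] = (x[0] + 1) % 3
    else:
        x[i] = (x[i] + 1) % 3
    return x

def run(x):
    """Run until exactly one processor is privileged (a legitimate
    configuration); return (final configuration, number of moves).
    The paper proves some processor is always privileged, so priv
    below is never empty."""
    moves = 0
    while True:
        priv = [i for i in range(len(x)) if privileged(x, i)]
        if len(priv) == 1:
            return x, moves
        x = move(x, priv[0])    # lowest-indexed privileged processor moves
        moves += 1
```

For example, `run([1, 1, 0, 1, 2, 2])` stabilizes after a small number of moves, consistent with the O(n^2) bound proved below.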
Recall that the subset of legitimate configurations for this problem includes the configurations in which exactly one processor is privileged. The configurations x_0 = ··· = x_{n−1}, x_0 = ··· = x_i < x_{i+1} = ··· = x_{n−1}, and x_0 = ··· = x_i > x_{i+1} = ··· = x_{n−1} are legitimate (see also (7), (8) and (9) in Section 3.2).
It is proved in [8] that this algorithm self-stabilizes, and the system thus achieves mutual exclusion. In this proof the following notation is used. Given an initial configuration x_0, x_1, ..., x_{n−1}, and placing the processors on a line, consider each pair of neighbors p_{i−1} and p_i, for i = 1, ..., n−1 (note that though p_{n−1} and p_0 are neighbors on the ring, they are not considered here to be neighbors). Draw an arrow from x_i to x_{i−1} if x_i = x_{i−1} + 1 (termed left arrow), and from x_{i−1} to x_i if x_{i−1} = x_i + 1 (termed right arrow). In this paper we choose to denote a left arrow by '<' and a right arrow by '>'. Thus, for each two neighboring processors with states x_{i−1} and x_i, either x_{i−1} = x_i, or x_{i−1} < x_i, or x_{i−1} > x_i. Recall that x_{i−1} < x_i means that x_{i−1} is smaller by 1 than x_i, and x_{i−1} > x_i means that x_{i−1} is larger by 1 than x_i, where all arithmetic is modulo 3. For a given configuration C = x_0, x_1, ..., x_{n−1}, Dijkstra introduces the function
f(C) = #left arrows + 2·#right arrows.    (1)

Example. For n = 6, a possible initial configuration C is

C: x_0 = 1, x_1 = 1, x_2 = 0, x_3 = 1, x_4 = 2, x_5 = 2.

This configuration will thus be denoted as

C: 1 1 > 0 < 1 < 2 2.    (2)

For this configuration we have f(C) = 2 + 2·1 = 4. □
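The arrow sequence and the value f(C) = 4 above can be checked mechanically; the following helper (ours, not the paper's) computes the arrows and f under the modulo-3 conventions just described:

```python
# Helper (ours) computing the arrow sequence of a configuration and
# Dijkstra's function f(C) = #left arrows + 2 * #right arrows.

def arrows(x):
    """Return '<', '>' or '=' for each neighboring pair (x_{i-1}, x_i)."""
    out = []
    for i in range(1, len(x)):
        if (x[i - 1] + 1) % 3 == x[i]:
            out.append('<')          # left arrow: x_{i-1} smaller by 1
        elif (x[i] + 1) % 3 == x[i - 1]:
            out.append('>')          # right arrow: x_{i-1} larger by 1
        else:
            out.append('=')
    return out

def f(x):
    a = arrows(x)
    return a.count('<') + 2 * a.count('>')
```

On the configuration of (2), `arrows([1, 1, 0, 1, 2, 2])` yields `['=', '>', '<', '<', '=']` and `f` returns 4, matching the example.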
Note 1: Using the set of configurations x_0 > x_1 > ··· > x_{n−1} as initial configurations, one can easily derive the Ω(n^2) lower bound for this algorithm (the details are left to the reader). □

It follows immediately from (1) that for any configuration C of n processors

0 ≤ f(C) ≤ 2(n−1).    (3)

Equation (1) is used in [8] for the proof of correctness, as follows. There are eight possible moves of the system: one possible move for processor p_0, five possible moves for any intermediate processor p_i, 0 < i < n−1, and two possible moves for p_{n−1}. These eight possibilities are summarized in Table 1. In this table C_1 and C_2 denote the configurations before and after the move, respectively, and ∆f = f(C_2) − f(C_1). In the table we show only the local parts of these configurations. For example, in the first row, p_0 is privileged; therefore in C_1 we have x_0 < x_1, and in C_2 x_0 > x_1, and since one left arrow is replaced by a right arrow, ∆f = f(C_2) − f(C_1) = 1. It is proved in [8] that each execution is infinite (that is, the scheduler can always find at least one privileged processor). Then it is shown that p_0 makes an infinite number of moves. The execution is then partitioned into phases, each starting with a move of p_0 and ending just before its next move. It is argued that the function f decreases by at least 1 after each phase. By (3) it follows that the algorithm terminates after at most 2(n−1) phases.
Case | Processor | C_1                     | C_2                     | ∆f
-----+-----------+-------------------------+-------------------------+----
 0   | p_0       | x_0 < x_1               | x_0 > x_1               | +1
 1   | p_i       | x_{i−1} > x_i = x_{i+1} | x_{i−1} = x_i > x_{i+1} |  0
 2   | p_i       | x_{i−1} = x_i < x_{i+1} | x_{i−1} < x_i = x_{i+1} |  0
 3   | p_i       | x_{i−1} > x_i < x_{i+1} | x_{i−1} = x_i = x_{i+1} | −3
 4   | p_i       | x_{i−1} > x_i > x_{i+1} | x_{i−1} = x_i < x_{i+1} | −3
 5   | p_i       | x_{i−1} < x_i < x_{i+1} | x_{i−1} > x_i = x_{i+1} |  0
 6   | p_{n−1}   | x_{n−2} > x_{n−1}       | x_{n−2} < x_{n−1}       | −1
 7   | p_{n−1}   | x_{n−2} = x_{n−1}       | x_{n−2} < x_{n−1}       | +1

Table 1
Though the function f enables the proof of correctness of the algorithm, it cannot be used for analyzing its complexity (that is, the number of moves from any configuration until reaching a legitimate configuration). The reason is that in three cases (cases 1, 2 and 5 in Table 1) the function f does not change (that is, ∆f = 0), and therefore the change in the function cannot reflect the actual number of moves, which, as far as f tells us, might even be unbounded. Indeed, the proof of [8] takes into account the moves of processors p_0 and p_{n−1} and cases 3 and 4 of the intermediate processors p_i, but it does not consider cases 1, 2 and 5.
This is the point one has to overcome in order to modify Dijkstra's proof so that it also enables an estimate of the complexity of the algorithm. This is what we do in the next section, in which we first get more insight into the properties of the algorithm and its proof, and then introduce a new function with which we are able to measure its complexity.
3 Upper bound proof

In this section we present our main result for the upper bound of Dijkstra's algorithm for n > 2 (the case n = 2 is trivial). Our discussion includes three steps. We first introduce the function f̂, a slight modification of the function f of (1), with which we are able to derive more properties of the behavior of the algorithm. We then introduce a new function g and discuss its properties; this function enables us to deal with the complexity of the algorithm. Finally we put all of these properties together and provide a proof of the upper bound. These three steps are presented in Sections 3.1, 3.2 and 3.3, respectively.
3.1 Preliminaries

We now present some consequences of the proof of [8] that we will later use in our proof of the upper bound. We use the function f̂ defined on any configuration C as follows:

f̂(C) = (#left arrows − #right arrows) mod 3.    (4)

The connection between the functions f and f̂ is obvious, by (1): f(C) = #left arrows + 2·#right arrows = (#left arrows − #right arrows) + 3·#right arrows, hence

f̂(C) ≡ f(C) (mod 3).    (5)

We now discuss the properties of the function f̂ in a few lemmas and corollaries. Throughout the discussion we refer to the cases according to Table 1.
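The congruence (5), and also part (a) of Lemma 1 below, can be verified exhaustively for small rings; a quick script (ours, not part of the paper):

```python
# Exhaustive check (ours) of congruence (5) and of the fact that
# fhat(C) = 0 iff x_{n-1} = x_0, over all 3^n configurations of a small ring.

from itertools import product

def arrow_counts(x):
    """Return (#left arrows, #right arrows) over pairs (x_{i-1}, x_i)."""
    left = sum((x[i - 1] + 1) % 3 == x[i] for i in range(1, len(x)))
    right = sum((x[i] + 1) % 3 == x[i - 1] for i in range(1, len(x)))
    return left, right

n = 5
for x in product(range(3), repeat=n):
    left, right = arrow_counts(x)
    f = left + 2 * right                        # definition (1)
    fhat = (left - right) % 3                   # definition (4)
    assert fhat == f % 3                        # congruence (5)
    assert (fhat == 0) == (x[n - 1] == x[0])    # Lemma 1(a)
```

The second assertion holds because each left arrow raises the state by 1 and each right arrow lowers it by 1 modulo 3, so x_{n−1} − x_0 ≡ #left arrows − #right arrows (mod 3).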
Lemma 1: For any configuration C:

a. f̂(C) = 0 iff x_{n−1} = x_0.
b. Any move of processor p_i, 1 ≤ i ≤ n−2, does not change the function f̂ (that is, ∆f̂ = 0).
c. p_{n−1} is privileged according to case 7 iff f̂(C) = 0 and x_{n−2} = x_{n−1}.
d. p_{n−1} is privileged according to case 6 iff f̂(C) = 2 and x_{n−2} > x_{n−1}.

Proof.

a. f̂(C) = 0 iff the difference between the number of left arrows and right arrows is 0 modulo 3. Since '<' denotes an increase by 1 from x_{i−1} to x_i, and '>' denotes a decrease by 1, this holds iff x_{n−1} = x_0.
b. Follows immediately from Table 1 and (5).
c. p_{n−1} is privileged according to case 7 iff x_0 = x_{n−2} = x_{n−1}. By (a) (of this lemma) this happens iff f̂(C) = 0 and x_{n−2} = x_{n−1}.
d. p_{n−1} is privileged according to case 6 iff x_0 = x_{n−2} > x_{n−1}. It remains to show that f̂(C) = 2. By considering the configuration C′ of the first n−1 processors it follows by (a) that f̂(C′) = 0, and therefore f̂(C) = 2. □
Corollary 1: After processor p_{n−1} makes a move (case 6 or 7), we get to a configuration C for which f̂(C) = 1.

Proof. Note that by Table 1 and (5), move 6 decreases and move 7 increases f̂ by 1, and by Lemma 1(c,d) these moves start from f̂ = 2 and f̂ = 0, respectively. Hence after processor p_{n−1} moves, f̂(C) = 1. □
The next corollary follows from Corollary 1 and Table 1; it is actually Lemma 0 of [8].

Corollary 2: Starting from any configuration, in any prefix of an execution, the number of moves of p_{n−1} is bounded by the number of moves of p_0 plus 1.

The following lemma and corollary extend this property as follows:
Lemma 2: Starting from any configuration, and during any execution, the following holds:

a. For any two successive moves of processor p_{n−1} where the second move is of case 6, there is at least one move of processor p_0 between them.
b. For any two successive moves of processor p_{n−1} where the second move is of case 7, there are at least two moves of processor p_0 between them.

Proof. By Corollary 1, after the first of these two successive moves, f̂ becomes 1.

a. If the second move of processor p_{n−1} is of case 6, then between the two moves of p_{n−1}, f̂ had to change from 1 to 2. Since moves 1-5 do not change f̂, we conclude that processor p_0 moved at least once between them.
b. If the second move is of case 7, then between the two moves of p_{n−1}, f̂ had to change from 1 to 0. Since the only processor that can change the value of f̂ between these moves is p_0, and each of its moves changes f̂ by 1, p_0 had to move at least twice between them. □
By Lemma 2 it follows that:

Corollary 3: Starting from any configuration, in any prefix of an execution, the number of moves of case 6 plus twice the number of moves of case 7 of p_{n−1} is bounded by the number of moves of p_0 plus 1.
We summarize the properties of the function f̂ in Table 2. In this table we also include the function g discussed in Section 3.2, and denote the changes in the functions by ∆f̂ = f̂(C_2) − f̂(C_1) and ∆g = g(C_2) − g(C_1).

Case | Processor | C_1                       | C_2                       | ∆g           | ∆f̂
-----+-----------+---------------------------+---------------------------+--------------+-----
 0   | p_0       | x_0 < x_1                 | x_0 > x_1                 | n−2          | +1
 1   | p_i       | x_{i−1} > x_i = x_{i+1}   | x_{i−1} = x_i > x_{i+1}   | −1           |  0
 2   | p_i       | x_{i−1} = x_i < x_{i+1}   | x_{i−1} < x_i = x_{i+1}   | −1           |  0
 3   | p_i       | x_{i−1} > x_i < x_{i+1}   | x_{i−1} = x_i = x_{i+1}   | 5−3n ≤ −1    |  0
 4   | p_i       | x_{i−1} > x_i > x_{i+1}   | x_{i−1} = x_i < x_{i+1}   | 3i−3n+5 ≤ −1 |  0
 5   | p_i       | x_{i−1} < x_i < x_{i+1}   | x_{i−1} > x_i = x_{i+1}   | −3i+2 ≤ −1   |  0
 6   | p_{n−1}   | x_{n−2} > x_{n−1}, f̂ = 2 | x_{n−2} < x_{n−1}, f̂ = 1 | n−2          | −1
 7   | p_{n−1}   | x_{n−2} = x_{n−1}, f̂ = 0 | x_{n−2} < x_{n−1}, f̂ = 1 | 2n−4         | +1

Table 2
3.2 The function g

We now introduce the function g. This function decreases by at least 1 during each move of any intermediate processor p_i (cases 1-5). Unfortunately, moves of processors p_0 and p_{n−1} increase g. However, by combining the results of Section 3.1 with the properties of g, we manage to derive the upper bound on the number of moves to reach stabilization.
Given a configuration C = x_0, x_1, ..., x_{n−1}, we define the function g(C) as follows:

g(C) = Σ_{1≤i≤n−1, x_{i−1}<x_i} (n+i−3) + Σ_{1≤i≤n−1, x_{i−1}>x_i} (2n−i−3).    (6)

That is, each left arrow at position i contributes n+i−3 to g, and each right arrow at position i contributes 2n−i−3.
Example.

• If in a configuration C, x_0 = x_1 = ··· = x_{n−1}, then g(C) = 0.    (7)
• If in a configuration C, x_0 = ··· = x_{i−1} < x_i = ··· = x_{n−1}, then g(C) = n+i−3.    (8)
• If in a configuration C, x_0 = ··· = x_{n−i−1} > x_{n−i} = ··· = x_{n−1}, then g(C) = n+i−3.    (9)
• If in a configuration C, x_0 < x_1 = ··· = x_{n−2} > x_{n−1}, then g(C) = 2n−4.
• If in a configuration C, x_0 < x_1 < ··· < x_{n−1}, then
  g(C) = Σ_{i=1}^{n−1} (n+i−3) = (3/2)(n−1)(n−2).
• If n is odd and in a configuration C, x_0 > ··· > x_{(n−1)/2} < x_{(n+1)/2} < ··· < x_{n−1}, then
  g(C) = Σ_{i=1}^{(n−1)/2} (2n−i−3) + Σ_{i=(n+1)/2}^{n−1} (n+i−3) = (7/4)n^2 − 5n + 13/4.    (10)
• If n is even and in a configuration C, x_0 > ··· > x_{n/2} < x_{n/2+1} < ··· < x_{n−1}, then
  g(C) = Σ_{i=1}^{n/2} (2n−i−3) + Σ_{i=n/2+1}^{n−1} (n+i−3) = (7/4)n^2 − 5n + 12/4.    (11)

□
The changes in the function g in each of the eight possible moves are summarized in Table 2. These changes can be obtained by using the examples above. For example, for a move of case 0 we get by (8) and (9) that ∆g = (2n−4) − (n−2) = n−2, and for a move of case 5, ∆g = (2n−i−3) − (2n+2i−5) = 2−3i ≤ −1.
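The ∆g entries for the intermediate processors (cases 1-5 of Table 2) can also be confirmed by brute force; the following check (ours, not part of the paper) runs over all configurations of a small ring and verifies that every move of an intermediate processor decreases g by at least 1:

```python
# Brute-force check (ours) of definition (6) and the Delta-g column of
# Table 2 for intermediate processors: every such move decreases g.

from itertools import product

def g(x):
    n, total = len(x), 0
    for i in range(1, n):
        if (x[i - 1] + 1) % 3 == x[i]:       # left arrow at position i
            total += n + i - 3
        elif (x[i] + 1) % 3 == x[i - 1]:     # right arrow at position i
            total += 2 * n - i - 3
    return total

n = 5
for x in product(range(3), repeat=n):
    for i in range(1, n - 1):
        if (x[i] + 1) % 3 in (x[i - 1], x[i + 1]):   # p_i privileged
            y = list(x)
            y[i] = (y[i] + 1) % 3                    # p_i's move
            assert g(y) <= g(x) - 1                  # cases 1-5 of Table 2
```

Spot checks agree with the examples above; e.g., for n = 5, g of the all-equal configuration is 0 as in (7), and a single left arrow at position 1 contributes n+1−3 = 3 as in (8).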
Lemma 3: For any configuration C, 0 ≤ g(C) ≤ (7/4)n^2 − 5n + 13/4.

Proof. For any 1 ≤ i ≤ n−1, the following holds:

n−2 ≤ n+i−3 ≤ 2(n−2),
n−2 ≤ 2n−i−3 ≤ 2(n−2).

This, together with (7), implies min_C g(C) = 0. The maximal value of g is attained at a configuration C in which the first half of the arrows point to the right and all others point to the left. Formally, since n+i−3 ≤ 2n−i−3 ⇔ 2i ≤ n, we have:

max_C g(C) = Σ_{i=1}^{n−1} max(n+i−3, 2n−i−3) = Σ_{i=⌊n/2⌋+1}^{n−1} (n+i−3) + Σ_{i=1}^{⌊n/2⌋} (2n−i−3).

By (10) and (11) we conclude max_C g(C) ≤ (7/4)n^2 − 5n + 13/4. □
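Lemma 3 can be sanity-checked by brute force for small n; the following script (ours, not part of the paper) confirms that the maximum of g over all 3^n configurations matches (10) for odd n and (11) for even n:

```python
# Numerical check (ours) of Lemma 3: the maximum of g over all 3^n
# configurations equals (7n^2 - 20n + 13)/4 for odd n and
# (7n^2 - 20n + 12)/4 for even n.

from itertools import product

def g(x):
    n, total = len(x), 0
    for i in range(1, n):
        if (x[i - 1] + 1) % 3 == x[i]:       # left arrow
            total += n + i - 3
        elif (x[i] + 1) % 3 == x[i - 1]:     # right arrow
            total += 2 * n - i - 3
    return total

for n in (4, 5, 6, 7):
    best = max(g(x) for x in product(range(3), repeat=n))
    c = 13 if n % 2 else 12
    assert best == (7 * n * n - 20 * n + c) // 4
```

For instance, for n = 5 the maximum is 22, attained by the "V-shaped" configuration of (10).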
3.3 Main contribution

We now turn to present our main result. Following the proof of [8] we know that starting from any initial configuration the algorithm will reach a legitimate configuration in finite time. We are now ready to measure this time in the following theorem.
Theorem 1: Assume the system starts from an initial configuration C, and that an execution of Dijkstra's algorithm reaches a legitimate configuration in T moves. If within these first T moves there are exactly x, y and z moves of cases 0, 6 and 7, respectively (the cases refer to Table 2), then

T ≤ g(C) + x(n−1) + y(n−1) + z(2n−3).
Proof. According to Table 2, each move of any intermediate processor p_i decreases the function g by at least 1 (cases 1-5), and each of the x, y, z moves of cases 0, 6, 7 increases the function g by n−2, n−2, 2n−4, respectively. Therefore the total number of moves performed by all the intermediate processors p_i is bounded by g(C) + x(n−2) + y(n−2) + z(2n−4); otherwise we would reach a configuration C′ with g(C′) < 0, which contradicts Lemma 3.

Since the total number of moves performed by p_0 and p_{n−1} is exactly x + y + z, it follows that

T ≤ g(C) + x(n−2) + y(n−2) + z(2n−4) + (x + y + z) = g(C) + x(n−1) + y(n−1) + z(2n−3). □
In the discussion in Section 2 we mentioned that the number of phases is bounded by 2(n−1), and therefore x ≤ 2(n−1). In addition, by Corollary 2 it follows that y + z ≤ x + 1. Therefore we get

T ≤ g(C) + x(n−1) + y(n−1) + z(2n−3)
  ≤ max g + x(n−1) + (y+z)·max(n−1, 2n−3)
  = max g + x(n−1) + (x+1)(2n−3)
  = max g + x(n−1+2n−3) + (2n−3)
  ≤ max g + 2(n−1)(3n−4) + 2n−3
  = max g + 6n^2 − 12n + 5
  ≤ (7/4)n^2 − 5n + 13/4 + 6n^2 − 12n + 5
  = (31/4)n^2 − 17n + 33/4
  ≤ (31/4)n^2.
Note that we achieved this upper bound by combining the properties of the function g with the original proof of [8] (in particular Corollary 2), and without considering the function f̂.
If we also use the function f̂, then we can apply a similar argument and use Corollary 3. By this corollary we have y + 2z ≤ x + 1, and therefore:

T ≤ g(C) + x(n−1) + y(n−1) + z(2n−3)
  ≤ max g + x(n−1) + n(y+2z) − (y+3z)
  ≤ max g + x(n−1) + n(y+2z)
  ≤ max g + x(n−1) + n(x+1)
  = max g + x(2n−1) + n
  ≤ max g + 2(n−1)(2n−1) + n
  = max g + 4n^2 − 5n + 2
  ≤ (7/4)n^2 − 5n + 13/4 + 4n^2 − 5n + 2
  = (23/4)n^2 − 10n + 21/4
  ≤ (23/4)n^2.
A more careful analysis can be used to show an upper bound of 4n^2 ([5]). Therefore the following theorem holds:

Theorem 2: Starting from any initial configuration, any execution of Dijkstra's algorithm reaches a legitimate configuration in at most O(n^2) moves.
Recall that the analysis assumed a centralized scheduler. We argue that the same bound also applies to a distributed scheduler. This is the case since it can be shown that any move done concurrently by k > 1 processors can be simulated by a sequence of exactly k moves of individual processors (see [3]); this follows also from the fact that it is not possible that all n processors are privileged simultaneously.
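The distributed-scheduler claim can also be exercised empirically. The following experiment (ours, purely illustrative) lets a random nonempty subset of the privileged processors move simultaneously, with every mover reading the pre-step states, and checks that small rings still stabilize; the semantics of simultaneous moves here is our reading of the composite-atomicity model of [3].

```python
# Empirical illustration (ours) of stabilization under a distributed
# scheduler: at each step an arbitrary nonempty subset of the privileged
# processors moves, each reading the states as they were before the step.

import random

def privileged(x, i):
    n = len(x)
    if i == 0:
        return (x[0] + 1) % 3 == x[1]
    if i == n - 1:
        return x[n - 2] == x[0] and x[n - 1] != (x[0] + 1) % 3
    return (x[i] + 1) % 3 in (x[i - 1], x[i + 1])

def distributed_step(x, movers):
    """All processors in `movers` execute simultaneously on the old state."""
    y, n = list(x), len(x)
    for i in movers:
        if i == 0:
            y[0] = (x[0] - 1) % 3
        elif i == n - 1:
            y[n - 1] = (x[0] + 1) % 3
        else:
            y[i] = (x[i] + 1) % 3
    return y

def stabilizes(x, rng, cap=1000):
    """True if a legitimate configuration is reached within cap steps."""
    for _ in range(cap):
        priv = [i for i in range(len(x)) if privileged(x, i)]
        if len(priv) == 1:
            return True
        movers = rng.sample(priv, rng.randrange(1, len(priv) + 1))
        x = distributed_step(x, movers)
    return False

rng = random.Random(0)
assert all(stabilizes([rng.randrange(3) for _ in range(6)], rng)
           for _ in range(100))
```

This is only a randomized sanity check on one ring size, not a substitute for the simulation argument above.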
References

[1] J. Beauquier and O. Debas. An optimal self-stabilizing algorithm for mutual exclusion on bidirectional non uniform rings. In Proceedings of the Second Workshop on Self-Stabilizing Systems, pages 17.1–17.13, 1995.
[2] J. Beauquier, C. Johnen, and S. Messika. Brief announcement: Computing automatically the stabilization time against the worst and the best schedules. In 20th International Symposium on Distributed Computing (DISC), Stockholm, Sweden, September 18-20, pages 543–547, 2006.
[3] J.E. Burns, M.G. Gouda, and R.E. Miller. On relaxing interleaving assumptions. In Proceedings of the MCC Workshop on Self-Stabilizing Systems, MCC Technical Report No. STP-379-89, 1989.
[4] E.J.H. Chang, G.H. Gonnet, and D. Rotem. On the costs of self-stabilization. Information Processing Letters, 24:311–316, 1987.
[5] V. Chernoy, M. Shalom, and S. Zaks. Better bounds for Dijkstra's 3rd algorithm on mutual exclusion. In preparation.
[6] J.A. Cobb and M.G. Gouda. Stabilization of general loop-free routing. Journal of Parallel and Distributed Computing, 62(5):922–944, 2002.
[7] E.W. Dijkstra. Self stabilizing systems in spite of distributed control. Communications of the ACM, 17(11):643–644, 1974.
[8] E.W. Dijkstra. A belated proof of self-stabilization. Distributed Computing, 1:5–6, 1986.
[9] S. Dolev. Self-Stabilization. MIT Press, 2000.
[10] J.L.W. Kessels. An exercise in proving self-stabilization with a variant function. Information Processing Letters, 29:39–42, 1988.
[11] H.S.M. Kruijer. Self-stabilization (in spite of distributed control) in tree-structured systems. Information Processing Letters, 8:91–95, 1979.
[12] Y. Nakaminami, H. Kakugawa, and T. Masuzawa. An advanced performance analysis of self-stabilizing protocols: stabilization time with transient faults during convergence. In 20th International Parallel and Distributed Processing Symposium (IPDPS 2006), 25-29 April, Rhodes Island, Greece, 2006.
[13] M. Schneider. Self-stabilization. ACM Computing Surveys, 25:45–67, 1993.
[14] M. Tchuente. Sur l'auto-stabilisation dans un réseau d'ordinateurs. RAIRO Informatique Théorique, 15:47–66, 1981.
[15] T. Tsuchiya, Y. Tokuda, and T. Kikuno. Computing the stabilization times of self-stabilizing systems. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E83-A(11):2245–2252, 2000.