

Journal of Information Science and Engineering, Vol. 24, No. 1, pp. 159-174


Improving the Efficacy of a Termination Detection Algorithm


Sathya Peri and Neeraj Mittal
Department of Computer Science 
The University of Texas at Dallas 
Richardson, TX 75083, U.S.A. 
E-mail: sathya.p@student.utdallas.edu; neerajm@utdallas.edu


    An important problem in distributed systems is detecting the termination of a distributed computation. A distributed computation is said to have terminated when all processes have become passive and all channels have become empty. We focus on two attributes of a termination detection algorithm. First, whether the underlying computation starts at a single process (a diffusing computation) or at multiple processes (a non-diffusing computation). Second, whether the detection algorithm must be initiated along with the computation (simultaneous initiation) or can be initiated at any time after the computation has started (delayed initiation). We show that any termination detection algorithm for a diffusing computation can be transformed into one for a non-diffusing computation. We also show that any termination detection algorithm for simultaneous initiation can be transformed into one for delayed initiation. We prove the correctness of our transformations and show that they have only a small impact on the performance of the given termination detection algorithm.
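The termination condition stated above can be expressed as a simple predicate over a global snapshot of the system. The sketch below is illustrative only and is not taken from the paper; the state representation (a map from process to an active/passive flag, and a map from channel to its in-flight messages) is an assumption made for the example.

```python
def has_terminated(process_active, channel_contents):
    """Termination predicate over a consistent global snapshot.

    process_active: dict mapping process id -> True if the process is active.
    channel_contents: dict mapping (src, dst) -> list of in-flight messages.

    The computation has terminated when every process is passive and
    every channel is empty.
    """
    all_passive = not any(process_active.values())
    channels_empty = all(len(msgs) == 0 for msgs in channel_contents.values())
    return all_passive and channels_empty


# Example: both processes are passive, but one message is still in flight,
# so the computation has not yet terminated.
active = {1: False, 2: False}
channels = {(1, 2): ["work"], (2, 1): []}
print(has_terminated(active, channels))  # False

# Once the message is delivered (and the receiver goes passive again),
# the predicate holds.
channels[(1, 2)].clear()
print(has_terminated(active, channels))  # True
```

Note that evaluating this predicate correctly in a running system is the hard part: the snapshot must be consistent, which is precisely what termination detection algorithms arrange for.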


Keywords: distributed system, termination detection, algorithm transformation, diffusing and non-diffusing computations, simultaneous and delayed initiations, worst-case and average-case message complexities, fault-tolerance
