Peterson's Algorithm and the Multithreaded Version
Peterson's algorithm (AKA Peterson's solution) is a concurrent programming algorithm for mutual exclusion that allows two processes to share a single-use resource without conflict, using only shared memory for communication. It was formulated by Gary L. Peterson in 1981.[1] While Peterson's original formulation worked with only two processes, the algorithm can be generalized for more than two,[2] as shown below.
The algorithm
The algorithm uses two variables, flag and turn. A flag[n] value of true indicates that process n wants to enter the critical section. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by setting turn to 0.
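For concreteness, here is a minimal C11 sketch of the two-process protocol just described. The use of <stdatomic.h> with its default sequentially consistent ordering is an assumption made so that updates to flag and turn behave as the argument below requires, and the function names lock/unlock are illustrative only.

#include <stdatomic.h>
#include <stdbool.h>

// Shared state: flag[i] == true means process i wants to enter the
// critical section; turn records which process has been given priority.
static atomic_bool flag[2];
static atomic_int turn;

// Entry protocol for process i (i is 0 or 1).
static void lock(int i)
{
    int other = 1 - i;
    atomic_store(&flag[i], true);   // announce intent to enter
    // The point between these two stores is the "gate" (P0_gate/P1_gate)
    // referred to in the mutual exclusion argument below.
    atomic_store(&turn, other);     // give priority to the other process
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other) {
        // busy wait
    }
    // critical section follows
}

// Exit protocol for process i.
static void unlock(int i)
{
    atomic_store(&flag[i], false);
}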
The algorithm does satisfy the three essential criteria to solve the critical section problem, provided that changes to the variables turn, flag[0], and flag[1] propagate immediately and atomically. The while condition works even with preemption.[1]
The three criteria are mutual exclusion, progress, and bounded waiting.[3]
Mutual exclusion
P0 and P1 can never be in the critical section at the same time: If P0 is in its critical section, then flag[0] is true. In addition, either flag[1] is false (meaning P1 has left its critical section), or turn is 0 (meaning P1 is just now trying to enter the critical section, but graciously waiting), or P1 is at label P1_gate (trying to enter its critical section, after setting flag[1] to true but before setting turn to 0 and busy waiting). So if both processes are in their critical sections then we conclude that the state must satisfy flag[0] and flag[1] and turn = 0 and turn = 1. No state can satisfy both turn = 0 and turn = 1, so there can be no state where both processes are in their critical sections. (This recounts an argument that is made rigorous in [4].)
Progress
Progress is defined as follows: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which process will enter its critical section next, and this selection cannot be postponed indefinitely.[3] A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section.
Bounded waiting
Bounded waiting means that "there exists a bound or limit on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted".[3] In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section: after a waiting process has given priority to its competitor by setting turn, the competitor will run to completion and set its flag to false, thereby allowing the waiting process to enter the critical section.
Filter algorithm: Peterson's algorithm for N processes
The filter algorithm generalizes Peterson's algorithm to N processes. It uses N-1 waiting levels, numbered 0 through N-2, each of which acts as another "waiting room" before the critical section. Each level allows at least one process to advance, while keeping at least one process waiting whenever more than one process contends at that level, so that at most one process at a time can pass every level and enter the critical section.
// initialization
level[N] = { -1 };     // current level of processes 0...N-1
waiting[N-1] = { -1 }; // the waiting process of each level 0...N-2

// code for process #i
for (l = 0; l < N-1; ++l) {
    level[i] = l;
    waiting[l] = i;
    while (waiting[l] == i &&
           (there exists k ≠ i, such that level[k] ≥ l)) {
        // busy wait
    }
}

// critical section

level[i] = -1; // exit section
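The "there exists k ≠ i" test above is pseudocode. Below is a minimal C11 sketch of how it could be spelled out, assuming sequentially consistent atomics, a fixed process count N, and illustrative names (filter_init, filter_lock, filter_unlock, exists_conflict) that are not part of the original.

#include <stdatomic.h>
#include <stdbool.h>

#define N 4  // number of processes; chosen arbitrarily for the sketch

static atomic_int level[N];       // current level of process 0..N-1, -1 = not trying
static atomic_int waiting[N - 1]; // which process is waiting at each level 0..N-2

// Call once before starting the processes: mark everyone as not trying.
static void filter_init(void)
{
    for (int k = 0; k < N; ++k)
        atomic_store(&level[k], -1);
}

// Returns true if some process other than i has reached level l or higher.
static bool exists_conflict(int i, int l)
{
    for (int k = 0; k < N; ++k) {
        if (k != i && atomic_load(&level[k]) >= l)
            return true;
    }
    return false;
}

static void filter_lock(int i)
{
    for (int l = 0; l < N - 1; ++l) {
        atomic_store(&level[i], l);
        atomic_store(&waiting[l], i);
        while (atomic_load(&waiting[l]) == i && exists_conflict(i, l)) {
            // busy wait
        }
    }
    // critical section follows
}

static void filter_unlock(int i)
{
    atomic_store(&level[i], -1);
}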
Note
When working at the hardware level, Peterson's algorithm is typically not needed to achieve atomic access. Some processors have special instructions, like test-and-set or compare-and-swap, that, by locking the memory bus, can be used to provide mutual exclusion in SMP systems.
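As a sketch of the test-and-set approach, C11 exposes such an instruction portably through atomic_flag; the names spin_lock and spin_unlock below are assumptions for illustration.

#include <stdatomic.h>

static atomic_flag lock_taken = ATOMIC_FLAG_INIT;

// Spin until test-and-set reports the flag was previously clear,
// i.e. until this caller is the one that changed it from clear to set.
static void spin_lock(void)
{
    while (atomic_flag_test_and_set_explicit(&lock_taken, memory_order_acquire)) {
        // busy wait
    }
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock_taken, memory_order_release);
}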
Most modern CPUs reorder memory accesses to improve execution efficiency (see memory ordering for the types of reordering allowed). Such processors invariably give some way to force ordering in a stream of memory accesses, typically through a memory barrier instruction. Implementing Peterson's and related algorithms on processors that reorder memory accesses therefore generally requires such barriers, to keep the algorithm's loads and stores from executing in an incorrect order. Note that reordering of memory accesses can happen even on processors that don't reorder instructions (such as the PowerPC processor in the Xbox 360).
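In C11 terms, one way to request the needed ordering is to mark the accesses to flag and turn as sequentially consistent, which obliges the compiler to emit whatever barrier or locked instruction the target requires between the store to turn and the subsequent loads. The sketch below repeats the entry protocol from earlier with the ordering spelled out explicitly; it is one possible arrangement, not the only correct one.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool flag[2];
static atomic_int turn;

// Entry protocol for process i with the memory ordering made explicit.
// memory_order_seq_cst asks the compiler to emit the barrier the target
// needs so the store to turn becomes visible before flag[other] is read;
// with weaker orderings the store and the loads could be reordered and
// both processes could end up in the critical section.
static void lock_explicit(int i)
{
    int other = 1 - i;
    atomic_store_explicit(&flag[i], true, memory_order_seq_cst);
    atomic_store_explicit(&turn, other, memory_order_seq_cst);
    while (atomic_load_explicit(&flag[other], memory_order_seq_cst) &&
           atomic_load_explicit(&turn, memory_order_seq_cst) == other) {
        // busy wait
    }
}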
Most such CPUs also have some sort of guaranteed atomic operation, such as XCHG on x86 processors and load-link/store-conditional on Alpha, MIPS, PowerPC, and other architectures. These instructions are intended to provide a way to build synchronization primitives more efficiently than can be done with pure shared memory approaches.
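As a small example of building on such primitives, a lock-free counter can be written with C11's compare-and-swap, which typically compiles to an LL/SC retry loop or a single CAS instruction depending on the architecture; the function name atomic_increment is illustrative only.

#include <stdatomic.h>

static atomic_int counter;

// Classic read-modify-write retry loop: read the current value, compute
// the new one, and try to publish it; if another thread intervened, the
// failed compare-exchange refreshes 'old' and the loop tries again.
static void atomic_increment(atomic_int *c)
{
    int old = atomic_load(c);
    while (!atomic_compare_exchange_weak(c, &old, old + 1)) {
        // 'old' now holds the value written by the other thread; retry
    }
}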