Once a majority of its followers have acknowledged that the entry has been replicated, the leader applies the entry to its local state machine and the request is considered committed. [1] [4] This event also commits all previous entries in the leader's log. Once a follower learns that a log entry is committed, it applies the entry to its own local state machine. This guarantees the consistency of the logs across all servers in the cluster, so that the Log Matching safety rule is respected. Timing is critical in Raft for electing and maintaining a steady leader over time, in order to keep the cluster available; stability is ensured by respecting the timing requirement of the algorithm: broadcastTime << electionTimeout << MTBF. Election safety is guaranteed by a simple restriction: a candidate cannot win an election unless its log contains all committed entries. To be elected, a candidate must contact a majority of the cluster, and given the rules for committing entries, this means that every committed entry is present on at least one of the servers the candidate contacts. Raft uses randomized election timeouts to ensure that split votes are rare and are resolved quickly. This reduces the chance of a split vote because servers do not become candidates at the same time: a single server will time out first, win the election, become leader, and send heartbeat messages to the other servers before any of the followers can become candidates. [1]
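To make the commit rule concrete, here is a minimal sketch in Go of a leader advancing its commit index once a majority of the cluster stores an entry, then applying the newly committed entries (and, implicitly, all entries before them) to its local state machine. The type layout and names such as matchIndex, commitIndex and lastApplied are assumptions drawn from common Raft implementations, not something this excerpt specifies.

```go
package main

import "fmt"

// logEntry is a single replicated log entry (hypothetical minimal shape).
type logEntry struct {
	term    int
	command string
}

// leader holds only the state needed for this sketch.
type leader struct {
	currentTerm int
	log         []logEntry // 1-based indexing emulated by leaving index 0 unused
	commitIndex int        // highest log index known to be committed
	lastApplied int        // highest log index applied to the state machine
	matchIndex  []int      // per follower: highest index known to be replicated on it
}

// advanceCommitIndex moves commitIndex to the highest index N such that a
// majority of the cluster (the leader plus enough followers) stores entry N.
func (l *leader) advanceCommitIndex() {
	for n := len(l.log) - 1; n > l.commitIndex; n-- {
		if l.log[n].term != l.currentTerm {
			continue // Raft only commits current-term entries by counting replicas
		}
		votes := 1 // the leader itself has the entry
		for _, m := range l.matchIndex {
			if m >= n {
				votes++
			}
		}
		if votes > (len(l.matchIndex)+1)/2 {
			l.commitIndex = n
			break
		}
	}
}

// applyCommitted applies every newly committed entry to the local state machine;
// committing an entry implicitly commits all entries that precede it.
func (l *leader) applyCommitted() {
	for l.lastApplied < l.commitIndex {
		l.lastApplied++
		fmt.Printf("apply %q to state machine\n", l.log[l.lastApplied].command)
	}
}

func main() {
	l := &leader{
		currentTerm: 2,
		log:         []logEntry{{}, {1, "x=1"}, {2, "x=2"}},
		matchIndex:  []int{2, 2, 1, 0}, // four followers; entry 2 is on 3 of 5 servers
	}
	l.advanceCommitIndex()
	l.applyCommitted() // applies "x=1" then "x=2"
}
```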
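The two election mechanisms mentioned above can also be sketched briefly: a randomized election timeout drawn from a window each time the timer is reset, and the vote-granting check that refuses candidates whose log is less up to date than the voter's. The 150-300 ms window and the last-term/last-index comparison are the usual Raft formulation and are assumptions here, since the excerpt does not give concrete numbers or the exact comparison.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randomizedElectionTimeout picks a fresh timeout each time a follower resets
// its timer, so that servers rarely time out simultaneously.
func randomizedElectionTimeout() time.Duration {
	return 150*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
}

// logIsUpToDate reports whether a candidate's log is at least as up to date as
// the voter's: the term of the last entry is compared first, then the log
// length. A voter withholds its vote otherwise, which is what ensures a
// winning candidate's log contains every committed entry.
func logIsUpToDate(candLastTerm, candLastIndex, myLastTerm, myLastIndex int) bool {
	if candLastTerm != myLastTerm {
		return candLastTerm > myLastTerm
	}
	return candLastIndex >= myLastIndex
}

func main() {
	fmt.Println("election timeout:", randomizedElectionTimeout())
	fmt.Println("grant vote:", logIsUpToDate(3, 7, 3, 9)) // false: voter's log is longer
}
```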

Typical values for broadcastTime range from 0.5 ms to 20 ms, which implies that the programmer sets the electionTimeout somewhere between 10 ms and 500 ms. It can take several weeks or months between individual server failures, which means the values are sufficient for a stable cluster. In the case of a leader crash, the logs may be left inconsistent, since some of the old leader's entries may not have been fully replicated across the cluster. The new leader then handles the inconsistency by forcing the followers to duplicate its own log. To do this, for each of its followers, the leader compares its log with the follower's log, finds the last entry on which they agree, deletes every entry coming after this critical entry in the follower's log, and replaces them with its own log entries. This mechanism restores log consistency in a cluster subject to failures.
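A minimal sketch of this log-repair step, assuming the leader can see both logs side by side: it keeps the longest prefix on which the two logs agree (by the Log Matching property, entries with the same index and term are identical), discards the follower's conflicting tail, and appends the leader's own entries. In a real implementation this happens incrementally through the AppendEntries consistency check rather than by comparing whole logs.

```go
package main

import "fmt"

// entry pairs a term with a command (hypothetical minimal shape).
type entry struct {
	term int
	cmd  string
}

// repairFollowerLog returns the follower's log forced into agreement with the
// leader's log: everything after the last index where the two logs agree is
// deleted and replaced by the leader's entries.
func repairFollowerLog(leaderLog, followerLog []entry) []entry {
	// Find the length of the prefix on which both logs agree. Comparing terms
	// is enough: same index and same term imply the same command.
	agree := 0
	for agree < len(leaderLog) && agree < len(followerLog) &&
		leaderLog[agree].term == followerLog[agree].term {
		agree++
	}
	// Keep the agreeing prefix, then append the leader's remaining entries.
	repaired := append([]entry{}, followerLog[:agree]...)
	return append(repaired, leaderLog[agree:]...)
}

func main() {
	leaderLog := []entry{{1, "x=1"}, {1, "y=2"}, {2, "x=3"}}
	followerLog := []entry{{1, "x=1"}, {1, "y=2"}, {1, "y=9"}, {1, "z=0"}} // stale tail
	fmt.Println(repairFollowerLog(leaderLog, followerLog)) // matches the leader's log
}
```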

. . .

[1] Pease M, Shostak R, Lamport L: Reaching agreement in the presence of faults. J Assoc Comput Mach 27: 228-234 (1980)
Cristian F, Aghili H, Strong R, Dolev D: Atomic broadcast: from simple message diffusion to Byzantine agreement. Proc 15th International Symposium on Fault-Tolerant Computing, Ann Arbor, Michigan, pp 200-206, June 1985. A revised version appears as IBM Tech Rep RJ5244.
Mohan C, Strong R, Finkelstein S: Method for distributed transaction commit and recovery using Byzantine agreement within clusters of processors. Proc 2nd ACM Symposium on Principles of Distributed Computing, Montreal, Quebec, Canada, pp 89-103, August 1983
Afek Y, Gafni E: Time and message bounds for election in synchronous and asynchronous complete networks. SIAM J Comput 20: 376-394 (1991)
Abu-Amara HH: Fault-tolerant distributed algorithm for election in complete networks.