# Turbo code


Curator: Sylvie Kerouédan

Turbo codes are error-correcting codes with performance close to the Shannon theoretical limit [SHA]. They were invented at ENST Bretagne (now TELECOM Bretagne), France, in the early 1990s [BER]. The encoder is formed by the parallel concatenation of two convolutional codes separated by an interleaver, or permuter. The data received from the channel are decoded by an iterative process involving the two corresponding decoders. Each elementary decoder passes to the other soft (probabilistic) information about each bit of the sequence to be decoded. This soft information, called extrinsic information, is updated at each iteration.

Figure 1: Concatenated encoder and decoder

## Precursor

In the 1960s, Forney [FOR] introduced the concept of concatenation to obtain coding and decoding schemes with high error-correction capacity. Typically, the inner code is a convolutional code whose decoder, based on the Viterbi algorithm, is able to process soft information, that is, probabilities or, in practice, logarithms of probabilities. The outer code is a block code, typically a Reed-Solomon code, and its decoder works with the binary decisions supplied by the inner decoder, as shown in Figure 1. As the inner decoder tends to deliver errors in bursts, the role of the deinterleaver is to spread these errors so as to make the outer decoding more efficient.

Though the minimum Hamming distance $$d_{min}$$ of such concatenated schemes is very large, their performance is not optimal, for two reasons. First, some information is lost because the inner decoder is unable to provide the outer decoder with soft information. Second, while the outer decoder benefits from the work of the inner one, the converse is not true. The decoder operation is clearly asymmetric.

To allow the inner decoder to produce soft decisions instead of binary ones, modified versions of the Viterbi algorithm (SOVA: Soft-Output Viterbi Algorithm) were proposed by Battail [BAT] and by Hagenauer and Hoeher [HAG]. But soft inputs are not easy to handle in a Reed-Solomon decoder.

## The genesis of turbo codes

The invention of turbo codes finds its origin in the will to compensate for the asymmetry of the concatenated decoder of Figure 1. To do this, the concept of feedback, a well-known technique in electronics, is applied between the two component decoders (Figure 2).

Figure 2: Decoding the concatenated code with feedback

The use of feedback requires Soft-In/Soft-Out (SISO) decoding algorithms for both component codes. As the SOVA algorithm was already available at the time of the invention, adopting convolutional codes for both components appeared natural. For reasons of bandwidth efficiency, serial concatenation is replaced with parallel concatenation. Indeed, the parallel concatenation of two codes with rates $$R_1$$ and $$R_2$$ gives a global rate $$R_p$$ equal to:

$R_p=\frac{R_1R_2}{1-(1-R_1)(1-R_2)}$

This rate is higher than that of a serially concatenated code, which is:

$R_s=R_1R_2$

for the same values of $$R_1$$ and $$R_2\ ,$$ and the lower these rates, the larger the difference. Thus, for the same component codes, parallel concatenation offers a better global rate, but this advantage vanishes as the rates come close to unity. Furthermore, in order to ensure a sufficiently large $$d_{min}$$ for the concatenated code, classical non-systematic non-recursive convolutional codes (Figure 3.a) have to be replaced with recursive systematic convolutional (RSC) codes (Figure 3.b).
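The two rate formulas can be checked numerically. The sketch below (function names are illustrative) evaluates $$R_p$$ and $$R_s$$ for equal component rates and shows that the parallel scheme always yields the higher global rate, with the gap shrinking as the rates approach unity:

```python
def parallel_rate(r1, r2):
    # Global rate of the parallel concatenation of two codes of rates r1, r2
    return (r1 * r2) / (1 - (1 - r1) * (1 - r2))

def serial_rate(r1, r2):
    # Global rate of the serial concatenation of the same two codes
    return r1 * r2

for r in (0.5, 0.75, 0.9):
    rp, rs = parallel_rate(r, r), serial_rate(r, r)
    print(f"R1 = R2 = {r}: Rp = {rp:.4f}, Rs = {rs:.4f}")
```

For $$R_1 = R_2 = 1/2$$ this gives $$R_p = 1/3$$ versus $$R_s = 1/4$$, while for $$R_1 = R_2 = 0.9$$ the two global rates differ by less than 0.01.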

Figure 3: (A) Non-systematic non-recursive convolutional code with polynomials 13,15. (B) Recursive systematic convolutional (RSC) code with polynomials 13 (recursivity), 15 (parity)
Figure 4: A turbo code with component codes 13, 15

What distinguishes the two codes is the minimum input weight $$w_{min}\ .$$ The input weight $$w$$ is the number of "1"s in an input sequence. Suppose that the encoder of Figure 3.a is initialized in state 0, then fed with an all-zero sequence except in one place (that is, $$w = 1$$). The encoder returns to state 0 once the 1 has been shifted out of its registers. We then have $$w_{min} = 1\ .$$ Under the same conditions, the encoder of Figure 3.b needs a second 1 to return to state 0. Without this second 1, the encoder acts as a pseudo-random generator with respect to its output $$Y\ .$$ Thus $$w_{min} = 2\ ,$$ a property that is very favourable regarding $$d_{min}$$ when parallel concatenation is implemented.

A typical turbo code is depicted in Figure 4. The data are encoded both in the natural order and in a permuted order by two RSC codes $$C_1$$ and $$C_2$$ that issue parity bits $$Y_1$$ and $$Y_2\ .$$ In order to encode finite-length blocks of data, RSC encoding is terminated by tail bits or uses tail-biting termination. The permutation has to be devised carefully because it has a strong impact on $$d_{min}\ .$$ The natural coding rate of a turbo code is $$R = 1/3$$ (three output bits for one input bit). To obtain higher coding rates, the parity bits are punctured. For instance, transmitting $$Y_1$$ and $$Y_2$$ alternately leads to $$R = 1/2\ .$$
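The difference between the two encoders of Figure 3 can be simulated directly. The sketch below (a minimal model, assuming the taps read from the octal polynomials 13 and 15 with the constant term applied to the current register input) feeds a weight-1 sequence to both variants and shows that only the non-recursive encoder flushes back to state 0, while a well-placed second 1 is needed to bring the RSC encoder back:

```python
def rsc_step(d, state, recursive=True):
    # One step of the memory-3 encoders of Figure 3, polynomials in octal:
    # 13 = 1 + D^2 + D^3 (recursion), 15 = 1 + D + D^3 (parity)
    s1, s2, s3 = state
    a = (d ^ s2 ^ s3) if recursive else d   # bit entering the shift register
    y = a ^ s1 ^ s3                         # parity output Y
    return y, (a, s1, s2)

def encode(bits, recursive=True):
    state, parity = (0, 0, 0), []
    for d in bits:
        y, state = rsc_step(d, state, recursive)
        parity.append(y)
    return parity, state

w1 = [1] + [0] * 10                  # input weight w = 1
_, st_nr = encode(w1, recursive=False)
_, st_r = encode(w1, recursive=True)
print(st_nr)   # (0, 0, 0): the non-recursive encoder returns to state 0
print(st_r)    # non-zero: the RSC encoder keeps running pseudo-randomly

w2 = [1, 0, 0, 0, 0, 0, 0, 1]        # a second 1 placed 7 positions later
_, st2 = encode(w2, recursive=True)
print(st2)     # (0, 0, 0): a weight-2 input can terminate the RSC encoder
```

The spacing of 7 between the two 1s matches the period of the recursion register, which is why this particular weight-2 input returns the RSC encoder to state 0.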

The original turbo code [BER] uses a parallel concatenation of convolutional codes. But other schemes like serial concatenation of convolutional codes [BEN] or algebraic turbo codes [PYN] have since been studied. More recently, non-binary turbo codes have also been proposed [DOU].

## Turbo-decoding

Decoding the code of Figure 4 by a global approach is not possible, because of the astronomical number of states to consider. A joint probabilistic process involving the decoders of $$C_1$$ and $$C_2$$ has to be devised. Because of latency constraints, this joint process is carried out iteratively in a digital circuit. Turbo decoding relies on the following fundamental criterion:

when several probabilistic machines work together on the estimation of a common set of symbols, all the machines must give the same decision, with the same probability, about each symbol, as a single (global) decoder would.

To make the composite decoder satisfy this criterion, the structure of Figure 5 is adopted. The double loop enables both component decoders to benefit from the whole redundancy. The term turbo was given to this feedback construction with reference to the principle of the turbo-charged engine.

Figure 5: A turbo decoder

The components are SISO decoders and permutation ($$\pi$$) and inverse permutation ($$\pi^{-1}$$) memories. The node variables of the decoder are Logarithms of Likelihood Ratios (LLRs). The LLR of a particular binary datum $$d$$ is defined as:

$LLR(d)=\ln\left(\frac{Pr(d=1)}{Pr(d=0)}\right)$
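As a quick numerical sketch of this definition (the function name is illustrative), the LLR is zero when both values of $$d$$ are equally likely, and its sign carries the hard decision:

```python
import math

def llr(p1):
    # LLR of a binary datum d, given p1 = Pr(d = 1); Pr(d = 0) = 1 - p1
    return math.log(p1 / (1.0 - p1))

print(llr(0.5))                       # 0.0: no information about d
print(llr(0.9) > 0, llr(0.1) < 0)     # sign gives the hard decision on d
```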

The role of a SISO decoder is to process an input LLR and, thanks to local redundancy (i.e. $$y_1$$ for DEC1, $$y_2$$ for DEC2), to try to improve it. The output LLR of a SISO decoder may simply be written as

$LLR_{out}(d)=LLR_{in}(d)+z(d)$

where $$z(d)$$ is the extrinsic information about $$d$$ provided by the decoder. If the decoder works properly, $$z(d)$$ is most of the time negative if $$d = 0$$ and positive if $$d = 1\ .$$ The composite decoder is constructed in such a way that each component decoder passes only its extrinsic terms to the other. The input LLR of a particular decoder is the sum of two terms: the information symbols $$(x)$$ stemming from the channel and the extrinsic term $$(z)$$ provided by the other decoder, which serves as a priori information. The information symbols are common inputs to both decoders, which is why the extrinsic information must not contain them. In addition, the outgoing extrinsic information does not include the incoming extrinsic information, in order to cut down correlation effects in the loop.

There are two families of SISO algorithms: those based on the SOVA [BAT][HAG], and those based on the MAP (also called BCJR or APP) algorithm [BAH] or its simplified versions. Turbo decoding is not optimal, because the iterative process has to begin, during the first half-iteration, with only part of the redundant information available (either $$y_1$$ or $$y_2$$). Fortunately, the loss due to this sub-optimality is small (less than 0.5 dB).
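The bookkeeping of this exchange can be illustrated on a deliberately degenerate toy: a single bit $$d$$ transmitted three times ($$x$$, $$y_1$$, $$y_2$$) over a BPSK/AWGN channel, so that each "SISO decoder" reduces to adding the LLR of its local redundancy. This is not the BCJR or SOVA algorithm (whose extrinsic terms evolve over iterations); the sample values are hypothetical, and only the rules LLR_in = channel + a priori and LLR_out = LLR_in + z remain:

```python
def channel_llr(r, sigma2):
    # LLR of a BPSK symbol (+1 for d = 1, -1 for d = 0) in AWGN of variance sigma2
    return 2.0 * r / sigma2

sigma2 = 1.0
x, y1, y2 = 0.8, 1.1, -0.2    # received samples for one bit d = 1 (hypothetical)

# First half-iteration: DEC1 has no a priori information yet (z2 = 0)
z2 = 0.0
llr_in1 = channel_llr(x, sigma2) + z2   # channel info + a priori from DEC2
z1 = channel_llr(y1, sigma2)            # extrinsic from DEC1's redundancy y1 only

# Second half-iteration: DEC2 uses DEC1's extrinsic as a priori
llr_in2 = channel_llr(x, sigma2) + z1
z2 = channel_llr(y2, sigma2)            # extrinsic from y2 only
llr_out2 = llr_in2 + z2                 # LLR_out = LLR_in + z
print(round(llr_out2, 1), 1 if llr_out2 > 0 else 0)
```

Note that each extrinsic term is built from the local redundancy alone, never from the shared channel symbol $$x$$ or from the incoming extrinsic term, exactly as required to limit correlation in the loop.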

## Applications of turbo codes

Figure 6: Applications of turbo codes

Figure 6 summarizes the normalized and proprietary applications of turbo codes known to date. Most of these applications are detailed and commented on in [GRA].

## References

• [BAH] L. R. Bahl, J. Cocke, F. Jelinek and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
• [BAT] G. Battail, “Pondération des symboles décodés par l’algorithme de Viterbi” (Weighting of the symbols decoded by the Viterbi algorithm), Annales des Télécommunications, vol. 42, no. 1-2, pp. 31-38, Jan. 1987.
• [BEN] S. Benedetto and G. Montorsi, “Serial concatenation of block and convolutional codes,” Electronics Letters, vol. 32, no. 10, pp. 887-888, May 1996.
• [BER] C. Berrou, A. Glavieux and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: turbo-codes,” Proc. International Conference on Communications, ICC ’93, Geneva, Switzerland, pp. 1064-1070, May 1993.
• [DOU] C. Douillard and C. Berrou, “Turbo codes with rate-m/(m+1) constituent convolutional codes,” IEEE Trans. Commun., vol. 53, no. 10, pp. 1630-1638, Oct. 2005.
• [FOR] G. D. Forney, Jr., Concatenated Codes, Cambridge, MA: MIT Press, 1966.
• [GRA] K. Gracie and M.-H. Hamon, “Turbo and turbo-like codes: principles and applications in telecommunications,” Proc. IEEE, vol. 95, no. 6, pp. 1228-1254, June 2007.
• [HAG] J. Hagenauer and P. Hoeher, “A Viterbi algorithm with soft-decision outputs and its applications,” Proc. IEEE Global Communications Conference, Globecom ’89, Dallas, Texas, pp. 4711-17, Nov. 1989.
• [PYN] R. Pyndiah, A. Glavieux, A. Picart and S. Jacq, “Near optimum decoding of product codes,” Proc. IEEE Globecom ’94, San Francisco, vol. 1, pp. 339-343, Nov.-Dec. 1994.
• [SHA] C. E. Shannon, “Probability of error for optimal codes in Gaussian channel,” Bell Syst. Tech. Journal, vol. 38, pp. 611-656, 1959.
