The Viterbi decoding algorithm is designed to mitigate errors caused by these signal distortions. It estimates the most likely data sequence transmitted by the satellite, based on the observed received signal and knowledge of the encoder. This involves searching a large number of possible data sequences and selecting the one with the highest likelihood of being the transmitted data.
In the case of multipath fading, the Viterbi decoding algorithm can track and compensate for the different signal delays and interference levels caused by the multiple signal paths. By comparing the decoded data sequence with the original data sequence, it can also detect and correct errors that have occurred due to signal distortions.
The question is: why can't one of these codes be used instead of Convolutional codes and the Viterbi algorithm in GPS systems?
Based on the requirements of high data rate, channel characteristics, low computational complexity, real-time processing, and compatibility with modulation schemes, Convolutional codes with the Viterbi algorithm are well suited for use in GPS systems.
4.1. Convolutional codes vs. LDPC codes
LDPC codes are known for their efficiency; however, decoding LDPC codes is more challenging than decoding Convolutional codes. Furthermore, LDPC codes require a larger memory capacity than Convolutional codes to store the parity check matrix used in the decoding process.
To understand why LDPC codes require more memory than Convolutional codes, consider the following:
In LDPC codes [22,23], the code matrix, denoted as the parity matrix H, plays a vital role in establishing the relationship between the message and the codeword. The parity matrix consists of (N-K) rows and N columns, where N represents the block length and K represents the message length. This matrix is essential for both encoding and decoding, as it guides the transformation of messages into codewords and vice versa. The memory required to store the code matrix is directly proportional to the number of elements in the matrix, which is (N-K) x N.
The decoding process for LDPC codes uses the message passing algorithm. This decoding method employs the parity matrix H and specific update rules to revise the probabilities of each bit in the codeword. These updates are iterated until a solution is reached or a maximum number of iterations is attained. The memory needed for decoding with the message passing algorithm is proportional to the number of elements in H multiplied by the number of iterations I, which yields ((N-K) x N) · I.
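As a toy illustration of the role of H (the matrix below is a small (7,4) example chosen for exposition, not a GPS or standardized LDPC matrix), the following sketch shows how H defines valid codewords and how its storage grows as (N-K) x N:

```python
# Toy (7,4) example: H has (N-K) rows and N columns, and a vector c is a
# valid codeword exactly when every parity check of H against c is 0 mod 2.

H = [[1, 1, 0, 1, 1, 0, 0],   # (N-K) x N = 3 x 7 parity-check matrix
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

c = [1, 0, 1, 1, 0, 1, 0]      # a candidate codeword of length N = 7

checks = [sum(h * b for h, b in zip(row, c)) % 2 for row in H]
print("parity checks:", checks)                  # [0, 0, 0] -> valid codeword
print("bits to store H:", len(H) * len(H[0]))    # (N-K) x N = 21 elements
```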
In Convolutional codes [10,24], the generator matrix G is a (k, n) matrix consisting of polynomial entries, where each row can be written out as a vector of binary coefficients. The generator matrix G defines how the input bits and shift register bits are combined to generate the output bits. Each row in G corresponds to one output bit, and each column represents one input bit or one shift register bit. For example, if
G = [1 1 0; 1 0 1]
(rows corresponding to the polynomials 1 + D and 1 + D^2), it implies that the encoder has k = 2 output bits, n = 1 input bit, and m = 2 shift register bits. This is because the generator matrix for a convolutional code describes how the input bits are encoded into output bits using shift registers and modulo-2 additions. The number of rows of the generator matrix equals the number of output bits per input bit, which is denoted by k. The number of columns of the generator matrix equals the number of input bits plus the number of shift register bits, which is denoted by n + m. The degree of each row of the generator matrix equals the number of feedback connections from the shift registers to that output bit [25].
In the previous example, the generator matrix has two rows, so k = 2. It has three columns, so n + m = 3. Since the first row has degree 1 and the second row has degree 2, there are two feedback connections in total, so m = 2. Therefore, n = 1.
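The following sketch (a hedged illustration built on the matrix reconstructed above, with illustrative function names) derives k, n + m, and m from G and encodes a short message with it:

```python
# Rows of G as binary coefficient vectors over [input, register 1, register 2];
# row 1 = 1 + D (degree 1), row 2 = 1 + D^2 (degree 2).
G = [[1, 1, 0],
     [1, 0, 1]]

k = len(G)                      # rows  -> output bits per input bit: 2
cols = len(G[0])                # cols  -> n + m = 3
m = max(max(i for i, g in enumerate(row) if g) for row in G)  # deepest tap: 2
n = cols - m                    # input bits: 1

def encode(bits):
    reg = [0] * m               # shift register, initially zero
    out = []
    for b in bits + [0] * m:    # append m zeros to flush the registers
        window = [b] + reg
        for row in G:           # one output bit per row: modulo-2 sum of taps
            out.append(sum(g & w for g, w in zip(row, window)) % 2)
        reg = [b] + reg[:-1]    # shift
    return out

print(k, n, m, encode([1, 0, 1, 0]))
```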
The memory required to store the generator matrix G of a Convolutional code depends on the size of the matrix. In general, the size of the matrix is determined by the rate and constraint length of the code.
To provide an example, consider a rate 1/2 Convolutional code with s = 64 states (m = 6 bits) and an LDPC code with rate 1/2 and a block length of N = 2048 bits. In this case, the generator matrix would have 2 rows and 7 columns. This is because each input bit is mapped to two output bits, and the encoder uses the current input bit as well as the six previous input bits to generate each pair of output bits. Here k = 2 (the number of output bits per input bit) and n = 1 (the number of input bits), and the number of columns is n + m = 1 + 6 = 7. The memory needed to store this matrix depends on the data type and format used. For example, if the matrix is stored as a binary array of 0s and 1s, each element takes 1 bit of memory, so the total size of the matrix is 2 x 7 = 14 bits.
Conversely, the memory required to store the parity matrix H of the LDPC code is proportional to (N-K) x N = 2097152 bits. Clearly, the LDPC code demands far more memory to store its code matrix than the Convolutional code. The block length is the same for the LDPC and Convolutional codes in this example, but for the encoding operation of the Convolutional code, the memory required to store the generator matrix G depends only on the number of memory elements m, not on the block length.
The memory requirements for decoding also depend on the decoding algorithm employed. For Viterbi decoding of Convolutional codes, the memory needed is proportional to s · N = 131072. This result comes from multiplying the number of states s = 64 by the block length N = 2048, and it is the memory required to store the survivor paths in the Viterbi decoding algorithm. Each survivor path is a sequence of N bits representing a possible input to the Convolutional encoder. The Viterbi decoder keeps track of s survivor paths at each decoding step and chooses the one with the minimum path metric as the most likely input. So, as mentioned before, the block length matters for the decoding memory but not for the encoding memory. In message passing decoding of LDPC codes, the memory required [26] is proportional to (N-K) x N · I = 209715200. This result comes from multiplying the number of parity bits (N-K) = 1024 by the block length N = 2048 and by the number of iterations I = 100, and it is the memory required to store the messages in the message passing decoding algorithm. Each message is a vector of N real numbers representing the likelihood ratios of a bit being 0 or 1. The message passing decoder exchanges messages between the variable nodes and the check nodes in each iteration and updates the posterior probabilities of the bits. The number of iterations I = 100 is an arbitrary choice; it depends on the desired decoding performance and complexity. More iterations give better performance but also higher complexity. There is no fixed rule for choosing the number of iterations, but common values are 10, 50, or 100.
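These four figures can be reproduced with a short back-of-the-envelope sketch (units follow the text: bits for matrix storage, stored elements for the two decoding-memory estimates):

```python
# Back-of-the-envelope reproduction of the memory estimates above.
N, K = 2048, 1024      # LDPC block length and message length (rate 1/2)
I = 100                # message passing iterations (an arbitrary choice)
k, n, m = 2, 1, 6      # convolutional code: output bits, input bits, memory bits
s = 2 ** m             # number of trellis states = 64

print("convolutional G storage:", k * (n + m), "bits")   # 2 x 7 = 14
print("LDPC H storage:", (N - K) * N, "bits")             # 1024 x 2048 = 2097152
print("Viterbi survivor paths:", s * N)                   # 64 x 2048 = 131072
print("message passing memory:", (N - K) * N * I)         # 209715200
```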
Through this comparison, it becomes evident that the memory needed for decoding LDPC codes is considerably higher than that required for Convolutional codes. Thus, the limited memory, in addition to other factors such as the processing power available in GPS receivers, makes it difficult to use LDPC codes. On the other hand, the Viterbi algorithm used with Convolutional codes offers lower memory requirements, a low-complexity alternative [27], and a good balance of error correction performance and computational efficiency for GPS receivers compared with LDPC codes. These are some of the advantages of using Convolutional codes over LDPC codes in GPS systems, and there are many more to consider.
4.2. Convolutional codes vs. BCH codes
BCH codes are a class of cyclic error-correcting codes that are constructed using polynomials over a finite field. They have precise control over the number of symbol errors correctable by the code and can be decoded easily using an algebraic method known as syndrome decoding. They are used in various applications such as satellite communications, DVDs, QR codes and quantum-resistant cryptography.
One way to compare the complexity of BCH codes and Convolutional codes is to look at their encoding and decoding algorithms.
To illustrate the details involved, we consider a (15, 7) BCH code and a (2, 1, 2) Convolutional code as examples. These codes have been chosen for their comparable error-correcting capabilities. This is a simplified illustration, and the actual complexity may vary depending on the implementation and the code parameters.
BCH encoding: It is the process of finding a polynomial that is a multiple of the generator polynomial, which is defined as the least common multiple of the minimal polynomials of some consecutive powers of a primitive element. The encoding can be either systematic or non-systematic, depending on how the message is embedded in the codeword polynomial.
To encode a message using a BCH code, one needs to choose a finite field GF(q), a code length n, and a design distance d, and then find a generator polynomial g(x) that has α, α^2, …, α^(d-1) as roots, where α is a primitive element of GF(q^m) and n = q^m - 1. The table of minimal polynomials in [28] can be used to find g(x) as the least common multiple of some of them. The message polynomial m(x) is then multiplied by g(x) to obtain the codeword polynomial c(x).
BCH decoding: It is the process of finding and correcting the errors in the received codeword using the syndromes, which are formed by evaluating the received polynomial at some powers of a primitive element. There are several algorithms for decoding BCH codes, such as Peterson's algorithm, the Forney algorithm, Sugiyama's algorithm, and the Euclidean algorithm. They involve finding the error locator polynomial and the error values, and then subtracting them from the received codeword to recover the original one.
To decode a received word r(x) using a BCH code, one needs to calculate the syndromes S_j = r(α^j) for j = 1, 2, …, 2t, where t is the number of errors that can be corrected. An algorithm such as Peterson's or the Euclidean algorithm is then used to find the error locator polynomial, which has the error locations as roots, and an algorithm such as Forney's is used to find the error values at those locations. Finally, the errors are corrected by subtracting them from the received word to obtain the original codeword.
For example, suppose we want to encode the message [1 0 1 0] using a binary BCH code with n = 15 and d = 7. We choose GF(2) as the base field, with the reducing polynomial z^4 + z + 1 and the primitive element α = z, as in [28]. The generator polynomial g(x) can then be found as the lcm of the minimal polynomials of the required powers of α, giving g(x) = x^8 + x^7 + x^6 + x^4 + 1.
Please note that the lcm, which stands for least common multiple, is responsible for ensuring that g(x) contains all the necessary roots required for BCH encoding.
Now, the message polynomial is m(x) = x^3 + x, so the codeword polynomial is
c(x) = m(x)g(x) = (x^3 + x)(x^8 + x^7 + x^6 + x^4 + 1) = x^11 + x^10 + x^8 + x^5 + x^3 + x.
The codeword is [0 0 0 0 1 0 1 1 1 0 1 1 0 0 0].
Suppose we receive the word r(x) = [0 0 0 0 1 0 1 0 1 0 1 1 0 0 0], which has two errors, at positions 8 and 10. The syndromes S_j = r(α^j) for j = 1, 2, …, 6 can then be computed, each reducing to a power of α.
Peterson's algorithm can then be used to find the error locator polynomial, and Forney's algorithm can be employed to find the error values at the located positions. Finally, the errors are corrected by subtracting them from the received word, which recovers the original codeword.
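The encoding step and the syndrome computation can be sketched in a few lines of Python (a hedged illustration, not the paper's implementation: it uses the (15, 7), t = 2 parameters, for which g(x) = lcm(m1, m3) has degree 8, and builds GF(16) from the reducing polynomial z^4 + z + 1 with α = z, as above):

```python
# Binary polynomials are lists of 0/1 coefficients; index i holds the
# coefficient of x^i. GF(16) elements are 4-bit integers.

def poly_mul_gf2(a, b):
    """Multiply two binary polynomials modulo 2 (carry-less multiplication)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

# Antilog table for GF(16) built from the primitive polynomial z^4 + z + 1.
EXP = []
x = 1
for _ in range(15):
    EXP.append(x)
    x <<= 1
    if x & 0x10:            # reduce modulo z^4 + z + 1 (binary 10011)
        x ^= 0b10011

def syndrome(r, j):
    """S_j = r(alpha^j): evaluate the received polynomial at alpha^j."""
    s = 0
    for i, bit in enumerate(r):
        if bit:
            s ^= EXP[(i * j) % 15]
    return s

m1 = [1, 1, 0, 0, 1]        # x^4 + x + 1, minimal polynomial of alpha
m3 = [1, 1, 1, 1, 1]        # x^4 + x^3 + x^2 + x + 1, minimal polynomial of alpha^3
g = poly_mul_gf2(m1, m3)    # g(x) = x^8 + x^7 + x^6 + x^4 + 1
msg = [0, 1, 0, 1]          # m(x) = x^3 + x for the message [1 0 1 0]
c = poly_mul_gf2(msg, g)    # c(x) = m(x) g(x)

print("g:", g)
print("c:", c)
print("syndromes:", [syndrome(c, j) for j in range(1, 5)])  # all 0 for a codeword
```

Flipping any bit of c makes the syndromes nonzero, which is exactly what the decoding algorithms above exploit to locate the errors.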
The complexity and the number of operations required for encoding and decoding BCH codes depend on the parameters of the code, such as the code length, the design distance, and the number of errors. Here are some general formulas for estimating the complexity:
For encoding, the complexity is proportional to the degree of the generator polynomial g(x), which is at most mt, where m is the extension degree of the finite field and t is the number of errors that can be corrected. The number of operations is O(2^m) for binary BCH codes and O(2^m log q) for non-binary BCH codes, where q is the size of the finite field.
For decoding, the complexity is proportional to the number of syndromes that need to be computed and processed, which is 2t; for binary BCH codes only t of them must be computed directly, since the even-indexed syndromes satisfy S_2j = S_j^2. The number of operations is O(n log n) for binary BCH codes and O(n log n log q) for non-binary BCH codes, where n is the code length.
The given example is a binary BCH code because the finite field used is GF(2), which has only two elements (0 and 1).
Our objective is to determine the computational workload involved in encoding and decoding a message in this example, in order to assess the complexity inherent in the BCH code and then compare it against the counterpart system built on Convolutional codes.
To find the number of encoding operations for the BCH code in this example, we multiply the message polynomial m(x) by the generator polynomial g(x) to obtain the codeword polynomial c(x). The degree of g(x) is 8, which is equal to mt, where m = 4 and t = 2. The number of operations is O(2^m) = O(2^4) = 16 for binary BCH codes. This means that at most 16 operations in GF(2) are needed to encode the message.
For decoding, the syndromes must be calculated, the error locator polynomial and the error values found, and the errors corrected. The number of operations depends on the number of errors and the algorithm used. For binary BCH codes, the number of operations is O(t · 2^m) for calculating the syndromes, O(t^3 · 2^m) for finding the error locator polynomial using Peterson's algorithm or O(t^2 · 2^m) using the Euclidean algorithm, O(t · 2^m) for finding the error values using Forney's algorithm, and O(t) for correcting the errors. The total number of operations is therefore O(t^3 · 2^m) for Peterson's algorithm or O(t^2 · 2^m) for the Euclidean algorithm. In this example, t = 2 and 2^m = 16, so the total number of operations is at most 128 for Peterson's algorithm or 64 for the Euclidean algorithm (note that these are maximum counts; the actual number of operations may be lower).
On the other hand, to encode the message [1 0 1 0] using a (2, 1, 2) convolutional code, it is necessary to use the state transition table and the encoder circuit illustrated in Figure 3, and then perform the following operations:
Set all memory registers to zero.
Feed the input bits one by one and shift the register values to the right.
Calculate the output bits using the generator polynomials g1 = (1,1,1) and g2 = (1,0,1).
Append two zero bits at the end of the message to flush the encoder.
For each input bit, we need to perform two additions and no multiplications. Counting the m zero bits needed to flush the shift registers, the total number of operations for encoding a message of length N is 2N + m.
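A minimal sketch of this encoder (assuming the standard (2,1,2) circuit with g1 = (1,1,1) and g2 = (1,0,1); conv_encode is an illustrative name, not from the paper):

```python
# (2,1,2) convolutional encoder: two register bits, two output bits per input.

def conv_encode(bits):
    """Encode a bit list with the (2,1,2) code; m = 2 zero bits flush the encoder."""
    s1 = s2 = 0                      # shift register contents, initially zero
    out = []
    for b in bits + [0, 0]:          # append two zeros to flush the registers
        out.append(b ^ s1 ^ s2)      # g1 = (1,1,1): input + both register bits
        out.append(b ^ s2)           # g2 = (1,0,1): input + second register bit
        s1, s2 = b, s1               # shift the register
    return out

print(conv_encode([1, 0, 1, 0]))     # -> [1,1, 1,0, 0,0, 1,0, 1,1, 0,0]
```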
Figure 9 shows the encoding operation using the trellis diagram, and Figure 10 shows the decoding operation using the Viterbi algorithm:
Figure 9. Message encoding using convolutional code (2,1,2).
Figure 10. Codeword decoding operation using the Viterbi algorithm.
For encoding a message of length N = 4, we need to perform 2N + 2 = 10 additions and no multiplications. For decoding a message of length N = 4 + 2 = 6 with four states in the trellis diagram, we need to perform approximately 2 x 8 x 6 = 96 additions and 8 x 6 = 48 comparisons: there are 8 possible transitions per time step and 6 time steps for a message of length 4 + 2 = 6, with two additions and one comparison per transition. (Note that the actual total is less than 48 comparisons, because fewer transitions are active in time steps 1 and 2.)
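To make this add-compare-select bookkeeping concrete, here is a hedged sketch of a hard-decision Viterbi decoder for the same (2,1,2) code (state = the two register bits, branch metric = Hamming distance; function names are illustrative):

```python
# Hard-decision Viterbi decoder for the (2,1,2) code with g1=(1,1,1), g2=(1,0,1).

def viterbi_decode(received):
    """Decode pairs of received bits; assumes the encoder was flushed with zeros."""
    INF = float("inf")
    metrics = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    paths = {s: [] for s in metrics}
    for i in range(0, len(received), 2):
        r1, r2 = received[i], received[i + 1]
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for (s1, s2), metric in metrics.items():
            if metric == INF:
                continue
            for b in (0, 1):                     # hypothesize the input bit
                o1, o2 = b ^ s1 ^ s2, b ^ s2     # expected encoder output (add)
                cost = (o1 != r1) + (o2 != r2)   # branch metric: Hamming distance
                nxt = (b, s1)                    # next state after the shift
                if metric + cost < new_metrics[nxt]:   # compare-select
                    new_metrics[nxt] = metric + cost
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[(0, 0)][:-2]                    # drop the two flush bits

print(viterbi_decode([1,1, 1,0, 0,0, 1,0, 1,1, 0,0]))  # -> [1, 0, 1, 0]
```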
Another way to compare the complexity of BCH codes and Convolutional codes is to look at their codeword lengths. BCH codes tend to produce longer codewords than Convolutional codes for the same message length and error-correcting capability. This is because BCH codes have a fixed codeword length that depends on the size of the finite field, while Convolutional codes have a variable codeword length that depends on the constraint length and the code rate. For example, a (63,51) BCH code can correct up to 2 errors per codeword, while a (3,1) Convolutional code with constraint length 7 can correct up to 5 errors per codeword. However, the BCH code has a codeword length of 63 bits, while the Convolutional code has a codeword length of 21 bits for a message length of 7 bits [29,30].
When comparing the encoding and decoding operations, it is evident that Convolutional codes require fewer steps than BCH codes, and the mathematical operations involved in BCH codes are more complicated. Convolutional codes do face a fundamental issue of their own: the Viterbi decoder's complexity grows exponentially with the constraint length. Despite this drawback, Convolutional codes remain the most appropriate choice for GPS communication systems.
Overall, the high complexity associated with BCH codes makes them unsuitable for implementation in GPS systems and similar real-world applications. Consequently, Convolutional codes with Viterbi decoding, which are less computationally complex, are preferred over BCH codes.
4.3. Convolutional codes vs. Turbo codes
Turbo codes are a type of forward error correction code that uses two or more Convolutional codes in parallel with an interleaver [19]. They have high performance and can achieve near-Shannon-limit error correction capacity [31]. However, they also have some drawbacks, two of which are:
First drawback: Turbo codes require complex encoding and decoding algorithms that demand significant computing power.
The encoding process involves two or more Convolutional encoders and an interleaver, while the decoding process involves soft-input soft-output decoders that use either the log-MAP or the max-log-MAP algorithm. Numerous studies have focused on the power consumption caused by Turbo codes. For instance, a study by Oliver Yuk-Hang Leung, Chung-Wai Yue, Chi-ying Tsui, and Roger S. Cheng at the Hong Kong University of Science and Technology [32] examined the complexity and high power consumption associated with decoding Turbo codes in receivers, and presented potential solutions to this problem.
Second drawback: The use of a Turbo code may increase the processing time of the GPS receiver, which may delay the reception and processing of GPS signal data.
This is because the decoding algorithm requires multiple iterations to converge to a reliable solution, and each iteration involves matrix operations and logarithmic calculations; this computational complexity also drives up power consumption. In computer science and engineering, it is widely recognized that there is a trade-off between complexity, power, and time: more complex codes or models require additional resources such as memory, processing power, and energy, resulting in longer execution times and higher power consumption. The complexity of Turbo codes compared to Convolutional codes has been explored in various studies, including research by David J.C. MacKay [33].
Thus, GPS systems often use Convolutional codes due to their lower computational complexity (compared to the other code types used in satellite communication systems) and faster decoding time, while remaining effective for error correction.
In addition to the points mentioned above, one of the main reasons Convolutional codes are suitable for GPS systems is that block codes are typically slower than Convolutional codes, for the following reasons:
Block codes require larger block sizes to achieve the same level of error protection as convolutional codes. The larger block size means more bits need to be processed, which can slow down the system.
Block codes require more complex encoding and decoding algorithms than Convolutional codes, which can also slow down the system. Convolutional codes use a shift register and some XOR gates to generate the parity bits, which is a simpler process than the matrix multiplication required for block codes.
Error detection and correction are more efficient in Convolutional codes than block codes. Convolutional codes can detect and correct errors in real-time, while block codes require the entire block to be received before errors can be corrected.
Overall, while block codes can offer more robust error correction, they require more complex algorithms and larger block sizes, which can make them slower than Convolutional codes in applications like GPS systems. Therefore, BCH, LDPC, and Turbo codes are not well suited to GPS systems. Moreover, the difficulties inherent in LDPC, BCH, and Turbo codes mean that they require:
More memory to store the parity check matrix used for decoding;
Significant processing power, which can be problematic for low-power and low-cost GPS devices that are commonly used in commercial applications;
Slower encoding and decoding operations.
On the other hand, Convolutional codes and the Viterbi algorithm can offer a favorable balance of computational efficiency and error correction performance specifically suited for Global Positioning System (GPS) receivers.