In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types.[1] Linear codes allow for more efficient encoding and decoding algorithms than other codes (cf. syndrome decoding).[citation needed]

Linear codes are used in forward error correction and are applied in methods for transmitting symbols (e.g., bits) on a communications channel so that, if errors occur in the communication, some errors can be corrected or detected by the recipient of a message block. The codewords in a linear block code are blocks of symbols that are encoded using more symbols than the original value to be sent.[2] A linear code of length n transmits blocks containing n symbols. For example, the [7,4,3] Hamming code is a linear binary code which represents 4-bit messages using 7-bit codewords. Two distinct codewords differ in at least three bits. As a consequence, up to two errors per codeword can be detected while a single error can be corrected.[3] This code contains $2^4 = 16$ codewords.

Definition and parameters

A linear code of length n and dimension k is a linear subspace C with dimension k of the vector space $\mathbb{F}_q^n$ where $\mathbb{F}_q$ is the finite field with q elements. Such a code is called a q-ary code. If q = 2 or q = 3, the code is described as a binary code, or a ternary code respectively. The vectors in C are called codewords. The size of a code is the number of codewords and equals $q^k$.

The weight of a codeword is the number of its elements that are nonzero and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance d of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length n, dimension k, and distance d is called an [n,k,d] code (or, more precisely, an $[n,k,d]_q$ code).
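
These definitions translate directly into code. The following minimal Python sketch (illustrative only; the helper names and the two example words are not from any particular library) computes weights and distances of binary words:

```python
def hamming_weight(v):
    """Number of nonzero coordinates of a word."""
    return sum(1 for x in v if x != 0)

def hamming_distance(u, v):
    """Number of coordinates in which u and v differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

# Example: two words of length 7 over GF(2)
u = [1, 0, 1, 1, 0, 0, 1]
v = [1, 1, 1, 0, 0, 0, 1]
print(hamming_weight(u))       # 4
print(hamming_distance(u, v))  # 2
```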

We want to give $\mathbb{F}_q^n$ the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.

Generator and check matrices

As a linear subspace of $\mathbb{F}_q^n$, the entire code C (which may be very large) may be represented as the span of a set of $k$ codewords (known as a basis in linear algebra). These basis codewords are often collated in the rows of a matrix G known as a generating matrix for the code C. When G has the block matrix form $G = [I_k \mid P]$, where $I_k$ denotes the $k \times k$ identity matrix and P is some $k \times (n-k)$ matrix, then we say G is in standard form.

A matrix H representing a linear function $\phi : \mathbb{F}_q^n \to \mathbb{F}_q^{n-k}$ whose kernel is C is called a check matrix of C (or sometimes a parity check matrix). Equivalently, H is a matrix whose null space is C. If C is a code with a generating matrix G in standard form, $G = [I_k \mid P]$, then $H = [-P^\top \mid I_{n-k}]$ is a check matrix for C. The code generated by H is called the dual code of C. It can be verified that G is a $k \times n$ matrix, while H is an $(n-k) \times n$ matrix.
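
As an illustration, the following Python sketch (a toy implementation under the assumptions above; the function names and the small [5, 2] code are invented for the example) encodes with a standard-form generator matrix and derives the corresponding check matrix $H = [-P^\top \mid I_{n-k}]$:

```python
def encode(message, G, q=2):
    """Encode a length-k message as the vector-matrix product m*G over GF(q)."""
    return [sum(m * g for m, g in zip(message, col)) % q for col in zip(*G)]

def standard_form_check_matrix(P, q=2):
    """Build H = [-P^T | I_{n-k}] from the P block of a standard-form G = [I_k | P]."""
    k, r = len(P), len(P[0])              # r = n - k
    return [[(-P[i][j]) % q for i in range(k)] +
            [1 if t == j else 0 for t in range(r)]
            for j in range(r)]

def syndrome(word, H, q=2):
    """Compute H * word^T; it is the zero vector exactly when word is in C."""
    return [sum(h * w for h, w in zip(row, word)) % q for row in H]

# Toy [5, 2] binary code with G = [I_2 | P]
P = [[1, 1, 0],
     [0, 1, 1]]
G = [[1, 0] + P[0], [0, 1] + P[1]]
H = standard_form_check_matrix(P)
c = encode([1, 1], G)
print(c)               # [1, 1, 1, 0, 1]
print(syndrome(c, H))  # [0, 0, 0]
```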

Linearity guarantees that the minimum Hamming distance d between a codeword c0 and any of the other codewords c ≠ c0 is independent of c0. This follows from the property that the difference c − c0 of two codewords in C is also a codeword (i.e., an element of the subspace C), and the property that d(c, c0) = d(c − c0, 0). These properties imply that

$$\min_{c \in C,\ c \neq c_0} d(c, c_0) = \min_{c \in C,\ c \neq c_0} d(c - c_0, 0) = \min_{c \in C,\ c \neq 0} d(c, 0) = d.$$

In other words, to find the minimum distance between the codewords of a linear code, one need only look at the non-zero codewords. The non-zero codeword of smallest weight then has the minimum distance to the zero codeword, and hence determines the minimum distance of the code.
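
This equivalence can be checked by brute force on a small code. A minimal Python sketch (the [5, 2] generator matrix is an arbitrary illustrative choice, not from the article):

```python
from itertools import product

# Generator matrix of a small [5, 2] binary code (an illustrative choice).
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

codewords = [[sum(m[i] * G[i][j] for i in range(2)) % 2 for j in range(5)]
             for m in product([0, 1], repeat=2)]

# Minimum distance over all pairs of distinct codewords ...
d_pairs = min(sum(a != b for a, b in zip(u, v))
              for u in codewords for v in codewords if u != v)
# ... equals the minimum weight over the nonzero codewords alone.
d_weight = min(sum(c) for c in codewords if any(c))
assert d_pairs == d_weight == 3
```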

The distance d of a linear code C also equals the minimum number of linearly dependent columns of the check matrix H.

Proof: Because $H \cdot c^\top = 0$, which is equivalent to $\sum_{i=1}^{n} c_i \cdot H_i = 0$, where $H_i$ is the $i$-th column of $H$. Removing the terms with $c_i = 0$, those $H_i$ with $c_i \neq 0$ are linearly dependent. Therefore, $d$ is at least the minimum number of linearly dependent columns. On the other hand, consider the minimum set of linearly dependent columns $\{H_j \mid j \in S\}$, where $S$ is the column index set, so that $\sum_{j \in S} c_j \cdot H_j = 0$ for some nonzero coefficients $c_j$. Now consider the vector $c'$ such that $c'_j = c_j$ if $j \in S$ and $c'_j = 0$ otherwise. Note $c' \in C$ because $H \cdot c'^\top = 0$. Therefore, we have $d \le w(c') = |S|$, which is the minimum number of linearly dependent columns in $H$. The claimed property is therefore proven.
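
The statement can be illustrated numerically with the check matrix of the $[7,4,3]$ Hamming code given in the next section. A short Python sketch (the helper name is invented; over GF(2) a set of columns summing to the zero vector certifies linear dependence, and for a minimal dependent set all coefficients are 1):

```python
from itertools import combinations

# Check matrix of the [7,4,3] Hamming code from the example below.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

cols = list(zip(*H))   # the columns of H, as tuples

def sums_to_zero(subset):
    """Over GF(2), a minimal linearly dependent set of columns is exactly
    a set that sums to the zero vector (all coefficients must be 1)."""
    return all(sum(col[i] for col in subset) % 2 == 0
               for i in range(len(subset[0])))

# The columns are distinct and nonzero, so no pair is dependent ...
assert not any(sums_to_zero(s) for s in combinations(cols, 2))
# ... but some three columns sum to zero, so the minimum distance is d = 3.
assert any(sums_to_zero(s) for s in combinations(cols, 3))
```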

Example: Hamming codes

As the first class of linear codes developed for error correction, Hamming codes have been widely used in digital communication systems. For any positive integer $r \ge 2$, there exists a $[2^r - 1, 2^r - r - 1, 3]$ Hamming code. Since $d = 3$, this Hamming code can correct a 1-bit error.

Example: The linear block code with the following generator matrix and parity check matrix is a $[7,4,3]$ Hamming code:

$$G = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{pmatrix}, \qquad H = \begin{pmatrix} 1 & 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}$$
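
A short Python sketch (illustrative, not a library implementation) can verify the claimed parameters of this code and demonstrate single-error correction by syndrome lookup:

```python
from itertools import product

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def mul(m, M):
    """Vector-matrix product over GF(2)."""
    return [sum(a * b for a, b in zip(m, col)) % 2 for col in zip(*M)]

codewords = [mul(m, G) for m in product([0, 1], repeat=4)]
assert len(set(map(tuple, codewords))) == 16           # 2^4 codewords
assert min(sum(c) for c in codewords if any(c)) == 3   # minimum distance 3

def decode(received):
    """Correct at most one bit error: a weight-1 error at position i has
    syndrome equal to the i-th column of H, which locates the error."""
    s = tuple(sum(h * r for h, r in zip(row, received)) % 2 for row in H)
    c = list(received)
    if any(s):
        c[[tuple(col) for col in zip(*H)].index(s)] ^= 1
    return c

c = mul([1, 0, 1, 1], G)
r = list(c); r[5] ^= 1          # corrupt one bit
assert decode(r) == c
```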

Example: Hadamard codes

The Hadamard code is a $[2^r, r, 2^{r-1}]$ linear code and is capable of correcting many errors. The Hadamard code can be constructed column by column: the $i$-th column consists of the bits of the binary representation of the integer $i$, as shown in the following example. The Hadamard code has minimum distance $2^{r-1}$ and therefore can correct $2^{r-2} - 1$ errors.

Example: The linear block code with the following generator matrix is an $[8,3,4]$ Hadamard code:

$$G_{\mathrm{Had}} = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}.$$

The Hadamard code is a special case of the Reed–Muller code. If we take the first column (the all-zero column) out of $G_{\mathrm{Had}}$, we get the $[7,3,4]$ simplex code, which is the dual code of the Hamming code.
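
The column-by-column construction and the weight claims can be checked in a few lines of Python (a sketch; the helper `row_space` simply enumerates the span of the rows):

```python
from itertools import product

r = 3
# The i-th column of the generator matrix is the binary representation
# of the integer i (most significant bit in the first row).
G = [[(i >> (r - 1 - row)) & 1 for i in range(2 ** r)] for row in range(r)]

def row_space(G):
    """All codewords spanned by the rows of G over GF(2)."""
    k, n = len(G), len(G[0])
    return [[sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
            for m in product([0, 1], repeat=k)]

# Every nonzero codeword of this [8, 3] Hadamard code has weight 2^(r-1) = 4,
# so the minimum distance is 4.
assert all(sum(c) == 4 for c in row_space(G) if any(c))

# Removing the all-zero first column leaves the [7, 3, 4] simplex code,
# which is likewise equidistant.
G_simplex = [row[1:] for row in G]
assert all(sum(c) == 4 for c in row_space(G_simplex) if any(c))
```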

Nearest neighbor algorithm

The parameter d is closely related to the error correcting ability of the code. The following construction/algorithm illustrates this (called the nearest neighbor decoding algorithm):

Input: A received vector $v$ in $\mathbb{F}_q^n$.

Output: A codeword $w$ in $C$ closest to $v$, if any.

  • Starting with $t = 0$, repeat the following two steps.
  • Enumerate the elements of the ball of (Hamming) radius $t$ around the received word $v$, denoted $B_t(v)$.
    • For each $w$ in $B_t(v)$, check if $w \in C$. If so, return $w$ as the solution.
  • Increment $t$. Fail only when $t > (d-1)/2$ so that enumeration is complete and no solution has been found.

We say that a linear code $C$ is $t$-error correcting if there is at most one codeword in $B_t(v)$, for each $v$ in $\mathbb{F}_q^n$.
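
A direct transcription of this algorithm into Python might look as follows (a sketch; it assumes a binary code given as an explicit list of codewords, and it enumerates only the words at distance exactly $t$ at each pass, since smaller radii were checked earlier):

```python
from itertools import combinations

def nearest_codeword(v, code, d):
    """Nearest neighbor decoding of a binary word v against an explicit
    list of codewords, growing the search radius t one step at a time."""
    n = len(v)
    t = 0
    while t <= (d - 1) // 2:                 # fail only when t > (d-1)/2
        # enumerate the new shell of the ball B_t(v)
        for positions in combinations(range(n), t):
            w = list(v)
            for i in positions:
                w[i] ^= 1
            if w in code:
                return w
        t += 1
    return None                              # no codeword within radius (d-1)/2

# Usage with the [3, 1, 3] binary repetition code:
code = [[0, 0, 0], [1, 1, 1]]
print(nearest_codeword([1, 0, 1], code, d=3))   # [1, 1, 1]
```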

Popular notation

Codes in general are often denoted by the letter C, and a code of length n and of rank k (i.e., having k code words in its basis and k rows in its generating matrix) is generally referred to as an (n, k) code. Linear block codes are frequently denoted as [n, k, d] codes, where d refers to the code's minimum Hamming distance between any two code words.

(The [n, k, d] notation should not be confused with the (n, M, d) notation used to denote a non-linear code of length n, size M (i.e., having M code words), and minimum Hamming distance d.)

Singleton bound

Lemma (Singleton bound): Every linear [n,k,d] code C satisfies $k + d \le n + 1$.
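
Proof sketch (a standard argument, supplied here for completeness): puncture C in any fixed set of $d - 1$ coordinates. Since distinct codewords of C differ in at least $d$ positions, they remain distinct after deleting $d - 1$ coordinates, so the punctured code still has $q^k$ codewords but length $n - d + 1$. Hence

$$q^k \le q^{\,n - (d - 1)}, \quad \text{i.e.} \quad k + d \le n + 1.$$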

A code C whose parameters satisfy k+d=n+1 is called maximum distance separable or MDS. Such codes, when they exist, are in some sense best possible.

If C1 and C2 are two codes of length n and if there is a permutation p in the symmetric group Sn for which (c1,...,cn) in C1 if and only if (cp(1),...,cp(n)) in C2, then we say C1 and C2 are permutation equivalent. More generally, if there is an $n \times n$ monomial matrix $M \colon \mathbb{F}_q^n \to \mathbb{F}_q^n$ which sends C1 isomorphically to C2, then we say C1 and C2 are equivalent.

Lemma: Any linear code is permutation equivalent to a code which is in standard form.
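
The procedure behind this lemma is ordinary Gaussian elimination with column swaps. A minimal Python sketch over GF(2) (the function name and example matrix are illustrative; it assumes G has full rank k):

```python
def standardize(G):
    """Bring a full-rank binary generator matrix to standard form [I_k | P]
    by Gaussian elimination, swapping columns when necessary; column swaps
    replace the code by a permutation-equivalent one."""
    G = [row[:] for row in G]                     # work on a copy
    k, n = len(G), len(G[0])
    for r in range(k):
        # locate a pivot at or after position (r, r)
        i, j = next((i, j) for j in range(r, n) for i in range(r, k) if G[i][j])
        if j != r:                                # column swap
            for row in G:
                row[r], row[j] = row[j], row[r]
        if i != r:                                # row swap
            G[r], G[i] = G[i], G[r]
        for i in range(k):                        # clear the rest of column r
            if i != r and G[i][r]:
                G[i] = [(a + b) % 2 for a, b in zip(G[i], G[r])]
    return G

print(standardize([[1, 1, 0, 1, 0],
                   [1, 1, 1, 0, 1]]))
# [[1, 0, 1, 1, 0], [0, 1, 0, 1, 1]]  -- i.e. [I_2 | P]
```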

Bonisoli's theorem

A code is defined to be equidistant if and only if there exists some constant d such that the distance between any two of the code's distinct codewords is equal to d.[4] In 1984 Arrigo Bonisoli determined the structure of linear one-weight codes over finite fields and proved that every equidistant linear code is a sequence of dual Hamming codes.[5]

Examples

Some examples of linear codes include:

  • repetition codes
  • parity check codes
  • Hamming codes
  • cyclic codes
  • Reed–Solomon codes
  • Reed–Muller codes
  • BCH codes
  • the binary and ternary Golay codes
  • low-density parity-check codes

Generalization

Hamming spaces over non-field alphabets have also been considered, especially over finite rings, most notably Galois rings over Z4. This gives rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between $\mathbb{Z}_2^{2m}$ (i.e. GF($2^{2m}$)) with the Hamming distance and $\mathbb{Z}_4^m$ (also denoted as GR(4, m)) with the Lee distance; its main attraction is that it establishes a correspondence between some "good" codes that are not linear over $\mathbb{Z}_2$ as images of ring-linear codes from $\mathbb{Z}_4$.[6][7][8]
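
For concreteness, the Gray map sends 0 ↦ 00, 1 ↦ 01, 2 ↦ 11, 3 ↦ 10 and is extended coordinatewise. A small Python sketch (illustrative; checked exhaustively here for m = 2) confirms that it carries the Lee distance on $\mathbb{Z}_4^m$ to the Hamming distance on $\mathbb{Z}_2^{2m}$:

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # the Gray map on Z4

def gray(v):
    """Extend the Gray map coordinatewise: Z4^m -> Z2^(2m)."""
    return tuple(bit for x in v for bit in GRAY[x])

def lee_dist(u, v):
    """Lee distance on Z4^m: per coordinate, min(|a-b|, 4-|a-b|)."""
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(u, v))

def ham_dist(u, v):
    return sum(a != b for a, b in zip(u, v))

# The Gray map is an isometry: Lee distance on Z4^2 equals Hamming
# distance between the images in Z2^4.
assert all(lee_dist(u, v) == ham_dist(gray(u), gray(v))
           for u in product(range(4), repeat=2)
           for v in product(range(4), repeat=2))
```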

Some authors have referred to such codes over rings simply as linear codes as well.[9]


References

  1. ^ William E. Ryan and Shu Lin (2009). Channel Codes: Classical and Modern. Cambridge University Press. p. 4. ISBN 978-0-521-84868-8.
  2. ^ MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms (PDF). Cambridge University Press. p. 9. Bibcode:2003itil.book.....M. ISBN 9780521642989. "In a linear block code, the extra $N - K$ bits are linear functions of the original $K$ bits; these extra bits are called parity-check bits."
  3. ^ Thomas M. Cover and Joy A. Thomas (1991). Elements of Information Theory. John Wiley & Sons, Inc. pp. 210–211. ISBN 978-0-471-06259-2.
  4. ^ Etzion, Tuvi; Raviv, Netanel (2013). "Equidistant codes in the Grassmannian". arXiv:1308.6231 [math.CO].
  5. ^ Bonisoli, A. (1984). "Every equidistant linear code is a sequence of dual Hamming codes". Ars Combinatoria. 18: 181–186.
  6. ^ Marcus Greferath (2009). "An Introduction to Ring-Linear Coding Theory". In Massimiliano Sala; Teo Mora; Ludovic Perret; Shojiro Sakata; Carlo Traverso (eds.). Gröbner Bases, Coding, and Cryptography. Springer Science & Business Media. ISBN 978-3-540-93806-4.
  7. ^ "Encyclopedia of Mathematics". www.encyclopediaofmath.org.
  8. ^ J.H. van Lint (1999). Introduction to Coding Theory (3rd ed.). Springer. Chapter 8: Codes over ℤ4. ISBN 978-3-540-64133-9.
  9. ^ S.T. Dougherty; J.-L. Kim; P. Sole (2015). "Open Problems in Coding Theory". In Steven Dougherty; Alberto Facchini; Andre Gerard Leroy; Edmund Puczylowski; Patrick Sole (eds.). Noncommutative Rings and Their Applications. American Mathematical Soc. p. 80. ISBN 978-1-4704-1032-2.

Bibliography

  • J. F. Humphreys; M. Y. Prest (2004). Numbers, Groups and Codes (2nd ed.). Cambridge University Press. ISBN 978-0-511-19420-7. Chapter 5 contains a more gentle introduction (than this article) to the subject of linear codes.