Improved Algorithm for Error Correction

Wael Toghuj, Ghazi I. Alkhatib
DOI: 10.4018/978-1-4666-2157-2.ch015

Abstract

Digital communication systems are an important part of modern society, and they rely on computers and networks to achieve critical tasks. Critical tasks require systems with a high level of reliability that can provide continuous correct operation. This paper presents a new algorithm for data encoding and decoding using a two-dimensional code that can be implemented in digital communication systems, electronic memories (DRAMs and SRAMs), and web engineering. The developed algorithms correct three errors in a codeword and detect four, reaching an acceptable level of performance. The program based on these algorithms enables the modeling of error detection and correction processes, optimizes the redundancy of the code, monitors the decoding procedures, and measures the speed of execution. The derived code improves error detection and correction over the classical code, with less complexity. Several extensible applications of the algorithms are also given.
Chapter Preview

From browsing the Web to launching a space rocket, today we are relying heavily on digital communication systems. A primary objective of any digital communication system is to transmit information at the maximum possible rate and receive it at the other end with minimum errors. Receiving information without errors becomes a critical task. As a result, finding good codes with practical decoders turns out to be the main challenge in achieving reliable transmission at rates close to the channel capacity (Bajcsy, Chong, Garr, Hunziker, & Kobayashi, 2001).

On the other hand, another application of digital communication techniques is storage systems. In this case the objective is not transmission “from here to there” but rather “from now to then.” These media have unique impairments, different from those in transmission media, but many of the same basic techniques apply (Barry, Lee, & Messerschmitt, 2004).

One of the most intractable sources of failure in computers has been the soft memory error: a random event that corrupts the value stored in a memory cell without damaging the cell itself.

The soft error problem first gained widespread attention as a memory data corruption issue in the late 1970s, when DRAMs began to show signs of apparently random failures. Although the phenomenon was first noticed in DRAMs, SRAM memories and SRAM-based programmable logic devices are subject to the same effects.

At ground level, cosmic radiation is about 95% neutrons and 5% protons. These particles can cause soft errors directly; they can also interact with atomic nuclei to produce troublesome short-range heavy ions. Cosmic rays cannot be eliminated at the source, and effective shielding would require meters of concrete or rock. Soft Error Rates (SERs) are 5 times as high at 2,600 feet as at sea level, and 10 times as high in Denver (5,280 feet) as at sea level. “SRAM tested at 10,000 feet above sea level will record SERs that are 14 times the rate tested at sea level” (Graham, 2002).

Changes in technology have significant impacts on error rates, but not always in predictable ways. For example, DRAM error rates were widely expected to increase as devices became smaller, yet smaller-geometry DRAMs have demonstrated much better error resistance. One reason is that their smaller size allows less charge collection (Ziegler, 2000); another is that cell size has scaled faster than storage capacitance (Johnston, 2000a), so the ratio of storage capacitance to cell size has actually increased (Johnston, 2000b). On the other hand, SOI (silicon-on-insulator) technology was expected to resist errors (Johnston, 2000a); however, it shows an unexpected tendency toward large charge collection, which may dramatically increase error rates (Dodd, 2001).

To eliminate the soft memory errors that are induced by cosmic rays, memory manufacturers must either produce designs that can resist cosmic ray effects or else invent mechanisms to detect and correct the errors.

In mathematics, computer science, and information theory, error detection and correction have great practical importance in maintaining data integrity across noisy channels and storage media. Error-correcting codes (ECC) are traditionally used in communications to deal with the corruption of transmitted data by channel noise. Extra information is added to the original data to enable its reconstruction at the receiving end. The encoded data, or codewords, are sent through the channel and decoded when received. During decoding, errors are detected and corrected if the number of errors is within the allowed, correctable range. This range depends on the extra information, i.e., the parity bits added during encoding.
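As a concrete illustration of this encode/decode cycle, the sketch below implements the classical Hamming(7,4) code in C: three parity bits are added to four data bits, and the parity checks recomputed at the decoder form a syndrome whose value is the position of any single flipped bit. This is the standard textbook construction, not the improved two-dimensional algorithm developed in this chapter; the function names and bit layout are choices made for this sketch.

    #include <stdio.h>
    #include <stdint.h>

    /* Encode a 4-bit nibble into a 7-bit Hamming(7,4) codeword.
     * Bit positions (1-based): p1 p2 d1 p4 d2 d3 d4, p's are parity bits. */
    static uint8_t hamming74_encode(uint8_t nibble)
    {
        uint8_t d1 = (nibble >> 3) & 1;
        uint8_t d2 = (nibble >> 2) & 1;
        uint8_t d3 = (nibble >> 1) & 1;
        uint8_t d4 = nibble & 1;

        uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
        uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
        uint8_t p4 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */

        /* Pack with position 1 as the most significant of 7 bits. */
        return (uint8_t)((p1 << 6) | (p2 << 5) | (d1 << 4) |
                         (p4 << 3) | (d2 << 2) | (d3 << 1) | d4);
    }

    /* Decode a 7-bit codeword, correcting any single-bit error.
     * Returns the recovered 4-bit nibble. */
    static uint8_t hamming74_decode(uint8_t cw)
    {
        uint8_t bit[8];
        for (int i = 1; i <= 7; i++)
            bit[i] = (cw >> (7 - i)) & 1;

        /* Recompute parities; the syndrome is the 1-based error position. */
        uint8_t s1 = bit[1] ^ bit[3] ^ bit[5] ^ bit[7];
        uint8_t s2 = bit[2] ^ bit[3] ^ bit[6] ^ bit[7];
        uint8_t s4 = bit[4] ^ bit[5] ^ bit[6] ^ bit[7];
        int syndrome = (s4 << 2) | (s2 << 1) | s1;

        if (syndrome != 0)           /* flip the erroneous bit back */
            bit[syndrome] ^= 1;

        return (uint8_t)((bit[3] << 3) | (bit[5] << 2) | (bit[6] << 1) | bit[7]);
    }

    int main(void)
    {
        uint8_t data = 0xB;                 /* 1011 */
        uint8_t cw = hamming74_encode(data);
        cw ^= 1 << 2;                       /* inject a single-bit error */
        printf("sent 0x%X, recovered 0x%X\n", data, hamming74_decode(cw));
        return 0;
    }

Stronger codes follow the same syndrome principle: more parity relations pin down more simultaneous error positions, at the cost of additional redundancy.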

In computer memories, storing data corresponds to sending it through a noisy channel: writing is the transmission, and reading back a possibly corrupted word is the reception. Figure 1 shows the use of ECC to correct errors in a memory system.

Figure 1. Data flow in ECC-added memory
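To make the data flow of Figure 1 concrete, the following sketch in C shows the simplest two-dimensional (row/column) parity scheme over an 8×8-bit block: check bits are computed and stored on write, and on read a single flipped bit is located at the intersection of the one failing row check and the one failing column check. This toy construction corrects only one error per block; it is not the authors' improved code, which corrects three errors and detects four. The names tdp_encode and tdp_correct are this sketch's own.

    #include <stdio.h>
    #include <stdint.h>

    #define ROWS 8   /* an 8x8-bit block: one byte per row */

    /* Index of the single set bit in x (x must be a nonzero power of two). */
    static int bit_index(uint8_t x)
    {
        int i = 0;
        while (!(x & 1)) { x >>= 1; i++; }
        return i;
    }

    /* Write path: compute one parity bit per row (packed into *row_par)
     * and one per column (packed into *col_par); store them with the data. */
    static void tdp_encode(const uint8_t data[ROWS],
                           uint8_t *row_par, uint8_t *col_par)
    {
        *row_par = 0;
        *col_par = 0;
        for (int r = 0; r < ROWS; r++) {
            uint8_t p = data[r];
            p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;    /* parity of the row */
            *row_par |= (uint8_t)((p & 1) << r);
            *col_par ^= data[r];                      /* column parities */
        }
    }

    /* Read path: recompute parities and compare with the stored ones.
     * A single flipped data bit fails exactly one row check and one
     * column check, which together locate it. Returns 0 if clean, 1 if
     * a single error was corrected, -1 if not correctable by this scheme. */
    static int tdp_correct(uint8_t data[ROWS], uint8_t row_par, uint8_t col_par)
    {
        uint8_t rp, cp;
        tdp_encode(data, &rp, &cp);
        uint8_t rs = rp ^ row_par;   /* row syndrome */
        uint8_t cs = cp ^ col_par;   /* column syndrome */

        if (rs == 0 && cs == 0)
            return 0;
        if (rs && cs && !(rs & (rs - 1)) && !(cs & (cs - 1))) {
            data[bit_index(rs)] ^= (uint8_t)(1 << bit_index(cs));
            return 1;
        }
        return -1;   /* multi-bit pattern or check-bit error: detected only */
    }

    int main(void)
    {
        uint8_t block[ROWS] = {0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x23, 0x45, 0x67};
        uint8_t rp, cp;

        tdp_encode(block, &rp, &cp);   /* on write */
        block[3] ^= 0x10;              /* a soft error flips one stored cell */

        int status = tdp_correct(block, rp, cp);   /* on read */
        printf("status %d, row 3 = 0x%02X\n", status, block[3]);
        return 0;
    }

In a real memory the encode step sits in the write path and the correct step in the read path, exactly as in Figure 1; an uncorrectable status (-1 here) would typically be reported to the system rather than silently ignored.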
