Binary Codes Classification in Digital Electronics

Binary Codes in Digital Electronics:

Binary codes are combinations of 0s and 1s used to represent letters, numbers, symbols, etc. These codes are an essential part of any digital system such as computers, phones, and televisions. In short, binary codes are extensively used in communication and information systems. The main reasons for their wide use include the following:

In digital technology, only two switching states, viz., OFF (0) and ON (1), are available to represent letters and numerals. A single bit can take only two values, and two bits can represent only four combinations (00, 01, 10, and 11). These four combinations are clearly not sufficient to represent the symbols of various languages and number systems. For example, the English language requires at least 26 different combinations of bits to represent its 26 letters. A 5-bit binary coding scheme can produce 2^5 = 32 different combinations of 1s and 0s, and hence such a scheme can be used to represent the 26 letters from a to z. In such a coding scheme, as an example, 00001 may be used to represent the letter a, 00010 the letter b, and so on.
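The 5-bit scheme described above can be sketched in a few lines of Python. The mapping (a = 00001, b = 00010, and so on) follows the example in the text; the function names are illustrative, not part of any standard.

```python
# A sketch of the hypothetical 5-bit letter-coding scheme described above:
# each lowercase letter a-z is assigned a 5-bit code, with 'a' = 00001.
def encode_letter(letter):
    """Return the 5-bit code for a lowercase letter a-z."""
    index = ord(letter) - ord('a') + 1   # 'a' -> 1, 'b' -> 2, ... 'z' -> 26
    return format(index, '05b')          # zero-padded 5-bit binary string

def decode_code(code):
    """Return the letter represented by a 5-bit code."""
    return chr(int(code, 2) - 1 + ord('a'))

print(encode_letter('a'))    # 00001
print(encode_letter('b'))    # 00010
print(decode_code('00011'))  # c
```

Since 2^5 = 32 and only 26 codes are used, six combinations (11011 to 11111, plus 00000) remain unused in this sketch.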

By using error-detecting codes, errors introduced into received bits by noise in transmission channels can be detected. Once the errors are detected, they can be corrected using error-correcting codes. These error-detecting and correcting codes thus ensure exact (or nearly exact) reproduction of signals transmitted over distant channels.
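The simplest error-detecting scheme is a single parity bit. The sketch below, with illustrative function names, shows even parity: the sender appends a bit so the total number of 1s is even, and the receiver flags any single-bit error by re-checking that count. (A lone parity bit can only detect such an error, not correct it; correction needs richer codes such as Hamming codes.)

```python
# A minimal sketch of error detection with a single even-parity bit.
def add_even_parity(bits):
    """Append an even-parity bit to a string of 0s and 1s."""
    parity = str(bits.count('1') % 2)
    return bits + parity

def check_even_parity(codeword):
    """Return True if the 1-count is even, i.e. no error is detected."""
    return codeword.count('1') % 2 == 0

sent = add_even_parity('1011001')   # '10110010' (four 1s, so parity bit 0)
print(check_even_parity(sent))      # True  (no error detected)
corrupted = '00110010'              # first bit flipped by channel noise
print(check_even_parity(corrupted)) # False (single-bit error detected)
```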

Data-compression codes help to transmit large amounts of data in a very short time.

Data encrypting codes help to send secret data over transmission channels.

Coding helps to achieve maximum efficiency in signal transmission.

Signals/data from several transmitting sources can be sent through a single channel using multiplexing techniques. This ensures great economy in signal transmission.

Modern smart instruments and equipment (such as smartphones) depend on binary codes for their operation.

Artificial intelligence depends heavily on binary codes for its implementation.

Binary codes help in the artificial generation of speech and video signals.

Codes form the basis of many computer operations such as picture and sound enhancement, picture morphing, animation, etc.

Binary Codes Classification

At present, a large variety of binary codes are available for different applications in digital technology. Some of these codes are used for source coding (encoding of spoken and written languages for data transmission). Others are used for channel coding, which helps to detect and correct noise-generated errors in received signals. Data-compression codes help to transmit huge amounts of data in a short span of time. Similarly, data-encryption codes help to achieve secrecy in data transmission. There exist several more binary codes for various other applications. In this section, we shall discuss the most commonly used binary source codes such as Morse code, Shannon-Fano code, Huffman code, Baudot code, the ASCII code and EBCDIC.

The first known binary source code is the Morse code, invented by Samuel Morse in 1837. It was used in his telegraph system to send messages over wired channels. Morse code is an uneven-length code in the sense that the codes used to represent letters and numerals differ in length. For example, the letter A is represented by a dot followed by a dash, but the letter B is represented by a dash followed by three dots. The Baudot code, invented by Émile Baudot in 1870, is an even-length code; all the codes in this scheme have the same length of 5 bits. It can be seen that a 5-bit scheme offers only 2^5 = 32 codes to represent a given language. As time passed, the demand for more binary combinations grew, and this led to the invention of the ASCII code and EBCDIC. The classification of binary codes in digital electronics is as follows.
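The contrast between Morse's variable-length codes and ASCII's fixed-length codes can be illustrated as below. The Morse table is partial, and `ascii_7bit` is an illustrative helper name; standard ASCII assigns each character a fixed 7-bit code.

```python
# Morse code assigns short codes to frequent letters (E, T) and longer
# ones to rarer letters; ASCII gives every character exactly 7 bits.
MORSE = {'A': '.-', 'B': '-...', 'E': '.', 'T': '-'}  # partial table

def ascii_7bit(ch):
    """Return the fixed 7-bit ASCII code of a character as a bit string."""
    return format(ord(ch), '07b')

print(MORSE['A'])       # .-       (2 symbols)
print(MORSE['B'])       # -...     (4 symbols)
print(ascii_7bit('A'))  # 1000001  (always 7 bits)
print(ascii_7bit('B'))  # 1000010
```

Variable-length codes can shorten messages on average, which is exactly the idea the Shannon-Fano and Huffman codes later made systematic.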
