RF Communications Channel Tutorial


Shannon-Hartley Theorem: In the 1940s Claude Shannon developed a theorem that describes the maximum error-free information rate that can be transmitted over a communication channel in the presence of noise. In addition to the development of the maximum information rate theorem, Claude Shannon formulated a complete theory of information and its transmission. In creating his information theory, Claude Shannon built upon the fundamental ideas of information transmission that had been developed by Harry Nyquist and Ralph Hartley in the late 1920s. In his work on telegraphy, Nyquist determined that the maximum number of independent pulses that can be put through a channel per unit time is limited to twice the bandwidth of the channel. Hartley greatly improved on Nyquist's understanding of channel capacity by formulating a way to quantify the information within a pulse. Hartley realized that the maximum number of distinguishable pulse levels that can be transmitted and received without error is limited by the dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Hartley then created an equation that combined Nyquist's limit on pulses per second with the pulse's dynamic range limit. The equation, shown below, describes the maximum possible amount of information, or data rate, that can be carried in a noise-free communications channel.


R ≤ 2B log2(M)


In Hartley's equation, M is the number of levels in a pulse, B is the bandwidth of the channel, and the result R is the data rate in bits per second. By adding his understanding of how noise affects the available levels in a pulse, Shannon was able to create an equation that describes the channel's data rate in the presence of noise. The Shannon-Hartley theorem for Gaussian noise is as follows:
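Hartley's limit is straightforward to evaluate numerically. The sketch below computes it for an illustrative 3 kHz channel carrying 4-level pulses; those particular values are assumptions for the example, not figures from the text.

```python
import math

def hartley_rate(bandwidth_hz, levels):
    """Maximum noise-free data rate R = 2B log2(M), in bits per second."""
    return 2 * bandwidth_hz * math.log2(levels)

# Example (assumed values): a 3 kHz channel carrying 4-level pulses.
# Each 4-level pulse carries log2(4) = 2 bits, and at most 2B = 6000
# pulses per second fit in the channel, so R = 12,000 bits per second.
print(hartley_rate(3000, 4))  # 12000.0
```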


C = B log2(1 + S/N)


In Shannon's equation, S is the average received signal power over the bandwidth B, expressed in watts. N is the average noise power over the bandwidth B, expressed in watts. The result C is the maximum channel capacity, expressed in bits per second. Shannon developed his equation as part of a greater work that formulates a complete theory of information and its transmission.
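The capacity formula can be sketched in the same way. The bandwidth and power values below are illustrative assumptions chosen to resemble a telephone-grade channel with a 30 dB signal-to-noise ratio.

```python
import math

def shannon_capacity(bandwidth_hz, signal_w, noise_w):
    """Channel capacity C = B log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

# Example (assumed values): 3 kHz bandwidth, S/N = 1000 (30 dB).
c = shannon_capacity(3000, 1000e-6, 1e-6)
print(round(c))  # 29902
```

Note that capacity grows only logarithmically with signal power: doubling S/N adds roughly one bit per second per hertz of bandwidth.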


Information Theory: The RF communications channel is an abstract concept used to describe the pathway that is available, or has been allocated, for the radio frequency transmission of information. Typical communications channels exist in conductors such as a coaxial cable or in the path through the air between two antennas. Communications channels are used for the transfer of information from one point to another. If one wishes to make the best use of an available communications channel, it is important to first understand the nature of the information needed to complete the desired communication message.

Information theory provides a useful method of understanding information's most fundamental size and shape. According to information theory, all information can be represented as some discrete number of symbols. A symbol can be thought of as the smallest possible representation of information and has only two states. An example of a single symbol of information is the state of a light in your home. If the light is “on”, the symbol for the light's state can be represented as a “1”, and if the light is “off”, the light's state can be represented as a “0”. It is important to note that only one symbol of information is required to represent the state of the light, provided that state has not changed at any time in the past and will not change at any time in the future. Knowing the state of a light that never changes is not terribly useful, but it does provide a good example of what it means to communicate a single symbol of information from one point to another. Knowing the state of the light as it changes over time is far more useful. In order to transfer the knowledge of the state of the light, we must add one symbol for each change of state. So for the simplest example of a light that was off for all time in the past, then turned on at some point and remains on forever, we can represent the state of the light with just two symbols.
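The two-symbol example above can be sketched as a small counting routine: one symbol for the initial state of the light plus one for each change. The function name and sample state history are illustrative, not part of the original text.

```python
def state_change_symbols(states):
    """Symbols needed to describe a light's known history:
    one for the initial state plus one per change of state."""
    if not states:
        return 0
    changes = sum(1 for a, b in zip(states, states[1:]) if a != b)
    return 1 + changes

# A light that is off for a while, then switched on and left on:
# the whole history collapses to just two symbols.
print(state_change_symbols([0, 0, 0, 1, 1, 1]))  # 2
```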
For the more general case of a light that we know changes state a number of times N, we can represent the information of the state of the light with N + 1 symbols: one for the initial state and one for each change. It is important to point out that, by definition, information theory assumes we know the state of the light in our home. Information theory merely tells us the minimum number of symbols it will take to represent a group of known states. It does not tell us how to reduce a set of samples to that minimum number of symbols. We should not confuse the representation of the known state of the light with the sampling of the unknown state of the light to determine its state.

A more general example that demonstrates the difference between known information and the sampling of an unknown state is the typical home video camera. The sampling of the unknown image from a typical home video camera is made at a rate of 60 frames per second. With a resolution of 480 lines and 640 pixels per line, the total number of samples per frame is simply 480 x 640, or 307,200 samples. Since each sampled pixel is actually a color, we will assume that the camera represents the color as the amplitudes of the three primary colors red, green, and blue (note that this is a simplification of how typical cameras represent a pixel value). If eight levels of each primary color are sufficient to represent the pixel color, then three bits are required for each color, for a total of nine bits per pixel. The total number of bits required to represent one video frame is simply 307,200 x 9, or 2,764,800. Since the frame rate is 60 frames per second, the data rate is 60 x 2,764,800, or 165,888,000 bits per second. At a raw sample rate of nearly 166 megabits per second, it would be impossible to transmit the video over a typical 100 megabit per second home local area network.
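The raw-rate arithmetic above can be reproduced step by step:

```python
# Raw video sample-rate arithmetic for the camera example.
lines, pixels_per_line = 480, 640
bits_per_pixel = 3 * 3            # 3 bits each for red, green, and blue
frames_per_second = 60

samples_per_frame = lines * pixels_per_line            # 307,200
bits_per_frame = samples_per_frame * bits_per_pixel    # 2,764,800
bits_per_second = bits_per_frame * frames_per_second   # 165,888,000
print(bits_per_second)  # 165888000
```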
Fortunately, there is always significantly less information in a known video signal than in the number of samples required to represent an unknown video image. Let's assume for the moment that the camera is viewing a blank wall that is all one color and remains unchanged for all time. In the blank-wall case, all pixels are always the same color, so the information in the scene is just the color of one pixel. The information required to represent the color of that pixel is 9 bits, and since all pixels are the same, only 9 bits of information are required to represent the entire video image. Of course, actual video cameras capture images that normally change with time, and while many of the pixels in typical video frames are the same, some percentage will contain different color values. As a result, the typical amount of information in a video frame is much less than the sample rate of 166 megabits per second would suggest. Typical video cameras utilize compression algorithms that attempt to reduce the sampled video data to the smallest amount of data needed to represent the scene within the frame and the changes between frames. If a perfect compression algorithm existed, it would reduce the sampled video data to the actual information present in the data. In practice, it has been found that we don't even need all of the information present in the video to produce an acceptable representation of the original image. The reason some information can be discarded has to do with the way humans perceive changing images. Most video compression algorithms in use today exploit this fact to achieve a far greater reduction in the amount of data needed to represent the video scene. Algorithms that eliminate data samples without any loss of the underlying video information are called lossless, while algorithms that discard some of the underlying video information are called lossy.
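The blank-wall intuition can be illustrated with the simplest lossless technique, run-length coding, which collapses runs of identical pixel values. This is only a toy sketch of the lossless idea; real video codecs use far more sophisticated methods.

```python
def run_length_encode(pixels):
    """Lossless run-length coding: collapse runs of identical values
    into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

# A "blank wall" frame: every pixel holds the same 9-bit color value,
# so the whole 307,200-sample frame collapses to a single run.
frame = [0b101010101] * (480 * 640)
print(run_length_encode(frame))  # [[341, 307200]]
```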
Virtually all video compression used in consumer-grade equipment today is based on lossy algorithms such as MPEG-2 or MPEG-4. Typical lossy MPEG compression reduces the sampled data rate to a compressed rate of 0.8 to 8 megabits per second. Factors such as motion and contrast within the scene account for this order-of-magnitude variation in the required information rate. When considering the information capacity of an RF communications channel, it is important to first consider the information rate required to represent the sampled data.