
Properly Understanding Noise In Test Applications

June 16, 2006
The output of a noise generator is usually assumed to follow a true Gaussian distribution but, even when not truly Gaussian, noise can make an excellent signal for communications tests.

Noise generators can serve as useful test tools in evaluating communications systems performance. They allow an operator to add a controlled amount of thermal noise to a reference signal to determine the effect of noise on system performance, such as bit-error rate (BER). Thermal noise conforms to a Gaussian probability density function (PDF), allowing a smooth transition from theoretical analysis to the test bench. For the most part, the output of a noise generator is close enough to mathematically true Gaussian noise to be useful for analysis and testing. What follows is an explanation of how to work with Gaussian noise in test applications and how to gauge the impact on a test when imperfect Gaussian noise is used.

The signal energy in a system relative to its noise, usually given as Eb/No (or in its other guises, C/N, C/No, and SNR), is an expression of a signal's strength relative to the noise surrounding it, and an important figure of merit in testing communications channels. The use of additive white Gaussian noise in generating this ratio is well established, with techniques called out in major standards governing communications practices (e.g., MIL-STD-188-165A and ATSC A/80).

White noise is suitable for testing because it represents equal energy at all frequencies across a spectrum. It is Gaussian because randomness in nature tends to exhibit a Gaussian, or normal, distribution. Most of the noise in communications channels (such as noise introduced by an amplifier) has a thermal characteristic, and so tends toward a Gaussian distribution. Furthermore, the Central Limit Theorem shows that the sum of a sufficiently large number of independent random variables, regardless of their individual distributions (flat, Gaussian, or otherwise), tends toward a Gaussian distribution.

Mathematically, a Gaussian distribution is expressed as:
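$$P(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/2\sigma^2} \qquad \text{(Eq. 1)}$$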

This gives the distribution of a variate x with mean μ and variance σ². Mathematicians and statisticians tend to call this the normal distribution; psychologists refer to it as the bell curve; and physicists and engineers talk about the Gaussian distribution. Gaussian noise, then, is a fluctuation about a mean with this mathematical description (Fig. 1).

Noise can be used for testing system performance in several ways. One way is to add noise to a channel, increasing the noise level until an unacceptable degradation in signal quality is reached. For example, a television picture shows increased "snow" as noise is added to the signal. The amount of noise that causes the degradation is a measure of the channel's signal strength or the effectiveness of its signal processing.

What if a more quantitative measure is desired? One method involves the capability of the system to resolve two signals with and without noise. Without noise, the signals are distinct and easy to resolve (Fig. 2), for example as the digital bit 0 for a signal with voltage V0 and the digital bit 1 for a signal with voltage V1. In reality, some noise will be added to these signals as part of any electronic system: random fluctuations about the mean V1 or V0 that follow a Gaussian distribution as given by Eq. 1.

The ability to differentiate the two signals will not be a problem if they are sufficiently far apart not to overlap. Yet, with a Gaussian distribution, there will always be some measure of overlap (Fig. 3). So how is it possible to tell one signal from another?

This can be done by setting a threshold value at the midpoint between the two mean signal levels. This threshold takes the voltage value of (V1 − V0)/2 below V1 (or above V0). Voltages detected above the threshold produce a digital value of 1, while voltages below it produce 0. What happens to a 0 when the noise is sufficient to raise the signal level over the threshold? Given the decision-making algorithm, the 0 will be mistaken for a 1, resulting in a bit error.

Some amount of error is inevitable, so some measure of the bit errors is necessary to determine the severity of the problem. It is possible to look for the probability that, while a 0 is being transmitted, the noise raises the signal level above the threshold, and that, while a 1 is being transmitted, the noise drops the level below the threshold. Bayes' Theorem then leads to:
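$$P(e) = P(e \mid 0)\,P(0) + P(e \mid 1)\,P(1) \qquad \text{(Eq. 2)}$$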


which can be read as: the probability of error is equal to the probability of a 0 type error times the probability of a 0, plus the probability of a 1 type error times the probability of a 1. In the simple example of the two voltages, values of 1 and 0 are equally likely (one-half the time 1 and one-half the time 0), so Eq. 2 can be rewritten as:
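$$P(e) = \frac{1}{2}\,P(e \mid 0) + \frac{1}{2}\,P(e \mid 1) \qquad \text{(Eq. 3)}$$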

The probability of a 0 type error is given by:
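$$P(e \mid 0) = P\!\left(n > \frac{V_1 - V_0}{2}\right) \qquad \text{(Eq. 4)}$$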

where:
n = the amount of voltage noise added to the signal.

The probability of a 1 type error is:
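$$P(e \mid 1) = P\!\left(n < -\frac{V_1 - V_0}{2}\right) \qquad \text{(Eq. 5)}$$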

With a Gaussian noise distribution, these probabilities are equal since the distribution is symmetric about the mean value, leading to:
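$$P(e) = P\!\left(n > \frac{V_1 - V_0}{2}\right) \qquad \text{(Eq. 6)}$$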

The probability of a bit error in the example system is equal to the probability that the noise in the system exceeds the threshold value. The statistics of a Gaussian distribution reveal that the probability that a zero-mean variate x with standard deviation σ exceeds a given value a is given by:
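$$P(x > a) = \frac{1}{2}\left[1 - \mathrm{erf}\!\left(\frac{a}{\sigma\sqrt{2}}\right)\right] = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{a}{\sigma\sqrt{2}}\right) \qquad \text{(Eq. 7)}$$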

where:
erf = the error function and
erfc = the complementary error function, with erfc(x) = 1 − erf(x).

The error function erf can be applied to a wide range of analyses, including the solution of the differential equations describing the diffusion of impurities in a semiconductor. It has no closed-form solution, but can be approximated via its Maclaurin series. As a consequence, values for erf(x) are well tabulated in various textbooks. Microsoft Excel even provides erfc as part of its Analysis ToolPak.

In the example system, the threshold value a is (V1 − V0)/2, and the distribution of the voltage noise is such that its mean value is 0 and its variance σ² is given by Vn², where Vn is the RMS noise voltage. This makes it possible to write:
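$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_1 - V_0}{2\sqrt{2}\,V_n}\right) \qquad \text{(Eq. 8)}$$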


To find an expression in terms of Eb/No, which is a ratio of energies, squaring the voltages in Eq. 8 yields:
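$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{(V_1 - V_0)^2}{8V_n^2}}\right) \qquad \text{(Eq. 9)}$$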

which, taking the noise variance Vn² to be No/2, can be rewritten in terms of energy as:
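$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{(V_1 - V_0)^2}{4N_o}}\right) \qquad \text{(Eq. 10)}$$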

where:

No = the noise density.
Since the energy per bit Eb is the average of the energy of the two signals (for antipodal signals, where V0 = −V1, this means (V1 − V0)² = 4Eb), it is possible to write the expression:
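$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_o}}\right) \qquad \text{(Eq. 11)}$$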

This gives the well-known formula for probability of error in a binary-phase-shift-keying (BPSK) communications channel. Similar analysis provides the same result for quadrature-phase-shift-keying (QPSK) and offset-QPSK (OQPSK) channels, and variations of this formula can be applied to other modulation schemes. Derivation of the error rates for these schemes is beyond the scope of this article, but the point is well illustrated that the origins of this formula (and the "waterfall curves" it is used to generate) lie in the properties of the Gaussian PDF.

To improve BER performance at a given Eb/No, systems employ various forward-error-correction (FEC) schemes (such as Viterbi or Reed-Solomon coding). Derivation of these curves is beyond the scope of this article, but they are plotted in Fig. 4. One can see that the curves become steeper as the code's performance improves (albeit at the expense of channel efficiency), and the gradients become such that a relatively small change in Eb/No dramatically affects the BER of the system. Modern coding techniques continue to approach the Shannon limit: the curves continue to get steeper and this effect becomes more and more apparent. Thus, the accuracy of the generated ratio becomes increasingly important as the complexity of coding schemes increases.

It has been established that the BER performance of a channel is related to the probability distribution of the noise present in that system. The result seems somewhat obvious, but it is worth walking through the previous derivations in order to tackle the next point.

The "waterfall curves" are based on Gaussian PDF. A Gaussian PDF extends from -8 to +8. But in order to perform measurements using these curves, it is necessary to use a system that will give only an approximation of this distribution (no noise source is truly Gaussian by the Mathematicians definitionthis is evidenced by the lack of infinite amplitude noise spikes that one sees at the output). How does this approximation affect any BER results?

There are two approaches to evaluating this. In the first, or what can be termed Option A, imagine a noise source that is nominally perfect but, under compression, "clips" noise spikes above a certain value. The events still occur, but the values they lead to fall into a limited range. It is possible to visualize the effect by returning to the picture of the Gaussian PDF.

Two points of interest are marked: the threshold value above which errors occur, and a cutoff point beyond which events are clipped and pushed back into the region below. With compression occurring, the area above the cutoff point is "cut and pasted" back onto the region below the cutoff. What effect does this have on the probability of error? The area under the curve beyond the threshold for error gives this probability, and below the cutoff this area is added to by the "cut and paste" operation. Below the cutoff, the probability of error is the same as that given by Eq. 12. Above the cutoff, the probability of error is 0 (Fig. 5). What does this mean for the BER curves?

Originally, a decision was being made concerning the probability that the noise n exceeds a threshold value a. The answer was:
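$$P(n > a) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{a}{\sigma\sqrt{2}}\right) \qquad \text{(Eq. 12)}$$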

Now, the concern is the probability that the noise n exceeds a threshold value a, given that beyond the cutoff point A nothing happens (Fig. 5, bottom). For values of n below point A, the answer is still given by Eq. 12. However, for values of n beyond A, the probability of error is zero. Assume that the value of point A is given by so many "sigma," or:
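$$A = l\sigma \qquad \text{(Eq. 13)}$$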

where:

l = an integer and
σ = the standard deviation of the noise distribution. Next, it is possible to write the inequality:
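$$a = \frac{V_1 - V_0}{2} < l\sigma \qquad \text{(Eq. 14)}$$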

This gives the requirement that, under the conditions of "compression," the threshold value must remain below the cutoff point for any errors to occur. By comparing Eqs. 11, 12, and 14, it is possible to rewrite this as the condition:
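$$\frac{E_b}{N_o} < l^2 \qquad \text{(Eq. 15)}$$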

That is, with a compressed noise source, it will not be possible to test Eb/No ratios beyond an upper bound, where that upper bound is defined by "how many sigmas" determine the quality of the noise source. For a source that is rated as 3σ, the maximum testable ratio (in theory) is 9. Beyond that, no errors will occur.

The second evaluation approach, or Option B, is based on imagining that the events that would give peak noise beyond lσ simply do not occur, that the noise source is incapable of generating fluctuations above a certain threshold value. In Fig. 6, instead of "cutting and pasting," it is simply a matter of "cutting," or throwing away, the events above the point of interest. For values above the cutoff point, the probability of error is again zero. For values between the threshold and cutoff, the probability of error is some reduced form of that given in Eq. 12.

The probability of error is given by the area under the curve, so the reduced probability is the area under the curve above the threshold minus the area under the curve above the cutoff. This can be expressed as:
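$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{a}{\sigma\sqrt{2}}\right) - \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{\sigma\sqrt{2}}\right) \qquad \text{(Eq. 16)}$$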

Substituting the value for A from Eq. 13 into the equation above gives us:
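$$P(e) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{a}{\sigma\sqrt{2}}\right) - \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{l}{\sqrt{2}}\right) \qquad \text{(Eq. 17)}$$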

The probability of error is attenuated by a small amount, that amount being the complementary error function of the "how many sigma" rating of the noise source. In terms of BER curves, this means two things. First, as with the previous example, for values of Eb/No equal to or higher than l², nothing happens: no errors can be generated, as this is limited by the "noise resolution" of the system. Second, the probability of error for Eb/No below the cutoff l² is shifted below the theoretical curve and, as the limit l² is approached, this difference becomes more and more pronounced as the two curves separate. Systems limited in this manner produce fractionally fewer errors than theory predicts below the cutoff and, as with the other mechanism, no errors above the cutoff.
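To make these two effects concrete, the short Python sketch below (an illustration only, assuming the ideal relation of Eq. 11, the truncated form of Eq. 17, and the cutoff of Eq. 15) compares the theoretical BER with that of a source whose noise peaks stop at l sigma:

    # Minimal sketch: ideal vs. sigma-limited BPSK BER.
    from math import erfc, sqrt

    def ber_ideal(ebno):
        """Theoretical BPSK BER for a true Gaussian source (Eq. 11)."""
        return 0.5 * erfc(sqrt(ebno))

    def ber_limited(ebno, l=3):
        """BER for a source whose noise peaks stop at l-sigma (Option B).

        Per Eq. 17, the error probability is reduced by the tail area beyond
        the cutoff; per Eq. 15, no errors occur at all once Eb/No >= l*l.
        """
        if ebno >= l * l:
            return 0.0  # beyond the source's noise resolution
        return max(ber_ideal(ebno) - 0.5 * erfc(l / sqrt(2)), 0.0)

    # Compare the two curves at whole-dB steps of Eb/No.
    for ebno_db in range(0, 11):
        ebno = 10 ** (ebno_db / 10.0)  # convert dB to a linear ratio
        print(f"{ebno_db:2d} dB  ideal: {ber_ideal(ebno):.3e}  "
              f"3-sigma limited: {ber_limited(ebno):.3e}")

For a 3σ source, the limited curve sits fractionally below the ideal one at low ratios and collapses to zero as the resolution limit is approached.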

It is interesting to note that, in most digital noise-generation schemes, the equivalent of the compression in Option A ("cut and paste" the energy above a peak value) will not take place, so a typical digital noise source will see the shift away from an ideal Gaussian distribution described in Option B. An analog scheme will more likely be represented by the process in Option A, where a limit is reached in achievable "noise resolution," but the shift away from a Gaussian profile below this limit conceivably will not occur.

Figure 7 illustrates both of these mechanisms: it includes the standard BER curves for OQPSK (uncoded) and rate 1/2, 3/4, and 7/8 coding, along with recalculated curves that take into account the deviation from a Gaussian PDF when the noise is limited to 3σ. The noise resolution is limited to an Eb/No ratio of 9 (or in log terms, ~9.54 dB), and it can be seen that the adjusted curves peel away from the original curves before this limit is reached. Clearly, limiting noise to 3σ limits system performance: for a generator to call itself "Gaussian," at least 4σ is generally required, with higher-performance generators exceeding 5σ.

In an actual noise generator, a combination of these two mechanisms takes place, impacting two key concepts:

  • Eb/No accuracy: as curves become steeper, greater accuracy is called for. Here the second mechanism contributes heavily.
  • Noise resolution, which can be thought of as a system's ability to reach remote events that cause errors for large values of Eb/No (corresponding to large bit energy or low noise power).

No isolated noise-generating system can ever reach true Gaussian behavior (in the mathematical sense); all systems are limited to being approximations. Where the standards that systems are tested against are based on true Gaussian distributions, an important concept to bear in mind when selecting a noise generator is how close its approximation can get.

In terms of testing modems, for example, this concept places limits on the Eb/No range that can be used for testing and, in addition, generates results that are better than theory, an effect that can cause considerable confusion. Careful consideration of the nature of the test, the accuracy required, and the limits that the noise generator can reach will make testing with noise a much less daunting venture.
