
Eight Errors Common To Spectrum Analysis

June 12, 2014
When performing RF/microwave spectrum-analyzer measurements, be sure to avoid these eight common mistakes.

Download this article in .PDF format
This file type includes high resolution graphics and schematics when applicable.

Spectrum analysis is essential for understanding the frequency-domain characteristics of components, circuits, and systems, but these instruments and their measurements are not foolproof. In fact, eight common mistakes plague the accuracy and effectiveness of spectrum-analyzer measurements—errors that can lead to improperly adjusting a device under test (DUT) or shipping a device to a customer that has not met its required specifications. Luckily, some simple guidelines can be followed to ensure that the spectrum analyzer is being used properly and performing to expectations.

Many of the mistakes made when using a spectrum analyzer have to do with using the wrong equipment, or else using the analyzer’s controls incorrectly. The eight common errors mentioned above are as follows:

  1. Using the wrong detector.
  2. Using the wrong averaging type.
  3. Measuring the analyzer’s own internally generated distortion products.
  4. Incorrect mixer level for EVM measurements.
  5. Not using single sweep when remotely controlling the analyzer.
  6. Not synchronizing measurements with *OPC.
  7. Not turning the display off or using binary data types when transferring data for speed.
  8. Feeding too much power to the input of the spectrum analyzer.

Such errors are quite innocent and easy to make. The first one (using the wrong detector) can lead to wrong results simply by not matching the detector to the needs of the measurement. Modern spectrum analyzers offer a variety of detectors for different signal types, including peak, sample, average, and normal detectors. Using the wrong detector type can produce incorrect results, potentially leading to incorrectly adjusting the DUT or missing a signal that is actually present.

Picking A Detector

Selecting the proper detector for a spectrum analyzer is a simple enough task when some general rules are followed. A sample detector, for example, provides a single sample for each trace point on the analyzer display. If the display is set for 1001 trace points (#Pnts), the trace points will be evenly spaced across the frequency span of the instrument, with each point representing a single sample. The frequency interval between trace points is the span divided by the number of trace points minus one, or SPAN/(#Pnts-1). A sample detector is effective for measuring noiselike signals.

1. The yellow trace uses sample detection in a wide span and a narrow RBW, causing signals to be missed that are detected with the blue trace using peak detection.

When measuring continuous-wave (CW) signals, however, the analyzer’s resolution bandwidth (RBW) must be set wider than the trace interval. If the RBW is too narrow, a CW signal amplitude measured with a sample detector may appear too low or be missed altogether (Fig. 1). Most spectrum analyzers will automatically select the sample detector when trace averaging is applied, so it is possible to unknowingly be using the sample detector while measuring CW signals.
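To make the trace-interval arithmetic concrete, here is a short Python sketch; the span, point count, and RBW are illustrative values chosen for the example, not settings from any particular instrument:

```python
# Illustrative sweep setup (hypothetical values)
span_hz = 1_000_000_000      # 1-GHz span
n_points = 1001              # trace points (#Pnts)

# Frequency interval between adjacent trace points: SPAN / (#Pnts - 1)
trace_interval_hz = span_hz / (n_points - 1)
print(trace_interval_hz)     # 1 MHz between trace points

# With a 10-kHz RBW, each trace point "sees" only a narrow slice of its
# 1-MHz interval, so a sample detector can miss a CW tone entirely.
rbw_hz = 10_000
fraction_seen = rbw_hz / trace_interval_hz
print(fraction_seen)         # 0.01 -> most of each interval is never sampled
```

This is why the guideline is to keep the RBW at least as wide as the trace interval when a sample (or average) detector is used on CW signals.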

In contrast, a peak detector maintains the highest amplitude value in each measurement interval and displays this value in the trace point. A peak detector is effective for measuring CW signals, but can provide incorrect levels when measuring noiselike signals, unless it is a “max hold” type measurement where the analyzer is being used to read worst-case maximum power.

An average detector averages the power between two trace points and displays the mean power that has been averaged on a linear scale, such as in milliwatts (mW). Such a detector is well suited for noiselike signals, but is also effective for correctly showing the amplitude of a CW signal, provided that the RBW is at least as wide as the trace interval. As with the sample detector, an average detector can show too low a reading for the amplitude of a CW signal if the RBW is set too narrow.

A normal detector, in most cases, is the default detector for a spectrum analyzer. A normal detector always shows the correct amplitude for a CW signal regardless of the RBW selected relative to the trace interval, and it is also effective when measuring noiselike signals. For a signal that both rises and falls within a measurement interval, it displays the peak value in odd-numbered trace points and the minimum value in even-numbered trace points. This causes the peak-to-peak excursion of a noiselike signal to be accurately represented on the analyzer’s display.

For trace intervals where a signal only rises or falls, the peak value will be displayed. This occurs when a CW signal is swept through the trace and the amplitude is retained. A normal detector should not be used when integrating noise power—such as for channel-power or adjacent-channel-power measurements—since the alternating peaks and minimums will improperly represent the distribution of power in a noiselike signal.

In general, unless there is certainty about the type of detector to use for a particular measurement, it is best to use the default detector selected by the spectrum analyzer. And if there is some uncertainty, the peak detector can be used for measuring CW signals and the average detector selected for noiselike signals.
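As a rough illustration of how these detectors reduce the many samples in one trace-point interval to a single displayed value, consider this Python sketch; the data are synthetic (exponentially distributed "noise" power plus one narrow burst standing in for a CW tone between sample instants), not a real IF capture:

```python
import numpy as np

# 100 linear-power samples falling inside one trace-point interval ("bucket")
rng = np.random.default_rng(0)
bucket = rng.exponential(scale=1.0, size=100)   # noiselike, mean power ~1 mW
bucket[37] = 50.0                               # brief CW burst between sample instants

sample_det = bucket[-1]     # sample detector: a single sample per trace point
peak_det   = bucket.max()   # peak detector: highest value in the interval
avg_det    = bucket.mean()  # average detector: mean power on a linear scale

# The peak detector catches the burst; the sample detector can miss it,
# and the average detector dilutes it across the whole interval.
assert peak_det == 50.0
assert sample_det < peak_det
assert avg_det < peak_det
```

The same trade-off drives the rule of thumb above: peak detection for CW signals, average detection for noiselike signals.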


Averaging Out


The second common mistake connected with a spectrum analyzer is using the wrong averaging type. Most spectrum analyzers offer a choice of log-video or power (RMS) averaging. Log-video averaging is performed on a logarithmic scale, which causes a noiselike signal, such as the noise floor of the analyzer or a wideband code-division-multiple-access (WCDMA) signal, to be measured as much as 2.51 dB below its actual level. Log-video averaging does not, however, affect the measured level of a CW signal. For this reason, it can be beneficial when measuring a CW signal close to the noise floor of the spectrum analyzer: log-video averaging lowers the displayed noise floor and thereby improves the effective signal-to-noise ratio (SNR) of the measurement (Fig. 2).

2. The WCDMA signal is averaged in the yellow trace using log-video averaging, resulting in a -2.5-dB error compared to the same signal correctly averaged using power (RMS) averaging in the blue trace.

In most cases, when measuring noiselike signals, power (RMS) averaging should be used if averaging is being applied. Averaging could be simply trace averaging, or else averaging caused by reducing the analyzer’s video bandwidth (VBW) to less than the RBW. In general, log-video averaging is best suited for CW signals and power (RMS) averaging for noiselike signals.
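The roughly 2.51-dB offset described above can be reproduced numerically. In this sketch, noiselike power is modeled as exponentially distributed samples (an illustrative model, not instrument data):

```python
import numpy as np

rng = np.random.default_rng(1)
p_mw = rng.exponential(scale=1.0, size=1_000_000)   # linear power samples, mean 1 mW

power_avg_dbm = 10 * np.log10(p_mw.mean())          # power (RMS) averaging: ~0 dBm
log_avg_dbm = (10 * np.log10(p_mw)).mean()          # log-video averaging

offset_db = power_avg_dbm - log_avg_dbm             # ~2.51 dB for noiselike signals
print(round(offset_db, 2))

# For a CW (constant-power) signal, the two averaging types agree exactly
cw = np.full(1000, 1.0)
assert abs((10 * np.log10(cw)).mean() - 10 * np.log10(cw.mean())) < 1e-9
```

Averaging the logarithm under-weights the high-power excursions of a noiselike signal, which is exactly why power (RMS) averaging is the correct choice for such signals.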

The third mistake for spectrum-analyzer users is to measure the distortion products generated by the instrument rather than those of the DUT. The distortion products of interest for a DUT might be third-order-intercept (TOI) products, adjacent-channel power (ACP), or harmonics. The relative amplitude of these distortion products is normally related to the level of the input signal being fed to the DUT.

Unfortunately, a spectrum analyzer may also be capable of generating distortion products when handling an input signal with sufficient power. In such a case, it is possible for the analyzer’s internal distortion products to constructively or destructively sum with the distortion products from the DUT, causing incorrect results.

Internally generated distortion products are a function of the mixer level in the spectrum analyzer. The signal level at the mixer can be reduced by increasing internal or external attenuation. Attenuation should be increased to the point where the relative level of the distortion product no longer changes; that attenuator setting ensures that distortion measurements are being performed on the DUT alone, and not on the combination of the DUT and analyzer.
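The "step the attenuator until the reading settles" procedure can be sketched with a toy model. All levels here are hypothetical: the DUT's distortion product sits at a fixed -60 dBc, while the analyzer's own product improves 2 dB (relative) for every 1 dB of added input attenuation, and the two distortion powers are simply summed:

```python
import numpy as np

def measured_dbc(atten_db):
    dut_dbc = -60.0                           # DUT distortion, fixed relative level
    internal_dbc = -40.0 - 2.0 * atten_db     # analyzer distortion vs. attenuation
    # simple power-addition model of the two contributions
    total = 10**(dut_dbc / 10) + 10**(internal_dbc / 10)
    return 10 * np.log10(total)

# Increase attenuation until one more 1-dB step changes the reading by <0.1 dB
atten = 0.0
while abs(measured_dbc(atten + 1) - measured_dbc(atten)) >= 0.1:
    atten += 1.0

print(atten, round(float(measured_dbc(atten)), 1))
```

Once the reading stops moving with attenuation, it reflects the DUT alone (here, close to the modeled -60 dBc).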

The analyzer’s mixer also plays a part in the fourth common mistake for spectrum-analyzer measurements: using an incorrect mixer level when performing error-vector-magnitude (EVM) measurements. Such EVM measurements are made using the vector-signal-analysis (VSA) capabilities within a spectrum analyzer. In this mode, the signal under test is downconverted directly to the analog-to-digital converter (ADC) in the signal analyzer.

In most cases, the analyzer selects an appropriate bandwidth automatically, but the mixer level may still not be optimized for the measurement. A mixer level that is too low or too high can degrade EVM performance.

To optimize an EVM measurement with a spectrum analyzer, the input attenuation should be reduced until an ADC overload condition is met; the attenuation is then increased until the overload condition is resolved. At this level of attenuation, the full range of the ADC is being effectively used. Reaching the optimum level may require turning on preamplifiers or adding additional gain to the system for low-level signals.
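The attenuation-stepping procedure above can be sketched as a simple loop. The `adc_overloaded` function is a stand-in for querying the analyzer's ADC-overload status, and the signal level, threshold, and step size are all hypothetical:

```python
def adc_overloaded(atten_db, signal_dbm=5.0):
    # Toy model: the ADC overloads when the post-attenuator level exceeds -8 dBm
    return (signal_dbm - atten_db) > -8.0

def optimize_attenuation(start_db=20.0, step_db=2.0):
    atten = start_db
    # Step 1: reduce attenuation until the ADC just overloads
    while not adc_overloaded(atten) and atten > 0:
        atten -= step_db
    # Step 2: back the attenuation off until the overload clears
    while adc_overloaded(atten):
        atten += step_db
    return atten

print(optimize_attenuation())   # smallest attenuation with no ADC overload
```

The returned setting is the lowest attenuation that does not overload the ADC, which uses the converter's full range; if the loop bottoms out at 0 dB without ever overloading, that is the low-level case where a preamplifier or extra gain is needed.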

Another common analyzer mistake is not using single-sweep mode when the instrument is under remote control. It seems intuitive that a measurement that is continuously sweeping must run faster. But under remote control, a spectrum analyzer will actually run slower in continuous-sweep than in single-sweep mode: when an INITIATE command is sent, the instrument must abort the sweep in progress and then re-initiate the requested measurement. In most cases, it is best to keep the instrument in single-sweep mode and explicitly initiate each measurement to maintain speed and synchronization.

The sixth common analyzer mistake is not synchronizing measurements with the “operation complete” flag (*OPC). Automating signal analysis measurements can be confusing and, at times, incorrect results can occur. Some operators may add a “sleep” statement to delay their programming code to reduce the frequency of the error or resolve it altogether. But the error may be the result of a synchronization error, and synchronization can be maintained by using the operation complete flag that indicates when a measurement or sweep is complete.

Programmatically, the code should be:

INIT:CONT OFF    (set the measurement to single sweep)

INITIATE         (initiate the measurement)

*OPC?            (request that a “1” be returned after the measurement completes)

(read the “1” that is returned when the sweep or measurement is complete)

FETCH?           (fetch the measurement results or place a marker on the trace)
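Driven from Python, the same sequence looks like the following. A minimal fake session object stands in for a real instrument connection here so the command ordering is explicit; with a real analyzer, the `write`/`query` calls would go to the instrument instead:

```python
# Fake VISA-style session: records commands and answers queries
class FakeSession:
    def __init__(self):
        self.log = []
    def write(self, cmd):
        self.log.append(cmd)
    def query(self, cmd):
        self.log.append(cmd)
        return "1" if cmd == "*OPC?" else "<data>"

inst = FakeSession()
inst.write("INIT:CONT OFF")     # set the measurement to single sweep
inst.write("INITIATE")          # initiate the measurement
done = inst.query("*OPC?")      # blocks until the sweep/measurement completes
assert done == "1"
results = inst.query("FETCH?")  # fetch results only after *OPC? has returned
```

The key point is the ordering: results are fetched only after the `*OPC?` query returns, so no fixed "sleep" delays are needed.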

The seventh common mistake is failing to turn the display off and use binary data types when trying to gain speed when transferring data. In almost all cases, when attempting to maximize the throughput of a test, the display should be turned off and binary data used to reduce the amount of data that is transferred. The following commands can improve throughput significantly:

INIT:CONT OFF          (set the measurement to single sweep)

FORMAT:DATA REAL,32    (set the data results to binary-block real-32 data)

DISPLAY:ENABLE OFF     (turn off the display)
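With FORMAT:DATA REAL,32 in effect, trace data arrives as an IEEE 488.2 definite-length binary block: a `#` character, one digit giving the width of the length field, the byte count, then the raw 32-bit floats. Here is a parsing sketch against a synthetic payload (real instruments may send big-endian data, so check the byte-order setting):

```python
import struct

# Build a synthetic block: 3 little-endian 32-bit floats
values = (1.5, -2.25, 0.5)
payload = struct.pack("<3f", *values)
block = b"#2" + str(len(payload)).encode() + payload  # '2' = digits in length field

def parse_block(raw):
    assert raw[0:1] == b"#"                  # block header marker
    n_digits = int(raw[1:2])                 # width of the length field
    length = int(raw[2:2 + n_digits])        # payload byte count
    data = raw[2 + n_digits:2 + n_digits + length]
    return struct.unpack(f"<{length // 4}f", data)

print(parse_block(block))   # recovers (1.5, -2.25, 0.5)
```

Compared with ASCII trace transfers, the binary block moves 4 bytes per point instead of a dozen or more characters, which is where the throughput gain comes from.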

The last of the eight errors (feeding too much power into the input port) can be the most expensive of them all. The damage level of most spectrum analyzers is about 1 W, or +30 dBm. Anyone who has connected a signal to the input port of a spectrum analyzer, watched spurious signals dance across the screen, and then watched the screen go blank knows the sick feeling of realizing they have channeled 5 W into a relatively new instrument and overloaded its front-end electronics. When working with signals known to exceed the rated damage level of the instrument, a limiter at the input can save a lot of time and money in the long run.

Bob Nelson, Product Support Engineer

Agilent Technologies, Inc., 1400 Fountaingrove Pkwy., Santa Rosa, CA 95403; (707) 577-2663, (877) 424-4536

About the Author

Bob Nelson | MXA (N9020A) Product Support Engineer

Bob Nelson is Agilent Technologies’ MXA (N9020A) Product Support Engineer. He has spent the last 14 years with the company, supporting the Agilent field organization and customers with application-focused measurement requirements. Nelson holds a degree in Electrical and Electronic Engineering from California State University, Chico.
