Achieving Amplitude Accuracy In Modern Spectrum Analyzers
Spectrum analyzers are among the most versatile of RF/microwave measurement tools, with signal power among the most common measurements made with them. Traditionally, the combination of a power meter and power sensor has been the measurement tool of choice for its well-characterized traceability path back to reference standards at national standards laboratories. But modern spectrum analyzers have made dramatic improvements in amplitude accuracy, with performance approaching (but not exceeding) that of the power meter and sensor. Understanding the error terms associated with a spectrum analyzer's relative and absolute amplitude accuracies can help an engineer interpret the analyzer's specifications when selecting a measurement tool and balancing price/performance tradeoffs.
The earliest spectrum analyzers were fully analog, and even an operator's skill in reading and recording the measurement results impacted amplitude accuracy. The first digital-display spectrum analyzers were introduced in the 1970s, with the HP 8566A and HP 8568A models from Hewlett-Packard Co. (now Agilent Technologies) among the most accurate and popular instruments of that time. They featured digital displays and digital marker readouts for improved accuracy.
The signal at the end of the analog chain was the filtered (and unipolar) result of detecting the intermediate-frequency (IF) signal level, and therefore directly proportional to the amplitude of the signal in the selected resolution bandwidth (Fig. 1). It was called the "video" signal because it had driven the Y-axis video deflection plates in earlier all-analog spectrum analyzers. As the block diagram shows, the analog signal was processed by an analog-to-digital converter (ADC) for storage and display without relying on cathode-ray-tube (CRT) persistence, allowing for communication with remote users. Because of this architecture, and the fact that a user no longer needed to interpret the results on the display, improved accuracy was possible compared to the all-analog spectrum-analyzer architecture.
Unfortunately, the improved architecture still suffered from gain (amplitude) drift in the IF circuitry. Frequency stability was also relatively low, with some drift in the center frequency and filter bandwidths. The logarithmic amplifier that enabled decibel-scaled displays suffered considerable errors as well.
When the HP 8560A spectrum analyzer was introduced in 1989, it marked the first general-purpose swept-frequency spectrum analyzer in which the ADC moved forward in the signal-processing chain, digitizing the IF signal rather than the detected magnitude (the video signal). In this instrument, filtering, detection, and logarithmic conversion were performed digitally, but only in the narrowest resolution bandwidths (1 Hz through 300 Hz), and only with fast-Fourier-transform (FFT) processing.
Advances in ADC and signal-processing technology during the 1990s eventually brought an all-digital IF structure to some swept spectrum analyzers, beginning with those at the high end of price and performance. For example, the Agilent PSA series spectrum analyzers, introduced in late 2000, included digital processing for all resolution bandwidths. Digital signal processing (DSP) provided 160 choices for resolution bandwidth (1 Hz through 8 MHz) in swept and FFT analysis modes. These digital advances have recently been provided more economically in analyzers such as the Agilent X-series (MXA in 2006 and EXA in 2007).
Although the consistency of all-digital processing has improved IF specifications by an order of magnitude compared to the previous generation of analog instruments, the RF signal path in a spectrum analyzer still has gains that drift with time and temperature. Fortunately, these effects can be minimized, and the amplitude accuracy further improved, with an internal 50-MHz amplitude reference.
The primary driver of improvement in spectrum-analyzer accuracy has been the all-digital IF. Another driver is background alignment: the RF and especially the IF analog-signal-processing elements can be characterized regularly with reference signals, such as during retrace (the time during which the LO resettles after a sweep in preparation for the next sweep).
There are two kinds of amplitude accuracy: absolute and relative. The distinction can seem confusing because, despite its name, absolute accuracy is also relative. Absolute accuracy is the accuracy relative to a standard kept by a national standards laboratory, such as the National Institute of Standards and Technology (NIST), formerly the National Bureau of Standards (NBS). Relative accuracy is the accuracy of the ratio of two measurements, irrespective of individual accuracy traceability to NIST.
Absolute accuracy is useful when measuring devices against established requirements. For example, an RF power amplifier might be specified for a set of error requirements when delivering a certain amount of absolute power. Relative accuracy is often an excellent substitute for absolute accuracy. For example, in a test system with cables and switches, the absolute accuracy at a system port must be characterized due to the losses in the system, making the absolute accuracy of the spectrum analyzer itself moot, and the stability of the analyzer response the essential characteristic.
The accuracy of a spectrum analyzer is at its best when the signal being measured is at the same level and frequency as the analyzer's built-in amplitude reference oscillator, often called the calibrator. The accuracy can be optimized by performing both sets of measurements under the same reference conditions, i.e., with the same instrument settings.
For example, when new measurements are made at a different signal level, the change in response relative to the reference condition is called "scale fidelity" error. Other settings whose changes introduce error are the RBW (with errors referred to as "RBW switching uncertainty"), input attenuation (attenuator switching uncertainty), reference level (reference-level accuracy or IF-gain uncertainty), and display scale (display-scale switching uncertainty).
Reference-level and display-scale uncertainties can be rendered zero with an all-digital IF. Another feature of some all-digital IF spectrum analyzer designs is that the scale fidelity can be made dependent on the level at the input mixer (input power minus attenuation) and independent of the reference level. This allows accuracy to be independent of display conveniences and user preferences.
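To see how such terms stack up, consider a minimal Python sketch of a relative-accuracy budget. The individual values are hypothetical placeholders rather than specifications of any real instrument; note how the reference-level and display-scale terms drop out with an all-digital IF, and how a root-sum-square (RSS) estimate compares with the worst-case sum (a comparison revisited later in this article):

```python
import math

# Hypothetical relative-accuracy error budget, in dB (illustrative values only)
error_terms_db = {
    "scale_fidelity": 0.07,
    "rbw_switching": 0.03,
    "attenuator_switching": 0.18,
    "reference_level": 0.00,   # rendered zero by an all-digital IF
    "display_scale": 0.00,     # rendered zero by an all-digital IF
}

worst_case_db = sum(abs(v) for v in error_terms_db.values())
rss_db = math.sqrt(sum(v * v for v in error_terms_db.values()))

print(f"Worst-case sum: {worst_case_db:.2f} dB")
print(f"RSS estimate:   {rss_db:.2f} dB")
```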
One challenge in achieving excellent absolute amplitude accuracy lies in tracing the results back to a reference standard, such as that maintained by NIST. That traceability is closely tied to the capabilities of RF/microwave power meters and power sensors. Understanding the traceability of an RF power meter and its sensors will help clarify the limitations of spectrum analyzers when striving for absolute amplitude accuracy.
Production environments typically contain a number of RF/microwave power meters and sensors that are used for power measurements and are regularly calibrated. In such an environment, calibration consists of comparing the results of the power sensor to another measuring device of known accuracy. This other device might reside in the production facility's metrology or calibration laboratory, or calibrations might be performed by another organization. The power sensor is calibrated against a reference standard or "transfer standard," which itself is calibrated against a NIST standard, a primary standard. Each step away from the NIST standard usually involves less time-consuming (and less expensive) calibration practices; the tradeoff of each extra step is added uncertainty. In contrast, regularly calibrating a spectrum analyzer in this way, with its higher capital value, weight, and size, is impractical. As a result, spectrum analyzers are typically calibrated against power meters and sensors, although this adds one level of removal to the uncertainty of the traceability.
When a power meter/sensor combination is used in the field, it should be regularly calibrated, such as by using the meter's built-in reference amplitude calibrator. This reference source is a 50-MHz oscillator with a 0-dBm output level and excellent frequency and amplitude stability, typically drifting less than 0.04 dB at the temperature extremes of 0°C and +55°C.
A spectrum analyzer also contains a reference oscillator, and this is the best-calibrated part of the instrument. In the instrument's production process, the amplitude of the spectrum analyzer's local reference is adjusted to best match its specified nominal value. In the PSA, MXA, and EXA spectrum analyzers from Agilent Technologies, for example, the reference calibrators are set at 50 MHz and -25 dBm.
Although the 50-MHz reference frequency is not widely used for spectrum analyzers, it offers advantages in terms of accuracy. For example, the oscillator can ultimately be compared against another 50-MHz reference using a power meter and sensor, which are also based on a 50-MHz reference. With matching frequencies, the uncertainty-versus-frequency of the sensor does not apply, improving accuracy.
The choice of a -25-dBm spectrum-analyzer reference amplitude, while it does not match the 0 dBm of the power meter's calibrator, is better suited to the superheterodyne mixing scheme of the spectrum analyzer, where the maximum acceptable level at the mixer (the input level minus the input attenuation) for minimal spurious generation is typically -10 dBm. A spectrum analyzer's RF and IF circuits will tend to exhibit small amounts of compression (about 0.01 to 0.06 dB) at this level, so it is desirable to keep the level at the mixer well below -10 dBm. Because the reference setting of the input attenuator is 10 dB, the ideal calibrator level is well below 0 dBm. It must also be high enough to give an excellent signal-to-noise ratio in the reference condition; -25 dBm fits both requirements.
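The arithmetic behind this choice is simple enough to check directly. The following sketch (the function name and values follow the discussion above, not any published instrument design) shows why a -25-dBm calibrator stays comfortably below the compression region while a 0-dBm source would not:

```python
def mixer_level_dbm(input_level_dbm: float, attenuation_db: float) -> float:
    """Level at the input mixer: input level minus input attenuation."""
    return input_level_dbm - attenuation_db

# Calibrator at -25 dBm with the 10-dB reference attenuator setting:
print(mixer_level_dbm(-25.0, 10.0))  # -35 dBm, well below the -10-dBm limit

# A 0-dBm calibrator (the power-meter convention) would land right at the
# level where small compression errors (about 0.01 to 0.06 dB) appear:
print(mixer_level_dbm(0.0, 10.0))    # -10 dBm
```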
Some spectrum analyzers provide a front-panel signal output labeled "calibrator," while in other instruments the calibration signal remains internal. The convenience of an internal calibration source is considerable, since a user need not disconnect a signal under test to calibrate the analyzer; the analyzer can even use the calibrator as part of its own internal alignments. Also, the uncertainty in connector and cable loss does not contribute to the analyzer uncertainty. But there are disadvantages to an internal calibrator: it must be adjusted using RF substitution methods instead of direct measurement, and the repeatability of the switch used to control it adds uncertainty to the calibration process. Fortunately, that switch uses the same technology as the many switches used in every setting of the attenuator, so this disadvantage is small compared to other uncertainties.
The high-frequency industry has not yet agreed upon a definition of absolute amplitude accuracy for an RF/microwave spectrum analyzer. At the very least, the accuracy of the RF calibrator can be equated to the absolute amplitude accuracy of the spectrum analyzer. A more inclusive definition might cover the accuracy of the measurement of a single level at a single frequency. The most inclusive (and therefore most widely applicable) definitions include a range of signal levels, a range of signal frequencies, and a range of measurement settings.
Two substantially independent terms tend to contribute to spectrum-analyzer measurement error to approximately equal extents: the accuracy at the reference (calibrator) frequency (which, for convenience, will be called AbsAmp@50 after the 50-MHz calibrator) and the RF flatness relative to that frequency. The accuracy at the calibrator frequency includes a number of terms already mentioned, such as scale fidelity, RBW switching uncertainty, reference-level accuracy, and display-scale switching uncertainty. This part of the accuracy also contains the effects of calibrator accuracy, including aging, and the accuracy with which the spectrum analyzer aligns its gain to the calibrator level. In addition, variations in accuracy with environmental conditions (such as ambient temperature) and the uncertainty of the equipment used to verify the spectrum analyzer's specifications will impact the analyzer's accuracy.
All these effects are combined in two ways: as a warranted specification and as a statistical specification. In the case of a warranted specification, it should be noted that in the highest-performance spectrum analyzers with all-digital IFs, the errors are small and occur randomly, so they can be expected to combine to less than their worst-case total. The author has tested AbsAmp@50 using a quasi-random assortment of 44 test conditions, including a variety of signal levels, RBWs, reference levels, display scales, spans, and FFT-versus-swept choices. All of these measurement points are tested against a "test line limit" (TLL). The TLL is computed from the warranted specification by subtracting the "delta environmental" and "measurement uncertainty" effects.
The delta environmental is determined by observing the change in performance of a small number of pilot instruments across the specified temperature and humidity range. When AbsAmp@50 is specified over a narrower range, the delta environmental guardband is computed assuming no-worse-than-linear variations with temperature. The measurement uncertainty is the computed uncertainty of the external equipment (power meters, power sensors, bridges, VSWR effects) used in the test. By industry practice and ISO standards, this computation is performed using a 95-percent confidence interval.
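The TLL computation, including the linear temperature scaling of the environmental guardband, can be expressed compactly. A sketch with hypothetical values (no number here represents an actual specification):

```python
def test_line_limit(warranted_spec_db: float,
                    delta_env_db: float,
                    meas_uncertainty_db: float) -> float:
    """TLL = warranted specification minus the two guardbands."""
    return warranted_spec_db - delta_env_db - meas_uncertainty_db

# Delta environmental scaled to a narrower specified temperature range,
# assuming no-worse-than-linear variation with temperature:
delta_env_full_db = 0.10                          # full-range value (hypothetical)
delta_env_db = delta_env_full_db * (10.0 / 35.0)  # e.g., a 10-degree subrange

tll_db = test_line_limit(0.40, delta_env_db, 0.15)
print(f"TLL = {tll_db:.3f} dB")
```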
The other large contributor to spectrum analyzer error is RF flatness. RF flatness relative to 50 MHz is set through the process of adjusting and verifying spectrum analyzer performance versus RF/microwave power meters and sensors. The analyzer's response is measured for a signal at 50 MHz and then for the uncorrected response at a frequency to be adjusted. The response ratio is then compared to that observed with a calibrated power meter and sensor combination. The response ratio is stored in the analyzer's memory and applied to all measured results, with interpolations made at frequencies between the adjustment points. This is followed by a verification process in which the results of the amplitude-corrected spectrum analyzer are compared to results from a different power meter and sensor at another test station. The verification frequencies are chosen between, rather than at, the adjustment frequencies. The verification results are tested against the test line limit. As in the AbsAmp@50 case, a guardband is added for delta environmental effects and measurement uncertainty. Worst-case absolute amplitude accuracy is the sum of AbsAmp@50 accuracy and flatness relative to 50 MHz.
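The stored flatness corrections behave as a lookup table with interpolation between adjustment points. A minimal sketch of that behavior, using linear interpolation and invented adjustment data (an actual instrument may store and interpolate its corrections differently):

```python
import numpy as np

# Hypothetical adjustment points: frequency (Hz) and the measured response
# ratio relative to 50 MHz (dB), captured against a power meter and sensor.
adj_freqs_hz = np.array([50e6, 500e6, 1e9, 2e9, 3e9])
adj_corr_db = np.array([0.00, 0.04, -0.02, 0.07, 0.05])

def flatness_correction_db(freq_hz: float) -> float:
    """Interpolate the stored correction between adjustment points."""
    return float(np.interp(freq_hz, adj_freqs_hz, adj_corr_db))

# Verification frequencies are chosen between, not at, the adjustment points:
print(f"{flatness_correction_db(750e6):+.3f} dB at 750 MHz")
```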
Modern spectrum analyzers often provide frequency coverage into the high microwave region, with multiple LOs and harmonic mixing used to extend the frequency range (Fig. 2). The lowest of these bands is Band 0 (often informally called "low band") and includes frequency upconversion as the first mixing operation. The highest accuracy occurs in low band, because frequency upconversion allows image rejection without an additional yttrium-iron-garnet (YIG) bandpass filter to reject unwanted signal components. These YIG filters, which are commonly used at high-band frequencies, introduce instabilities that increase absolute amplitude uncertainties to levels well above the capabilities of a power meter and sensor. As a result, this report will focus on spectrum-analyzer low-band amplitude accuracy.
There are good reasons for statistical specifications in place of warranted specifications. A statistical specification, such as the 95-percent interval with 95-percent confidence, is the standard for ISO-compliant manufacturing processes. Such specifications are tighter (the error bands are narrower) than warranted specifications. They are good for comparing different instruments from a single manufacturer and could be good for such comparisons between different manufacturers. They represent the performance of the instrument without distraction from the tradeoffs between yield and specification tightness. One disadvantage is that no individual instrument is warranted by the manufacturer to be within such a specification; in fact, almost 5 percent of all instruments produced are expected to be outside the specification.
In this report, a 95/95 specification refers to 95-percent coverage (95 percent of all produced instruments fall within the interval) with 95-percent confidence. If the sample size is infinitely large, 95 percent of the instruments will fit within an interval that is 1.96 times as wide as the standard deviation. If the number of instruments used to set the 95/95 specification is smaller, the estimate of the standard deviation is itself subject to uncertainty, and the multiplier of the estimated standard deviation (called the K factor) must be increased beyond 1.96 to state with 95-percent confidence that the resulting claim of 95-percent coverage is accurate. Hence the pairing of the terms "95-percent interval" and "95-percent confidence" in this specification.
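One common way to compute such a K factor is Howe's approximation for two-sided normal tolerance intervals, sketched below. The use of this particular approximation is an assumption for illustration, not a statement of any manufacturer's procedure:

```python
import math
from scipy.stats import chi2, norm

def k_factor(n: int, coverage: float = 0.95, confidence: float = 0.95) -> float:
    """Two-sided normal tolerance factor via Howe's approximation."""
    z = norm.ppf((1.0 + coverage) / 2.0)        # 1.96 for 95-percent coverage
    dof = n - 1
    chi2_low = chi2.ppf(1.0 - confidence, dof)  # lower chi-squared quantile
    return math.sqrt(dof * (1.0 + 1.0 / n) * z * z / chi2_low)

for n in (10, 30, 100, 1000):
    print(f"n = {n:4d}: K = {k_factor(n):.3f}")
# K approaches 1.96 as the sample grows; small samples demand a larger multiplier.
```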
For absolute accuracy, the 95/95 specification is computed with a combination of straight additions and root-sum-square (RSS) computations of the many error contributors, as follows. AbsAmp@50 measurements have a Gaussian distribution. They are independent of measurements of RF flatness relative to 50 MHz. Therefore, it is possible to evaluate the standard deviations of each collection and RSS them to find the standard deviation of the combination.
The effect of measurement uncertainty on the 95/95 specification deserves explanation. As discussed earlier, measurement uncertainty is added directly to observed performance in verifying conformance with warranted specifications. Consider two cases in deciding how to treat measurement uncertainty in 95/95 specifications. In the first case, such as scale fidelity, the specification is a measure of performance that is not adjusted in response to testing in any way. The measurement uncertainty in such a case actually acts to spread the observed results. Any statement of the 95th percentile performance in such a case is already conservative because of the measurement uncertainty; it should not be further combined with the observed performance.
The other case can be exemplified by the accuracy of the setting of the RF calibrator. A spectrum analyzer production line may be able to achieve a small spread on the observed level of the calibrator when it is verified with the same device used to adjust it. In such a case, the measurement uncertainty should be RSSed with the observed data spread to accurately estimate the statistical distribution. The measurement uncertainty in this case could even be called the calibration uncertainty. This second case describes the observation of AbsAmp@50 quite well, so that measurement uncertainty should be combined.
The RF flatness relative to 50 MHz acts like a combination of the first and second cases. Because the flatness is adjusted on one test station and verified on another, the measurement uncertainty of the second station appears in the spread of the data. But some of the measurement uncertainty (the part that represents the traceability of the power sensors to NIST through a calibration laboratory) is common to both stations and is not seen in the spread of the data. As a conservative assumption, all the computed measurement uncertainty is modeled as though it were "traceability uncertainty." It is then RSSed with the other contributors (Fig. 3).
Figure 4(a) shows the statistical combination of AbsAmp@50 with frequency response relative to 50 MHz. As noted, multiplying an RSS combination of error contributors by a K factor of about 1.98 transforms a standard deviation into a 95-percent interval with 95-percent confidence (95/95). The 95-percent measurement uncertainty for the AbsAmp@50 verification can be combined with the 95-percent measurement uncertainty for the RF flatness verification and with the 95/95 value by means of an RSS computation to achieve an intermediate uncertainty result. It should be noted that the distribution of errors is not zero-mean: the mean errors found in AbsAmp@50 and in the RF flatness are summed, and the absolute value of this sum is added to the intermediate result. Figure 4(b) shows the combination of observed performance with calibration uncertainties.
The delta environment term describes how the performance changes over a "laboratory environment" of ±5°C. It cannot be assumed that the ambient temperature is a Gaussian random variable. As a result, it is prudent to be conservative and directly add the absolute value of the delta environment to the intermediate result, making the 95/95 statement equally applicable at any ambient temperature in the range of +20°C to +30°C.
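Putting the pieces of the last few paragraphs together, the overall 95/95 combination can be sketched as follows. Every number here is a hypothetical placeholder; only the structure (RSS of independent Gaussian terms, RSS of the calibration uncertainties, direct addition of the mean errors and the environmental term) follows the procedure described above:

```python
import math

sigma_absamp50_db = 0.05        # std dev of AbsAmp@50 observations (hypothetical)
sigma_flatness_db = 0.06        # std dev of RF flatness relative to 50 MHz
mu_absamp50_db = 0.10           # 95-percent measurement uncertainty, AbsAmp@50
mu_flatness_db = 0.12           # 95-percent measurement uncertainty, flatness
mean_error_db = 0.02 + (-0.01)  # mean AbsAmp@50 error plus mean flatness error
delta_env_db = 0.05             # change over the laboratory temperature range
k = 1.98                        # K factor for 95/95 at the production sample size

# Independent Gaussian contributors: RSS the standard deviations, scale by K,
# then RSS in the two verification (calibration) uncertainties.
interval_db = k * math.hypot(sigma_absamp50_db, sigma_flatness_db)
intermediate_db = math.sqrt(interval_db**2 + mu_absamp50_db**2 + mu_flatness_db**2)

# Non-zero-mean errors and the environmental term are added directly.
total_9595_db = intermediate_db + abs(mean_error_db) + abs(delta_env_db)
print(f"95/95 absolute amplitude accuracy: +/-{total_9595_db:.3f} dB")
```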
Statistical accuracy (the 95/95 statement of absolute amplitude accuracy) can be improved by using more calibration points across the RF region. Adding more points increases the manufacturing cost by increasing test times, however. For spectrum analyzers produced by Agilent Technologies, the statistical and cost factors have yielded the relationships shown in the table and in Fig. 5.
Amplitude accuracy in spectrum analyzers has progressed to the point where it is almost as good as in power meters, allowing this single tool to be a more complete test solution and providing power measurements that are both highly accurate and frequency-selective.