RF Power Calibration Aids Wireless Transmitters

Feb. 13, 2008
The use of precise RF power measurements and calibration techniques can save operating costs and improve the spectral performance of wireless communications transmitters.

Wireless transmitters can benefit from measurement and control of RF power. Because of such factors as regulatory requirements and the need to co-exist with other wireless networks, the RF power levels of wireless transmitter high-power amplifiers (HPAs) must be monitored and controlled. The precision and accuracy of these measurements can result in improved transmitter spectral performance as well as significant savings in operating cost for an HPA.

Some form of factory calibration of the HPA's output power is usually performed as part of any scheme to regulate transmitted power. Calibration algorithms vary widely in their complexity and effectiveness. This article will focus on how a typical RF power control scheme is implemented and will compare the effectiveness and efficiency of various factory calibration algorithms.

Figure 1 shows a block diagram of a typical wireless transmitter with RF power measurement and control functionality. A small portion of the signal from the HPA is coupled off and fed to an RF detector for measurement. The coupler is located close to the antenna, after the duplexer and isolator, so their associated losses must be included as part of the calibration.

Depending upon the coupling factor, the signal from the directional coupler will be proportionately lower (such as 20 or 30 dB lower) than the signal going to the antenna. Coupling power in this manner results in some power loss in the transmit path, usually a few tenths of a dB depending upon the quality of the directional coupler. In wireless infrastructure applications where maximum transmitted power typically ranges from +30 to +50 dBm (1 to 100 W), the signal coming from the directional coupler will still be too large for the RF detector. As a result, additional attenuation is generally required between the coupler and the RF detector.
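
A rough level budget illustrates the point. The C snippet below is purely illustrative; the +46-dBm carrier, 30-dB coupling factor, and detector upper limit are assumed values, not taken from a specific design.

```c
/* Illustrative level budget: how much attenuation is needed between the
 * coupler and the detector. All values are example assumptions. */
#include <stdio.h>

int main(void)
{
    double p_antenna_dbm    = 46.0;  /* transmitted power at the antenna     */
    double coupling_db      = 30.0;  /* directional-coupler coupling factor  */
    double detector_max_dbm = -5.0;  /* top of the detector's linear range   */

    double p_coupled_dbm = p_antenna_dbm - coupling_db;      /* +16 dBm      */
    double pad_db        = p_coupled_dbm - detector_max_dbm; /* >= 21 dB pad */

    printf("Coupled signal: %.1f dBm, additional pad needed: >= %.1f dB\n",
           p_coupled_dbm, pad_db);
    return 0;
}
```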

Modern logarithmic-responding RF detectors (logamps) have a power-detection range between 30 and 100 dB and provide a temperature- and frequency-stable output. In most applications, the detector output is applied to an analog-to-digital converter (ADC) to be digitized. Using calibration coefficients stored in nonvolatile memory (EEPROM), the code from the ADC is converted into a transmitted power reading. This power reading is compared to a setpoint power level. If any discrepancy between the setpoint and the measured power is found, a power adjustment, which can be made at any one of a number of points in the signal chain, will take place. The amplitude of the baseband data driving the radio can be adjusted, a variable-gain amplifier (VGA) at RF or intermediate frequency (IF) can be adjusted, or the gain of the HPA can be changed. In this way, the gain-control loop regulates itself and maintains transmitted power within desired limits. It is important to note that the gain-control transfer functions of voltage-variable attenuators (VVAs) and HPAs are often quite nonlinear. As a result, the actual gain change resulting from a given gain adjustment is uncertain. This reinforces the need for a control loop that provides feedback on changes made and further guidance for subsequent iterations.
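
A minimal sketch of the ADC-code-to-power conversion is shown below. All constants are illustrative assumptions: in a real design, SLOPE and INTERCEPT come from the factory-calibration coefficients stored in EEPROM, and the path-loss term covers the coupling factor and pad between the antenna and the detector.

```c
/* Converting a detector ADC code into a transmitted-power reading, assuming
 * a log detector modeled by VOUT = SLOPE x (PIN - INTERCEPT).
 * All constants below are illustrative placeholders. */
#include <stdint.h>

#define ADC_VREF_V      2.5     /* ADC reference voltage (assumed)            */
#define ADC_BITS        12      /* ADC resolution (assumed)                   */
#define SLOPE_V_PER_DB  -0.025  /* detector slope, from calibration (EEPROM)  */
#define INTERCEPT_DBM   16.0    /* detector intercept, from calibration       */
#define PATH_LOSS_DB    40.0    /* coupler + pad between antenna and detector */

/* Returns the estimated power at the antenna connector in dBm. */
double adc_code_to_power_dbm(uint16_t adc_code)
{
    double vout  = (double)adc_code * ADC_VREF_V / (double)(1 << ADC_BITS);
    double p_det = vout / SLOPE_V_PER_DB + INTERCEPT_DBM; /* inverse of Eq. 1 */
    return p_det + PATH_LOSS_DB;                          /* refer to antenna */
}
```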

In the system of Fig. 1, few of the components are specified with tight absolute gain accuracy. The impact of this can be seen by targeting a transmit power error of 1 dB. The absolute gain of devices such as HPAs, VVAs, RF gain blocks, and other components in the signal chain will generally vary from device to device to such an extent that the resulting output power uncertainty is significantly greater than 1 dB. In addition, signal-chain gain will vary further with temperature and frequency. As a result, it is necessary to continually measure the power being transmitted.

Output power calibration could be defined as the transfer of the precision of an external reference into the system being calibrated. A calibration procedure generally involves disconnecting the antenna and connecting an external measurement reference, such as an RF power meter, in order to transfer or clone the meter's accuracy to the transmitter's integrated power detector. The calibration procedure involves setting one or more power levels, taking the reading from the power meter and the voltage from the RF detector, and storing all of this information in nonvolatile EEPROM. Using this stored information, the transmitter can precisely regulate its own power without the power meter connected. As parameters such as amplifier gain vs. temperature, transmit frequency, and desired output power level change, the (calibrated) onboard RF detector will act like a built-in power meter with an absolute accuracy that will ensure that the transmitter is always emitting the desired power within a defined tolerance.

The transfer-function linearity and stability over temperature and frequency of the system's RF detector strongly influence the complexity of the calibration routine and the achievable post-calibration accuracy. Figure 2 shows the transfer function of an RF logamp with behavior versus temperature exaggerated for illustrative purposes. Three curves are shown: output voltage versus input power at +25°C, +85°C, and −40°C. At +25°C, the output voltage of the detector ranges from around 1.8 V for an input power of −60 dBm to 0.4 V for an input power of 0 dBm. The transfer function closely follows a straight line that has been laid over the trace. The transfer function deviates from this straight line at the extremities, and there are also instances of nonlinear behavior at power levels between −10 and −5 dBm.

A quick calculation suggests that this detector has a slope of approximately 25 mV/dB: a 1-dB change in input power will result in a 25-mV change in output voltage. This slope is constant over the linear portion of the dynamic range. So, notwithstanding the slight nonlinearity that was identified at around −10 dBm, it can be concluded that the behavior of the transfer function at +25°C can be modeled using a simple equation in the form of Eq. 1:

VOUT = SLOPE × (PIN − INTERCEPT)   (1)

where

SLOPE = the change in detector output voltage for a 1-dB change in input power (in V/dB), and

INTERCEPT = the point at which the extrapolated straight-line fit crosses the x-axis of the plot.

From a calibration perspective, the simplicity of this equation is useful as it will allow the transfer function of the detector to be established by applying and measuring as few as two different power levels during the calibration procedure.
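
As a quick worked example using round numbers read from Fig. 2 (about 0.4 V at 0 dBm and a slope of −25 mV/dB), the intercept works out to roughly INTERCEPT = PIN − VOUT/SLOPE = 0 dBm − 0.4 V/(−0.025 V/dB) = +16 dBm. With SLOPE and INTERCEPT known, any detector voltage can be converted back to an input power using the inverse relationship PIN = VOUT/SLOPE + INTERCEPT.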

Consider the behavior of this imaginary detector in Fig. 2 over temperature. At an input power of −10 dBm, the output voltage changes by approximately 100 mV as the temperature shifts from room temperature to either −40°C or +85°C. Based on the earlier calculation of the detector's slope as being 25 mV/dB, this equates to a deviation in measured power of 4 dB, which is too much for most practical systems (real-world RF detectors typically have temperature drift of 0.5 dB or less). In practice, what is needed is a detector with a transfer function having minimal drift versus temperature. This ensures that a calibration procedure performed at ambient temperature remains valid over a wide range of operating temperatures, allowing the transmitter to be factory calibrated at ambient temperature and avoiding expensive and time-consuming calibration cycles at hot and cold temperatures.

If the transmitter is frequency-agile and needs to transmit at multiple frequencies within a defined band, the behavior of the detector as a function of frequency is also important. Ideally, the RF detector should exhibit a response that does not change significantly within that band. This makes it possible to calibrate the transmitter at a single frequency and be confident that there will be little or no loss of accuracy as the frequency changes.

Figure 3 shows the flow chart that would be used to calibrate a transmitter similar to that outlined in Fig. 1. This simple and quick two-point calibration allows power levels to be set approximately (but the levels must be measured precisely). Its effectiveness relies on the integrated RF detector being stable versus temperature and frequency, and having a predictable response that can be modeled using Eq. 1. The operating power range of the transmitter should also be compatible with the RF detector's linear operating range.

The calibration process begins by connecting a power meter to the antenna and setting a power level close to maximum. The power at the antenna connector is measured and sent to the transmitter's on-board microcontroller or digital signal processor (DSP). At the same time, the RF detector's output voltage is measured by an analog-to-digital converter (ADC) and its reading is provided to the transmitter's processor.

Next, the output power of the transmitter is reduced to a level that is close to minimum power and the procedure is repeated (measure power at antenna connector and sample RF detector ADC). With these four readings (low and high power level, low and high ADC code), the SLOPE and INTERCEPT can be calculated (see Fig. 3) and stored in nonvolatile memory.
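
A sketch of that calculation in C might look like the following. The measured power levels and detector voltages are placeholders for the readings that would actually come from the external power meter and the detector ADC.

```c
/* Two-point calibration per Fig. 3: compute SLOPE and INTERCEPT from two
 * (power, detector-voltage) pairs measured near maximum and minimum power.
 * The measured values in main() are placeholders. */
#include <stdio.h>

typedef struct {
    double slope_v_per_db;  /* detector slope     */
    double intercept_dbm;   /* detector intercept */
} cal_coeffs_t;

/* p_hi/p_lo: power-meter readings (dBm); v_hi/v_lo: detector voltages (V). */
cal_coeffs_t two_point_calibration(double p_hi, double v_hi,
                                   double p_lo, double v_lo)
{
    cal_coeffs_t c;
    c.slope_v_per_db = (v_hi - v_lo) / (p_hi - p_lo);
    c.intercept_dbm  = p_hi - v_hi / c.slope_v_per_db; /* from Eq. 1 */
    return c;  /* in a real transmitter, store these in EEPROM */
}

int main(void)
{
    /* Placeholder readings: -12 dBm -> 0.70 V and -52 dBm -> 1.70 V. */
    cal_coeffs_t c = two_point_calibration(-12.0, 0.70, -52.0, 1.70);
    printf("SLOPE = %.4f V/dB, INTERCEPT = %.1f dBm\n",
           c.slope_v_per_db, c.intercept_dbm); /* -0.0250 V/dB, +16.0 dBm */
    return 0;
}
```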

Figure 4 shows the flow chart that would be used to precisely set power in a transmitter after calibration. In this example, the goal is a transmit power error that is less than or equal to 0.5 dB. Initially, an output power level is set based on a best first guess. Next, the detector ADC is sampled. The SLOPE and INTERCEPT values are retrieved from memory and the transmitted output power level is calculated. If the output power is not within 0.5 dB of the set power level, PSET, the output power is incremented or decremented by approximately 0.5 dB using a voltage-variable attenuator (VVA). The term "approximately" is used here because it is likely that the VVA itself has a nonlinear transfer function. The transmitted power is again measured and further power increments are applied until the transmitted power error is less than 0.5 dB. Once the power level is within tolerance, it is continually monitored and adjusted if necessary (e.g., if a component in the signal chain has significant gain drift versus temperature).
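
A minimal C sketch of that loop is shown below. The helper functions (set_initial_power_guess(), measure_tx_power_dbm(), step_vva_db()) are hypothetical placeholders for the platform's own drivers, and the 0.5-dB step size and tolerance follow the example in the text.

```c
/* Sketch of the post-calibration power-setting loop of Fig. 4. The extern
 * functions are hypothetical placeholders for platform drivers. */
#include <math.h>

extern void   set_initial_power_guess(double dbm); /* open-loop first guess           */
extern double measure_tx_power_dbm(void);          /* ADC sample -> dBm via Eq. 1     */
extern void   step_vva_db(double delta_db);        /* nudge VVA gain by ~delta_db     */

#define TOLERANCE_DB 0.5

void set_power(double pset_dbm)
{
    set_initial_power_guess(pset_dbm);

    double error = pset_dbm - measure_tx_power_dbm();
    while (fabs(error) > TOLERANCE_DB) {
        /* Step roughly 0.5 dB in the direction that reduces the error; the
         * VVA transfer function is nonlinear, so re-measure each time. */
        step_vva_db(error > 0.0 ? +0.5 : -0.5);
        error = pset_dbm - measure_tx_power_dbm();
    }
    /* Once within tolerance, the same measure/compare/step cycle keeps
     * running to track gain drift over temperature. */
}
```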

Figures 5(a) through 5(d) show data from the same RF detector but with different choices and numbers of calibration points. Figure 5(a) shows the detector transfer function at 2.2 GHz for the model AD8318, a wide-dynamic-range RF logarithmic detector that operates to 8 GHz. In this case, the detector has been calibrated using a two-point calibration (at −12 and −52 dBm). Once calibration is complete, the residual measurement error can be plotted. Note that the error is not zero. This is because the logamp does not perfectly follow the ideal output voltage (VOUT) versus input power (PIN) equation, VOUT = SLOPE × (PIN − INTERCEPT), even within its operating region. The error at the calibration points will, however, be equal to zero by definition.
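
A sketch of how such an error curve can be computed is shown below; the function name and arguments are illustrative. For each applied power level, the measured detector voltage is converted back to power using the calibration coefficients, and the true (power-meter) level is subtracted.

```c
/* Residual post-calibration error at one power level: calculated power
 * (from the two-point SLOPE/INTERCEPT model) minus the true power. */
double residual_error_db(double v_measured, double p_true_dbm,
                         double slope_v_per_db, double intercept_dbm)
{
    double p_calculated = v_measured / slope_v_per_db + intercept_dbm;
    return p_calculated - p_true_dbm; /* 0 dB at the calibration points */
}
```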

Figure 5(a) also includes error plots for the output voltage at −40°C and +85°C. These error plots are calculated using the +25°C SLOPE and INTERCEPT calibration coefficients. Unless some kind of temperature-based calibration routine is implemented, it will be necessary to rely on the +25°C calibration coefficients and live with the slight residual temperature drift.

In many applications, it is desirable to have higher accuracy when the HPA is transmitting at maximum power. For one thing, there may be regulatory requirements that demand this higher level of accuracy at full or rated power. However, from a system design perspective, there is also value in increased accuracy at rated power. Consider a transmitter that is designed to transmit +45 dBm (approximately 30 W) of output power. If a calibration can provide at best 2-dB accuracy, then the HPA circuitry (power transistors and heat sinks) must be designed to safely transmit as much as +47 dBm, or 50 W, an expensive overdesign. But if the system can be designed for a post-calibration accuracy of 0.5 dB, the HPA need only be overdimensioned so that it can safely transmit +45.5 dBm, or approximately 36 W.
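
The watt figures follow directly from the dBm values quoted above, using the standard dBm-to-watt conversion P(W) = 10^((P(dBm) − 30)/10). A quick check:

```c
/* Overdesign arithmetic in watts from the dBm figures quoted in the text. */
#include <stdio.h>
#include <math.h>

static double dbm_to_watts(double dbm)
{
    return pow(10.0, (dbm - 30.0) / 10.0);
}

int main(void)
{
    printf("+45.0 dBm = %.1f W (nominal)\n",      dbm_to_watts(45.0)); /* ~31.6 W */
    printf("+47.0 dBm = %.1f W (2-dB margin)\n",  dbm_to_watts(47.0)); /* ~50.1 W */
    printf("+45.5 dBm = %.1f W (0.5-dB margin)\n", dbm_to_watts(45.5)); /* ~35.5 W */
    return 0;
}
```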

By changing the points at which calibration is performed, it is possible, in some cases, to greatly influence the achievable accuracy. Figure 5(b) shows the same measured data as Fig. 5(a), but with the calibration points chosen to give very high accuracy (about 0.25 dB) from −10 to −30 dBm.

Figure 5(c) shows how calibration points can be moved to increase dynamic range at the expense of linearity. In this case, the calibration points are −4 and −60 dBm, at the ends of the device's linear range. Once again, an error of 0 dB is apparent at the calibration points at +25°C. Notice also that the range over which the AD8318 maintains an error of less than 1 dB is extended to 60 dB at +25°C and 58 dB over temperature. The disadvantage of this approach is that the overall measurement error increases, especially, in this case, at the top end of the detector's range.

Figure 5(d) shows the post-calibration error using a more elaborate multipoint algorithm. In this case, multiple output power levels (separated by 6 dB in this example) are applied to the transmitter, and the detector's output voltage is measured at each power level. These measurements are used to break the transfer function down into segments, with each segment having its own SLOPE and INTERCEPT. This algorithm tends to greatly reduce errors due to detector nonlinearity, leaving temperature drift as the main source of error. The disadvantage of this approach is that the calibration procedure takes longer, and more memory is required to store the multiple SLOPE and INTERCEPT calibration coefficients.
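
One possible sketch of such a piecewise-linear scheme in C is shown below. The table values are placeholders (calibration levels spaced 6 dB apart, as in the text), and the segment search assumes a detector with a negative slope, i.e., output voltage rises as input power falls.

```c
/* Multipoint (piecewise-linear) calibration sketch: each pair of adjacent
 * calibration points defines its own SLOPE and INTERCEPT. Table values are
 * illustrative placeholders. */
#define NUM_CAL_POINTS 5

/* Detector voltages recorded at each calibration power level. */
static const double cal_v[NUM_CAL_POINTS] = { 0.55,  0.70,  0.85,  1.00,  1.15};
static const double cal_p[NUM_CAL_POINTS] = { -6.0, -12.0, -18.0, -24.0, -30.0};

double multipoint_power_dbm(double vout)
{
    /* Find the segment whose voltage range contains vout (clamp at the ends). */
    int i = 0;
    while (i < NUM_CAL_POINTS - 2 && vout > cal_v[i + 1])
        i++;

    /* Per-segment SLOPE/INTERCEPT, exactly as in the two-point case. */
    double slope     = (cal_v[i + 1] - cal_v[i]) / (cal_p[i + 1] - cal_p[i]);
    double intercept = cal_p[i] - cal_v[i] / slope;

    return vout / slope + intercept;
}
```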

In applications where accurate RF power transmission is required, some form of system calibration will generally be needed. Modern integrated-circuit (IC) RF power detectors, with predictable responses and excellent stability over temperature and frequency, can significantly simplify system calibration and can provide a system accuracy of 0.5 dB or better. The placement and number of calibration points can have a significant effect on the achievable post-calibration accuracy.

About the Author

Eamon Nash | Applications Engineering Director

Eamon Nash is an applications engineering director at Analog Devices. He has worked at ADI in various field and factory roles, covering mixed-signal, precision, and RF products. He’s currently focused on RF amplifiers and beamformer products for satellite communications and radar. He holds a Bachelor of Engineering (B.Eng.) degree in electronics from University of Limerick, Ireland, as well as five patents.
