The test requirements of IEEE 802.16-2004 for WiMAX transmitters and receivers can be incorporated into the experimental tester for evaluating the performance of WiMAX devices.

High-data-rate communications as defined in the WiMAX IEEE 802.16-2004 standard may pave the way for true broadband, multimedia services over wireless networks. Based on orthogonal-frequency-division-multiplex (OFDM) techniques, the WiMAX physical-layer (PHY) and media-access-control (MAC) protocols are outlined in the IEEE 802.16-2004 standard. These protocols have inspired the development of a baseband test transceiver detailed last month in Part 2 of this article series. In this final installment of the article series, some of the different types of measurements possible with the test transceiver will be outlined.

The WiMAX Forum (www.wimaxforum.org) is dedicated to WiMAX interoperability and, as such, is interested in promoting the IEEE 802.16-2004 standard. The group has selected OFDM-256 as an interface format for frequencies below 11 GHz, and that format is the basis for this article series' measurement endeavors. The OFDM-256 air interface provides adaptive functions as well as many optional features. It allows for adaptation of modulation/coding format and adaptation of the cyclic prefix. But in spite of the many optional features, the proposed measurement receiver and test algorithms will work over a wide range of frequencies, bandwidths, and option sets.

The proposed WiMAX measurement receiver (see Fig. 9 in Part 2) integrates the key function blocks needed for baseband WiMAX testing, including a packet detection unit, coarse and fine symbol timing estimation, an inphase/quadrature (I/Q) impairment estimator, frequency-offset correction, channel estimation and equalization, and automatic modulation detection. Based on fast-Fourier-transform (FFT) techniques, the test transceiver uses algorithms specifically designed for making measurements rather than for operating within a communications network. Part 2 of this article series offered a more detailed look at the different function blocks within the test transceiver and the mathematical basis for some of the estimators.

The IEEE 802.16-2004 standard provides a number of transmitter and receiver test requirements, and conformance with these requirements must be demonstrated through testing. Therefore, the authors have incorporated into the proposed test design ways to measure the requirements enforced in the standard. The following measurement capabilities have been incorporated into the receiver in addition to the ones that will be discussed in more detail shortly:

- Relative constellation error (RCE) in percent or decibels (dB).
- RCE versus symbol number.
- RCE versus subcarrier number.
- Spectral flatness.
- Crest factor (a measure of the peak-to-average power ratio).
- Peak, average, and minimum error vector magnitude (EVM).
- Error vector spectrum/time, including root-mean-square (RMS) error vector.

This list contains standard measurements that are widely discussed in the literature. Other suppliers of test equipment, such as Agilent Technologies (www.agilent.com), Anritsu Co. (www.us.anritsu.com), and Rohde & Schwarz (www.rohde-schwarz.com) are also providing such measurement capabilities in their WiMAX receiver solutions, often in the form of programmable receivers using software-defined-radio (SDR) architectures. Since these measurements are well documented in the literature and in the standard document for 802.16-2004, they will not be covered here. Instead, the following additional measurements will be briefly detailed.

Frequency error is a measure of the difference between the carrier frequencies generated by the local oscillators at the transmitter and receiver. The normalized frequency error, rather than the absolute frequency error, is the more appropriate value. Frequency error is often measured from the time-domain signal, although it is also possible to measure it using frequency-domain samples. In this implementation, time-domain samples are used for the measurement, as described in Part 2 of this series.
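One common time-domain approach correlates the cyclic prefix of an OFDM symbol against the symbol tail it copies; the phase of the correlation reveals the frequency offset normalized to the subcarrier spacing. A minimal Python sketch of this idea follows (the function name, parameters, and synthetic test signal are illustrative, not taken from the authors' implementation):

```python
import numpy as np

def estimate_freq_offset(samples, n_fft=256, n_cp=32):
    """Estimate the normalized carrier-frequency offset from one OFDM
    symbol using its cyclic prefix (time-domain method, illustrative)."""
    # The cyclic prefix is a copy of the last n_cp samples of the symbol,
    # separated by exactly n_fft samples in time.
    cp = samples[:n_cp]
    tail = samples[n_fft:n_fft + n_cp]
    corr = np.sum(np.conj(cp) * tail)
    # A frequency offset rotates the phase by 2*pi*eps over n_fft samples,
    # so the offset normalized to the subcarrier spacing is:
    return np.angle(corr) / (2 * np.pi)

# Synthetic check: apply a known 5% offset to a QPSK OFDM symbol with CP.
rng = np.random.default_rng(0)
n_fft, n_cp = 256, 32
sym = np.fft.ifft(rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), n_fft))
tx = np.concatenate([sym[-n_cp:], sym])            # prepend cyclic prefix
eps = 0.05                                         # 5% of subcarrier spacing
rx = tx * np.exp(2j * np.pi * eps * np.arange(len(tx)) / n_fft)
print(round(estimate_freq_offset(rx, n_fft, n_cp), 3))  # ≈ 0.05
```

In a real measurement, the correlation would be averaged over many symbols and noise would limit the accuracy; the sketch only shows the noiseless principle.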

The sample clock error measurement determines the difference between the sampling clocks at the transmitter and receiver. In an OFDM system, this measurement is often performed during the pilot-tracking period. Since a sampling clock offset introduces a phase rotation that depends on the subcarrier and OFDM symbol index, the variation of that rotation can be used to estimate the sample clock error. Sample clock error is commonly estimated using frequency-domain samples after channel equalization.
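The principle can be sketched briefly: a normalized clock error delta rotates subcarrier k by approximately 2*pi*delta*k*(N_sym/N_fft) radians per OFDM symbol, so fitting a line to the pilot phase differences versus subcarrier index yields delta. The Python sketch below assumes this model; function names and the two-symbol setup are illustrative, and the pilot indices are those of the 802.16-2004 OFDM-256 PHY:

```python
import numpy as np

def estimate_clock_error(pilots_sym1, pilots_sym2, pilot_idx,
                         n_fft=256, n_sym=288):
    """Estimate the normalized sample-clock error from the per-subcarrier
    phase rotation between two equalized OFDM symbols (illustrative)."""
    # Phase change of each pilot from one symbol to the next
    dphi = np.angle(pilots_sym2 * np.conj(pilots_sym1))
    # Least-squares fit of dphi = slope * k over the pilot subcarriers
    slope = np.sum(pilot_idx * dphi) / np.sum(pilot_idx**2)
    return slope * n_fft / (2 * np.pi * n_sym)

# Synthetic check with a known clock error of 20 ppm
delta = 20e-6
k = np.array([-88, -63, -38, -13, 13, 38, 63, 88])  # OFDM-256 pilot indices
n_fft, n_sym = 256, 288                             # FFT size, symbol + CP
p1 = np.ones(len(k), dtype=complex)
p2 = p1 * np.exp(2j * np.pi * delta * k * n_sym / n_fft)
print(estimate_clock_error(p1, p2, k, n_fft, n_sym) * 1e6)  # ≈ 20 ppm
```

In practice the phase differences are noisy and tracked over many symbols, but the linear-slope model is the core of the frequency-domain estimator.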

Received-signal-strength-indication (RSSI) estimation provides a simple indication of how strong the signal is at the receiver front end. If the received signal strength exceeds a threshold value, the link is considered good. Compared to measurements like CINR and BER, RSSI estimation is simple and computationally inexpensive, since it does not require processing and demodulating the received samples. However, the received signal includes noise, interference, and other channel impairments, so a strong RSSI reading reveals little about the channel and signal quality. It simply indicates whether a strong signal is present in the channel of interest.

If the measurement is performed on a wireless channel with a portable measurement device, the received signal power fluctuates rapidly due to fading. To obtain reliable estimates, the signal must be averaged over a time window to smooth out short-term fluctuations. The averaging window size depends on the system, the application, the variation of the channel, and so on. For example, if multiple antennas are used at the receiver, the window can be shorter than for a single-antenna receiver. For measurements with a cable connected between the device under test (DUT) and the receiver, fading is not an issue, so even a short measurement window can provide reliable RSSI values.
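The windowed averaging described above amounts to computing the mean power of complex baseband samples over successive blocks. A minimal sketch (the function name, window length, and full-scale reference level are illustrative assumptions):

```python
import numpy as np

def rssi_db(samples, window=1024, full_scale_db=0.0):
    """Windowed RSSI estimate: mean power per block of complex baseband
    samples, in dB relative to an assumed full-scale level (illustrative)."""
    n = len(samples) // window * window           # drop any partial window
    power = np.abs(samples[:n].reshape(-1, window))**2
    avg_power = power.mean(axis=1)                # per-window average power
    return full_scale_db + 10 * np.log10(avg_power)

# Synthetic check: unit-power noise scaled by 0.1 (power 0.01, i.e. -20 dB)
rng = np.random.default_rng(1)
sig = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
print(rssi_db(0.1 * sig))    # four window estimates near -20 dB
```

Longer windows trade measurement delay for smaller variance in each estimate, which is exactly the trade-off discussed above.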

Carrier-to-interference ratio (CIR), carrier-to-interference-plus-noise ratio (CINR), signal-to-interference ratio (SIR), signal-to-noise ratio (SNR), and signal-to-interference-plus-noise ratio (SINR) are the most common ways of measuring the channel quality during (or just after) the demodulation of the received signal. CINR (or SNR, or SINR) indicates how strong the desired signal is compared to the interference (or noise, or interference plus noise). Most wireless-communication systems are interference-limited; therefore, CIR and CINR are more commonly used. Compared to RSSI, these measurements provide more accurate and reliable estimates at the expense of computational complexity and additional delay.

CINR estimation can be performed by estimating the signal power and the interference power separately and then taking the ratio of the two. The channel parameter estimates can be used to calculate the signal power, while a version of EVM, which measures the error between what is received and what was expected, can be used to measure the noise-plus-interference power.
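This signal-power-over-error-power approach can be sketched in a few lines of Python. The function name and the synthetic QPSK test are illustrative; in the actual receiver, the reference symbols would come from the demodulator's decisions or from known pilot/preamble data:

```python
import numpy as np

def cinr_from_evm(rx_symbols, ref_symbols):
    """CINR estimate (illustrative): signal power from the reference
    constellation points, noise-plus-interference power from the error
    vectors between received and reference symbols."""
    err = rx_symbols - ref_symbols
    sig_power = np.mean(np.abs(ref_symbols)**2)
    nip_power = np.mean(np.abs(err)**2)
    return 10 * np.log10(sig_power / nip_power)

# Synthetic QPSK check at a known 20-dB CINR
rng = np.random.default_rng(2)
ref = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), 10000) / np.sqrt(2)
noise = rng.standard_normal(10000) + 1j * rng.standard_normal(10000)
rx = ref + noise * np.sqrt(10**(-20 / 10) / 2)   # scale noise for 20 dB
print(cinr_from_evm(rx, ref))                    # ≈ 20 dB
```

Note that with decision-directed references, symbol errors at low CINR bias the estimate, which is one reason pilot-based references are preferred in poor channels.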


Often, in obtaining these estimates, the impairment (noise or interference) is assumed to be white and Gaussian distributed to simplify the estimation process. In wireless-communication systems, however, the impairment might be caused by a strong interferer, which is colored. For example, in OFDM systems, where the channel bandwidth is wide and the interference is not constant over the whole band, it is very likely that some parts of the spectrum are affected more by the interferer than others. Therefore, not only the average CINR but also per-carrier CINR and per-symbol CINR are important for characterizing the quality of the received signal on each carrier and in each OFDM symbol.

Since both the desired signal's channel conditions and the interference might change rapidly (especially in wireless measurement applications), both short-term and long-term estimates are desirable, depending on the application.

Bit-error-rate (BER), symbol-error-rate (SER), frame-error-rate (FER), and cyclic-redundancy-check (CRC) information are examples of measurements in this category. BER (or FER) is the ratio of bits (or frames) received in error to the total number of bits (or frames) received during the transmission. The CRC indicates the quality of a frame and can be calculated from parity-check bits generated by a known cyclic generator polynomial. CRC information is obtained during frame-control-header (FCH) decoding, and FER can be obtained by averaging the CRC results over a number of frames.

To calculate the BER directly, the receiver would need to know the actual transmitted bits, which is not possible in practice. Instead, BER can be estimated by comparing the bits before and after the decoder. Assuming that the decoder corrects the bit errors that appear before decoding, this difference can be related to the BER. Note that the comparison makes sense only if the frame is error-free (a good frame), as determined by the result of the CRC check. When testing a DUT with test data specified by the standard, the BER calculation is straightforward: the transmitted bits are known and can be compared directly against the received bits.
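The FER-from-CRC and pre/post-decoder BER procedures described above can be sketched as follows (function names and the toy frame data are illustrative only):

```python
import numpy as np

def fer_from_crc(crc_pass_flags):
    """FER: fraction of received frames whose CRC check failed."""
    flags = np.asarray(crc_pass_flags, dtype=bool)
    return 1.0 - flags.mean()

def ber_pre_post_decoder(bits_before, bits_after, crc_ok):
    """BER sketch: compare bits before and after decoding, counting only
    frames the CRC declares error-free (decoder output taken as truth)."""
    errors = total = 0
    for pre, post, ok in zip(bits_before, bits_after, crc_ok):
        if ok:  # skip frames the decoder could not fully correct
            errors += np.count_nonzero(np.asarray(pre) != np.asarray(post))
            total += len(pre)
    return errors / total if total else float("nan")

# Toy data: four 8-bit frames; the fourth fails CRC and is excluded
pre  = [[0,1,1,0,0,1,0,1], [1,1,0,0,1,0,1,0],
        [0,0,0,1,1,1,0,0], [1,0,1,0,1,0,1,0]]
post = [[0,1,1,0,0,1,0,1], [1,1,0,0,1,1,1,0],
        [0,0,0,1,1,1,0,0], [1,1,1,1,1,1,1,1]]
ok   = [True, True, True, False]
print(fer_from_crc(ok))                     # 0.25
print(ber_pre_post_decoder(pre, post, ok))  # 1 flipped bit over 24 bits
```

The exclusion of bad frames is what makes the pre/post comparison valid: within a good frame the decoder output can stand in for the unknown transmitted bits.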

As mentioned earlier, although these estimates provide excellent measures, obtaining them reliably requires observations over a large number of frames. For low BER and FER values in particular, extremely long transmission intervals are needed, so for some applications these measures might not be appropriate. Note also that these measurements reflect the actual operating condition of the receiver: for a given RSSI or CINR value, two receivers with different performance will yield different BER or FER results. Therefore, BER and FER measurements provide information on the measurement receiver's capability as well as on the quality of the DUT.

Channel-frequency-response (CFR) estimates provide information about the desired signal's power variation across the frequency carriers. A CFR estimate is more reliable than RSSI information, since it does not include other impairments as part of the desired signal power, but less informative than CINR (or SINR) estimates, since it says nothing about the noise and/or interference powers relative to the desired signal's power. For white noise (as in an AWGN channel), however, the CFR estimate can also give an idea of the CINR expected at each carrier and, hence, the expected EVM.

For wireless measurements, CFR provides information about the dispersion (frequency selectivity) of the medium. For measurements where the receiver is connected to the DUT with a cable, CFR can give an idea of the filter responses used in the transceivers. CFR is also useful for measuring spectral flatness, which is a mandatory measurement required by the standard. Measurements on I/Q impairments require a detailed discussion for multicarrier systems and will be the subject of a future report.
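A spectral-flatness check derived from a CFR estimate reduces to computing each used subcarrier's gain deviation from the average gain, in dB. The sketch below assumes a synthetic CFR with a mild ripple; the function name and used-carrier index set are illustrative (the standard's flatness limits, which the deviations would be compared against, are not reproduced here):

```python
import numpy as np

def spectral_flatness_db(cfr, used_idx):
    """Per-subcarrier deviation of channel gain from the average gain,
    in dB, over the used carriers (illustrative flatness check)."""
    gains_db = 20 * np.log10(np.abs(cfr[used_idx]))
    return gains_db - gains_db.mean()

# Synthetic OFDM-256 CFR with a 5% amplitude ripple across the band
n_fft = 256
used = np.r_[1:101, 156:256]          # illustrative used-carrier FFT bins
ripple = 1 + 0.05 * np.cos(2 * np.pi * np.arange(n_fft) / n_fft)
dev = spectral_flatness_db(ripple.astype(complex), used)
print(round(dev.max(), 2), round(dev.min(), 2))  # small +/- deviations, dB
```

Each deviation would then be tested against the mask defined in the standard; as noted above, a poor channel-estimation algorithm can make a compliant transmitter appear to fail this test.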

The accuracy of the receiver algorithms affects the measurement performance. For example, if the channel-estimation algorithm is not designed properly, one might observe a degraded spectral-flatness measure and conclude that the filters used at the transmitter have poor spectral properties. However, the problem might not be the transmitter or receiver filters at all; it could very well be the channel-estimation algorithm. Similarly, improper receiver algorithm design can produce large EVM values in the constellation. Ideally, EVM should reflect errors due to the device under test (DUT), not errors introduced by low-performance receiver algorithms. Therefore, in the test-and-measurement world, it is desirable to implement optimal-performance receiver algorithms to minimize the errors contributed by the algorithms themselves. On the other hand, care must be taken not to unduly increase the computational complexity and measurement delay; measurements should be both fast and accurate.

For the same reason, when measuring a DUT, impairments caused by other parts of the transceiver chain must be compensated for (or calibrated out). The calibration or compensation should be done in such a way that impairments caused by the DUT itself are not unintentionally compensated.

Another important point concerns the location in the receiver chain where a particular measurement should be performed. Ultimately, the goal is to identify the impairments caused by the DUT. If the receiver algorithms correct or change the structure of an impairment before the measurement point, reliable measurements are not possible. For example, if the receiver includes a sample-clock-compensation algorithm, attempting to measure the sample clock error after that compensation will fail to identify the error. This is an obvious example, but for trickier measurements, such as I/Q impairment measurements, one must know where to make the measurement to obtain the most accurate results. It is possible to make a measurement at two different locations and obtain similar results; in most cases, however, there is a preferred point where a specific measurement makes the most sense in terms of performance, computational complexity, and other considerations.

*Editor's Note: This is the last installment of a three-part article series on WiMAX measurements. For the previous two articles, please refer to the July and August 2006 issues of* Microwaves & RF *or visit the magazine website at www.mwrf.com.*

ACKNOWLEDGMENT

The authors would like to thank Dr. Larry Dunleavy for his comments and for the review of this article prior to publication.