Scaling capacity is a critical aspect of managing communications networks. The exponential growth of user-generated traffic is straining mobile networks as bandwidth-hungry smartphones and wireless tablets become more commonplace. Microwave technology is used extensively in mobile backhaul networks, and it is certainly not immune to this ever-growing demand for bandwidth. As a result, scaling the capacity of these microwave networks is an essential part of their overall operational management, both at the radio layer and at the packet layer, the latter being specific to packet microwave systems.

Modern microwave networks can no longer be managed by optimizing spectrum efficiency on a link-by-link basis; a network-based approach is mandatory. Such an approach avoids optimizations that are only valid on a small scale. It also helps network operators reduce the amount of spectrum they use, which lowers the rights-of-use fees paid for a given spectrum allocation. To better understand effective microwave network management, Alcatel-Lucent analyzed two strategies for scaling network capacity, modeling different scenarios on an operational microwave network. The results reveal how scaling capacity can be managed for more effective network operation.

Analyzing Capacity Mechanisms

The two strategies are based on different technologies. The first is high-order quadrature amplitude modulation (HQAM), which uses higher-order modulation to maximize spectrum efficiency over a microwave communications channel. The second employs packet compression mechanisms: in a fully packet-based environment, compression reduces the overhead introduced by the frame or packet structure before microwave transmission, contributing to an increase in spectral efficiency.

HQAM increases the density of modulation symbols in the transmitted constellation. Taking 256-state quadrature amplitude modulation (256QAM) as the reference format, the 512QAM and 1024QAM formats together provide a gain of about 25% in usable traffic capacity over that reference, while the 2048QAM and 4096QAM formats deliver a further improvement on the order of 15% over 512QAM and 1024QAM.
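These figures track roughly with the additional bits per symbol that each format carries. The minimal sketch below illustrates that arithmetic under the simplifying assumption that usable capacity scales with log2 of the modulation order; in practice, coding and framing overhead erode the higher-end gains, which is why the quoted improvements are smaller.

```python
from math import log2

# Relative capacity of each format, taking 256QAM as the reference.
# Assumption: usable capacity scales with bits per symbol (log2 of the
# modulation order); real gains are somewhat lower once coding and
# framing overhead are accounted for.
REFERENCE = 256
for order in (512, 1024, 2048, 4096):
    gain = log2(order) / log2(REFERENCE) - 1
    print(f"{order}QAM vs. 256QAM: {gain:+.0%} more bits per symbol")
```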

Packet compression acts at all layers of an Internet protocol (IP) packet. Several fields belonging to the Ethernet, Multiprotocol Label Switching (MPLS), IP and upper-layer headers are stripped prior to transmission and rebuilt at the receiving end of a microwave link, reducing the number of bits sent across the link. The effectiveness of packet compression depends on the traffic mix and conditions, making it difficult to quote an average figure. For an Internet mix (IMIX) traffic profile (ref. 1) based on IPv4, the gain is on the order of 30% to 40%; for a similar profile based on IPv6, the capacity increase is almost doubled (see, for example, ref. 2). As networks transition from IPv4 to IPv6, the scaling gains brought by packet compression will become even more significant.
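To see where such figures come from, consider how much of each transmitted frame is protocol overhead that the link can strip and rebuild. The sketch below is a conservative, header-only illustration: the 7:4:1 IMIX payload mix, the header sizes and the assumption that every listed field is fully compressible are ours, and real multilayer compression engines recover more than this simple model, so the published gains quoted above are higher. It does show why IPv6, with its larger header, benefits more than IPv4.

```python
# Rough, header-only sketch of how stripping protocol overhead translates
# into radio capacity gain. Payload mix and header sizes are illustrative
# assumptions, not measured values.
PREAMBLE_IFG = 8 + 12          # preamble + inter-frame gap, suppressed on the radio
ETH_OVERHEAD = 14 + 4          # MAC header + FCS
TCP_HEADER = 20
IMIX_PAYLOADS = [(7, 6), (4, 536), (1, 1460)]   # (weight, L4 payload in bytes)

def capacity_gain(ip_header: int) -> float:
    per_packet_overhead = PREAMBLE_IFG + ETH_OVERHEAD + ip_header + TCP_HEADER
    on_air_before = sum(w * (p + per_packet_overhead) for w, p in IMIX_PAYLOADS)
    on_air_after = sum(w * p for w, p in IMIX_PAYLOADS)   # only payload remains
    return on_air_before / on_air_after - 1

print(f"IPv4 (20-byte header): ~{capacity_gain(20):.0%} capacity gain")
print(f"IPv6 (40-byte header): ~{capacity_gain(40):.0%} capacity gain")
```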

Link Versus Network Spectrum Efficiency

Spectrum efficiency is often measured on isolated links, where “isolated” means a link that is neither impaired by interference or disturbance from neighboring radios nor itself a source of interference. Unfortunately, this provides information under ideal conditions rather than realistic network conditions, where interference may be commonplace. The goal of practical network design does not coincide with optimization of a single link. An optimal network design should provide the requested capacity with minimal occupied spectrum, for two reasons: spectrum is a limited resource, and spectrum has an associated price for its use. The less spectrum used, the greater the savings in operating expenditure (OPEX) for a network.
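In other words, the quantity to optimize is not the efficiency of the best link but the total delivered capacity divided by the total occupied spectrum, because the rights-of-use fee scales with the spectrum occupied. The short sketch below illustrates that bookkeeping; the link capacities, channel widths and per-MHz fee are hypothetical values chosen only to show the calculation.

```python
# Hypothetical links: (delivered capacity in Mb/s, occupied channel in MHz).
links = [
    (365, 28),
    (180, 14),
    (180, 14),
    (91, 7),
]
FEE_PER_MHZ_PER_YEAR = 250.0   # hypothetical rights-of-use fee, currency units

total_capacity = sum(capacity for capacity, _ in links)
total_spectrum = sum(channel for _, channel in links)

print(f"Network spectral efficiency: {total_capacity / total_spectrum:.1f} (Mb/s)/MHz")
print(f"Annual spectrum fee: {total_spectrum * FEE_PER_MHZ_PER_YEAR:.0f}")
```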

This network-level approach helps operators use the spectrum they own more practically: reusing frequencies, avoiding unused portions of spectrum, and minimizing the interference they generate so that neighboring links or networks are not impaired. The network approach (rather than the single-link approach) places more constraints on increasing capacity, since a decision to increase the modulation format must rely more on knowledge of the environment than in the past. For example, such knowledge would argue against using the highest-order modulation formats at dense nodal/hub points or at locations most exposed to impairment, while last-mile links are less affected.

Analyzing Capacity Increases

To better understand how higher-order modulation and packet compression can impact scaling capacity in wireless communications networks, an operational European mobile backhaul network was analyzed. It consists of 890 links; the largest group, comprising 146 links, falls into the 38-GHz band and includes both last-mile connections (or “tails”) and nodal links. The analysis was performed in order to:

1. Define the “theoretical” maximum throughput possible in the network, which will help determine the maximum capacity supported by the network without touching any network components.

2. Determine the limits of the network before redesign is needed to support HSPA+/Long Term Evolution (LTE) services.

3. Provide a guideline for adopting a technology or combination of technologies that can increase network capacity.

1. This graph shows the distribution of channels in the 38-GHz frequency band.

Figure 1 shows the 38-GHz communications band and how channels are distributed across that portion of the frequency spectrum; it is the starting point for this network analysis. Under the current spectrum utilization, the total throughput offered by the microwave network as a whole is around 1.9 Gb/s. All links employ fixed modulation to support a network availability of 99.999% (about 5 minutes of outage per year). From this starting point, the two strategies, HQAM and packet compression, are analyzed as ways to scale capacity under the constraint that incremental capital expenditure (CAPEX) and OPEX are avoided.
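For reference, the outage budget implied by that availability target can be checked directly; the quick calculation below converts the 99.999% figure into minutes of allowed outage per year.

```python
# 99.999% availability expressed as an annual outage budget.
AVAILABILITY = 0.99999
MINUTES_PER_YEAR = 365.25 * 24 * 60

outage_minutes = (1 - AVAILABILITY) * MINUTES_PER_YEAR
print(f"{AVAILABILITY:.3%} availability -> {outage_minutes:.1f} minutes of outage per year")
```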