What you’ll learn:
- Rather than using a parallelized approach to simulation, consider modeling at a higher level of abstraction.
- AI is proving to be a useful tool in solving problems with large computational complexity.
- Be cognizant of the tradeoffs arising from simulation at higher levels of abstraction.
More often these days, wireless-related design projects are migrating to the rarefied reaches of the millimeter-wave (mmWave) bands, where systems deliver wider signal bandwidths. You get the advantages of higher throughput, but you also must bear new requirements for wideband linearity and frequency flatness. With carrier aggregation come bandwidths of hundreds of megahertz, and with them come phase-compensation, equalization, and active-linearization algorithms that must be tightly integrated with RF transceivers.
All of the above implies a new world for system architects, who must explore and coordinate the design and implementation of multiple system elements. Antenna arrays, RF transceivers, and digital-signal-processing (DSP) algorithms operate across multiple standards and in many scenarios that involve interfering signals.
In this, the final entry in a series of three videos exploring the arena of mmWave system design, Giorgia Zucchelli, product manager for RF and mixed-signal at MathWorks, will cover the intricacies of analyzing these complex mmWave systems. In addition to design and implementation issues, she’ll cover assessment of model fidelity and other aspects of system simulation.
Design Considerations for Large Antenna Arrays
Let’s consider a system with 1,000 antenna elements, which presents significant challenges on numerous levels—implementation being among the most difficult. For example, there’s the physical configuration of the system:
- Does one use arrays or subarrays?
- How do you account for power amplifiers?
- How will you feed this massive network of antennas?
- What are the thermal characteristics and constraints?
- Will beamforming be implemented in digital, analog, or a hybrid approach?
This design problem is so tightly interconnected from both the technology and signal-processing perspectives that the two aspects can't be separated. All of the above implies an extremely computationally complex design and analysis challenge. Not only must you simulate the 1,000 elements with all of their complex interactions, but you would want to simulate them along with the RF front end and the signal-processing chain, possibly along with an aggregation algorithm.
An array of 1,000 elements would enable generation of highly focused, narrow beams, making the system much more robust to interference. But it also implies great sensitivity to errors in the array configuration and the positioning of elements within the array. It’s nearly compulsory to perform array analysis and simulation with the DSP algorithm in play to correct such errors.
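To make that sensitivity concrete, here’s a minimal Python sketch that estimates the beamwidth of a 1,000-element linear array and the gain lost to a small pointing error. The half-wavelength spacing, uniform weighting, and 0.05-degree error are illustrative assumptions, not values from the discussion:

```python
import numpy as np

# Sketch: beamwidth of a 1,000-element uniform linear array and the cost of
# a small pointing error. Spacing, weighting, and the 0.05-deg error are
# illustrative assumptions.
n, d = 1000, 0.5                                   # elements, spacing (wavelengths)
theta = np.deg2rad(np.linspace(-0.5, 0.5, 4001))   # fine angular sweep near broadside

# Uniform-weight array factor, normalized so the broadside peak equals 1.
phase = 2 * np.pi * d * np.outer(np.sin(theta), np.arange(n))
af = np.abs(np.exp(1j * phase).sum(axis=1)) / n

hpbw = np.rad2deg(np.ptp(theta[af >= np.sqrt(0.5)]))   # half-power beamwidth
loss = 20 * np.log10(af[np.argmin(np.abs(theta - np.deg2rad(0.05)))])

print(f"half-power beamwidth: {hpbw:.3f} deg")              # roughly 0.1 deg
print(f"loss at a 0.05-deg pointing error: {loss:.1f} dB")  # close to -3 dB
```

With a beam only about a tenth of a degree wide, a pointing error of a twentieth of a degree already costs roughly 3 dB, which is why correction algorithms must be simulated alongside the array.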
Balancing Model Fidelity and Simulation Speed
This gives rise to questions regarding modeling of such a system—not only in terms of how to go about it, but also in terms of the fidelity of the models used in the simulations. Every model has some finite level of fidelity. Thus, modeling, simulating, and analyzing such large arrays can be achieved in a brute-force fashion, but it's very expensive from the perspectives of both resources and time.
Even with a very powerful processor and lots of memory, not all problems can be processed in parallel. It’s tempting to think that, for a large array, one could parallelize the simulation by processing the signal feeding each antenna independently. Eventually, though, the data must come together, and there’s overhead involved in distributing that parallel data and recombining it.
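Amdahl’s law captures the limit. A minimal sketch, where the 10% serial fraction stands in for the distribute-and-recombine overhead (an assumed figure, not one from the discussion):

```python
# Sketch: Amdahl's law applied to per-antenna parallelism. The 10% serial
# fraction, standing in for distributing and recombining the data, is an
# illustrative assumption.
def speedup(serial_fraction: float, workers: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

for workers in (8, 64, 1000):
    print(f"{workers:4d} workers -> {speedup(0.10, workers):.2f}x speedup")
# Even 1,000 workers yield under 10x if 10% of the work stays serial.
```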
Thus, modeling the system at a higher level of abstraction is highly recommended. For one thing, models can be developed more quickly and allow for more iterations. You can quickly do a first-pass simulation that provides information about the design, which can then guide further refinement.
Raising the abstraction level can have other practical consequences. For a very large array, one might use an infinite-array approach rather than performing a full-wave electromagnetic analysis. You can consider just one element in the array and treat it as being embedded into an infinitely large lattice of reproductions of itself.
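One payoff of this approximation is pattern multiplication: the total pattern is approximated as the embedded element pattern times the array factor, with no per-element full-wave solve. A minimal sketch, where the cosine-shaped embedded element pattern and the 64-element linear lattice are both assumptions for illustration:

```python
import numpy as np

# Sketch: pattern multiplication with an embedded element pattern. The
# cosine element pattern and the 64-element lattice are assumptions.
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)   # observation angles (rad)
n, d = 64, 0.5                                     # elements, spacing (wavelengths)

element = np.cos(theta)                            # embedded element pattern
phase = 2 * np.pi * d * np.outer(np.sin(theta), np.arange(n))
array_factor = np.abs(np.exp(1j * phase).sum(axis=1)) / n

total = np.abs(element) * array_factor             # pattern multiplication
peak_deg = np.rad2deg(theta[np.argmax(total)])
print(f"beam peak at {peak_deg:.1f} deg, level {20 * np.log10(total.max()):.2f} dB")
```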
Another option is to model each of the components in a chain, say, a filter, an amplifier, and a demodulator, and then combine them into an equivalent macro-behavioral model. Rather than having four or five components generating noise, we can treat them as a single equivalent noise source that generates the same noise as the entire cascade. It’s an approximation, but an acceptable one in many cases.
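For the noise part of such a macro-model, the Friis cascade formula performs exactly this collapse. A minimal sketch; the stage gains and noise figures are illustrative assumptions, not values from the discussion:

```python
import math

# Sketch: collapsing a filter/amplifier/demodulator cascade into a single
# equivalent noise source via the Friis formula. Stage values are assumed.
def cascade_noise_figure_db(stages):
    """stages: (gain_dB, noise_figure_dB) tuples, ordered input to output."""
    total_f, gain = 1.0, 1.0            # running noise factor and linear gain
    for gain_db, nf_db in stages:
        f = 10 ** (nf_db / 10)
        total_f += (f - 1.0) / gain     # later stages contribute less noise
        gain *= 10 ** (gain_db / 10)
    return 10 * math.log10(total_f)

# Filter (1-dB loss), LNA, and demodulator, treated as one noise source:
chain = [(-1.0, 1.0), (15.0, 1.5), (-7.0, 7.0)]
print(f"equivalent cascade noise figure: {cascade_noise_figure_db(chain):.2f} dB")
```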
Because everything is an approximation, there’s no right or wrong among these tradeoffs. Designers should try different models that operate at different levels of abstraction. You might find, for instance, that a model needs to capture only the noise generated at the input, or that it can omit the nonlinearity at the output.
Then there’s the issue of assessing the coupling amongst array elements. In a very large array, do you really need to account for all of the coupling between each individual element? Perhaps one can consider only the coupling to the next two adjacent elements, either to the right and left or to the top and bottom, depending on the array’s configuration.
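In matrix terms, that truncation replaces a dense n-by-n coupling matrix with a banded one. A small sketch, where the nearest-neighbor coupling coefficient is an assumed value:

```python
import numpy as np

# Sketch: keeping only nearest-neighbor coupling turns a dense coupling
# matrix into a banded one. The 0.05 coupling coefficient is an assumption.
n, c = 1000, 0.05
banded = np.eye(n) + c * (np.eye(n, k=1) + np.eye(n, k=-1))

print(f"dense coupling entries : {n * n}")                     # 1,000,000
print(f"banded (+/-1) entries  : {np.count_nonzero(banded)}")  # 2,998 = 3n - 2
```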
The Role of AI in Modeling Complex MIMO Systems
As with other aspects of the move by designers to mmWave frequencies, artificial intelligence is a new tool that can help in solving problems with a large computational complexity or cost. For example, designing an array with optimized performance for a given bandwidth and frequency involves a great deal of electromagnetic (EM) analysis.
AI can help with building surrogate models that speed up the computation of the objective function. So rather than performing an EM analysis at every step of the optimization, you perform only a few EM analyses and then build a faster surrogate model that can be used to compute the objective function. This isn't a new application of AI; it’s just being used more frequently.
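A minimal sketch of the surrogate idea, using a radial-basis-function fit; the analytic stand-in for the EM solve and the nine-sample budget are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch: surrogate-assisted optimization. The analytic function below stands
# in for an expensive EM solve; in practice, each sample is a solver run.
def expensive_em_objective(x):
    return np.sin(3 * x) + 0.5 * x ** 2        # pretend each call costs minutes

# Spend only a few true "EM analyses"...
x_train = np.linspace(-2.0, 2.0, 9).reshape(-1, 1)
y_train = expensive_em_objective(x_train[:, 0])

# ...then fit a cheap surrogate and evaluate the objective against it.
surrogate = RBFInterpolator(x_train, y_train)
x_dense = np.linspace(-2.0, 2.0, 2001).reshape(-1, 1)
best = x_dense[np.argmin(surrogate(x_dense)), 0]
print(f"surrogate places the minimum near x = {best:.3f}")
```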
It’s now possible to use AI to build pre-trained networks that solve antenna problems directly. MathWorks offers a toolbox in MATLAB that provides a trained AI network, allowing you to analyze an antenna without performing any electromagnetic analysis at all. Thanks to a catalog of antennas, the basic shape of the array is known and the antenna is effectively pre-solved. This works only for specific geometries at this time, but it’s a starting point for a capability with lots of room for growth.
AI is also useful in creating beamsteering algorithms for these very large arrays. With such narrow beams, the system becomes very sensitive to directional errors.
Reinforcement learning techniques are now being applied to array calculations and corrections. For example, based on received signal strength, the beam can be pointed more accurately. Reinforcement learning can be thought of as “trial and error on steroids.” It’s a very intuitive approach, and it’s an emerging area for applying AI to analog and RF problems.
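As a toy illustration of that trial-and-error flavor, the sketch below runs an epsilon-greedy bandit that picks among candidate beam directions using received signal strength as the reward. The beam-response model and every parameter are assumptions for illustration:

```python
import numpy as np

# Sketch: epsilon-greedy "trial and error" beam selection driven by received
# signal strength (RSS). The RSS model and all parameters are assumptions.
rng = np.random.default_rng(0)
beams = np.linspace(-10.0, 10.0, 21)          # candidate pointing angles (deg)
target = 3.7                                  # unknown user direction (deg)

def measure_rss(angle):
    # Narrow-beam response around the target, plus measurement noise.
    return np.exp(-((angle - target) ** 2) / 2.0) + 0.05 * rng.normal()

estimates = np.zeros(len(beams))              # running mean reward per beam
counts = np.zeros(len(beams))
for _ in range(2000):
    if rng.random() < 0.1:                    # explore: try a random beam
        i = int(rng.integers(len(beams)))
    else:                                     # exploit: best beam so far
        i = int(np.argmax(estimates))
    reward = measure_rss(beams[i])
    counts[i] += 1
    estimates[i] += (reward - estimates[i]) / counts[i]

print(f"selected beam: {beams[np.argmax(estimates)]:+.1f} deg")  # near +4.0
```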
More on Abstraction Tradeoffs in Modeling mmWave Systems
In addition to those discussed above, more tradeoffs must be considered when using elevated abstraction levels to simulate large, complex mmWave systems. For example, there are tradeoffs at the implementation level: spectral occupation may need to be traded off against power, against the computational resources consumed by digital signal processing, and against overall performance.
Implementation tradeoffs include:
- What is the latency of the system?
- What is its range?
- How narrow is the beam?
- How large is the array?
Among the tradeoffs on the modeling side:
- Model fidelity vs. simulation/analysis speed
- Time vs. compute resources
- Simulation performance vs. model fidelity
- Modeling each component vs. modeling for overall performance
Elevating Simulation Techniques: Multi-Carrier Circuit-Envelope Approaches
Still other tradeoffs arise in performing multi-carrier, circuit-envelope simulations: How does one effectively balance model fidelity and speed when simulating out-of-band interferers?
While a very useful technique, circuit-envelope simulation is itself a tradeoff. At one extreme, it can perform transient, or real passband, simulation, handling everything from DC up to the range of frequencies of interest. If you’re simulating your system in the gigahertz range, though, that means a very small simulation time step. It’s a very accurate technique, albeit time-consuming and expensive.
At the opposite end of the spectrum is an abstraction called equivalent baseband modeling, in which you model only the spectrum close to the signal of interest. This is suitable for most narrowband systems. Circuit-envelope simulation lets you adopt the equivalent-baseband abstraction but extend it to multiple carriers, which constitutes multi-carrier, circuit-envelope simulation.
This form of simulation is based on harmonic balance as a means of analyzing RF systems. In this context, circuit envelope can cover the entire range of abstraction. It can be reduced to full real-passband, or transient, simulation by setting a single carrier frequency with a very large spectrum, or to equivalent baseband, where again you have a single carrier frequency but a relatively narrow spectrum.
The circuit-envelope tradeoff that the modeler must decide on is how many harmonics, or mixing products, generated by nonlinear components to include. The cost is straightforward: If, for example, you want to look at three harmonics, you effectively perform three simulations. And if you double your bandwidth, you cut your time step in half, which doubles the simulation time. In other words, runtime scales linearly with the number of simulated frequencies and with the bandwidth (the inverse of the time step).
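The scaling is easy to put into numbers. A back-of-the-envelope sketch, with an arbitrary 100-MHz baseline chosen purely for illustration:

```python
# Sketch: first-order cost model for multi-carrier circuit-envelope runs.
# Runtime grows linearly with the number of simulated harmonics and with
# bandwidth (inverse time step). The 100-MHz baseline is an assumption.
def relative_runtime(num_harmonics, bandwidth_hz, base_bandwidth_hz=100e6):
    return num_harmonics * (bandwidth_hz / base_bandwidth_hz)

for h, bw in [(3, 100e6), (3, 200e6), (9, 200e6)]:
    print(f"{h} harmonics at {bw / 1e6:.0f} MHz -> "
          f"{relative_runtime(h, bw):.0f}x baseline")
```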
While powerful and conceptually straightforward, multi-carrier, circuit-envelope simulation can be tricky in practice. The question is: How many harmonics should be simulated, and how much bandwidth is required?
Tackling Nonlinearity: Understanding Harmonic Balance
While simulations for linearity are straightforward, there are tradeoffs with multi-carrier, circuit-envelope simulations when it comes to nonlinearity. For example, consider a simulation scenario in which you’re sending a small signal through an amplifier. Because that signal is relatively small, it doesn’t excite the amplifier’s nonlinear characteristics. Here, you can keep your harmonic range relatively low.
In contrast, if your signal is of higher power, it will excite more of the power amplifier’s nonlinearity, and you’ll need a higher nonlinearity order in the model. Things become trickier still when a signal is small 90% of the time but large the other 10%.
Let’s say that you drive your amplifier with a 5G signal of around 5 GHz in frequency. That would imply harmonics at 10, 15, 20, and perhaps 25 GHz, depending on the signal power. If you’re only interested in what happens at 5 GHz, a harmonic order of three is sufficient to obtain acceptable results from the simulation.
However, if the third harmonic is itself of interest, you’ll need to push the simulation out to perhaps the 7th- or 9th-order harmonic. These are the tradeoffs one must consider, and understanding how harmonic balance allocates power across these harmonic orders is fundamental to making them well.
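The arithmetic behind that example is simple enough to write down. A small sketch for the 5-GHz carrier discussed above; the order choices in the comment mirror the rule of thumb in the text:

```python
# Sketch: harmonic frequencies for the 5-GHz example above. Pure arithmetic.
f_carrier = 5e9
for order in range(2, 6):
    print(f"harmonic {order}: {order * f_carrier / 1e9:.0f} GHz")

# Rule of thumb from the discussion: if only the 5-GHz band matters, order 3
# suffices; if the 15-GHz third harmonic itself matters, extend the simulated
# order to 7 or 9 so the power in higher mixing products isn't truncated.
```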
Best Practices for Simulating Mixing Effects
Simulating mixing effects brings other key considerations and challenges, especially when including characteristics such as reciprocal mixing. It’s important to begin with a good frequency plan for static analysis, and to be armed with information such as intermodulation tables. Once static analysis is complete, you can move into the simulation domain to verify that your assumptions are valid.
A good practice is to start simulations with a higher harmonic order and then reduce it. If the results don’t change much, that’s good news: It tells you that you can rely on the lower harmonic order and/or cut down on the number of intermodulation products. There’s some trial and error involved.
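That trial-and-error loop is easy to automate. A minimal sketch, where run_ce_sim is a hypothetical stand-in for your circuit-envelope solver returning a scalar metric in dB, and the 0.1-dB tolerance is an assumed threshold:

```python
# Sketch: start with a high harmonic order, then reduce it while the result
# stays put. `run_ce_sim` is a hypothetical solver hook (order -> metric, dB);
# the 0.1-dB tolerance is an illustrative assumption.
def smallest_stable_order(run_ce_sim, orders=(9, 7, 5, 3), tol_db=0.1):
    reference = run_ce_sim(orders[0])       # most expensive, most trusted
    chosen = orders[0]
    for order in orders[1:]:
        if abs(run_ce_sim(order) - reference) > tol_db:
            break                           # results moved: stop reducing
        chosen = order
    return chosen

# Toy stand-in: accuracy degrades once the order drops below 5.
fake_sim = lambda order: -30.0 if order >= 5 else -28.2
print(smallest_stable_order(fake_sim))      # -> 5
```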