Algorithms to Antennas: Spectrum Sensing Using Deep-Learning Techniques
This blog is part of the Algorithms to Antenna Series
What you'll learn:
- Identifying 5G NR and LTE signals via spectrum sensing using deep-learning techniques.
- How to characterize spectrum occupancy by training a neural network.
- Testing network signal identification performance.
We have covered a range of topics related to wireless applications that use deep learning to perform a specific function or improve overall system performance. A few of our most recent posts are:
- Train Deep-Learning Networks with Synthesized Radar and Communications Signals
- RF Fingerprinting for Trusted Communications Links
- Develop and Test Algorithms on Commercial Radars
- Labeling Radar and Comms Signals for Deep-Learning Apps
- 5G Channel Estimation Using Deep-Learning Techniques
In this blog, we look at spectrum sensing employing deep-learning techniques to identify 5G NR and LTE signals. Spectrum sensing offers an important way to understand spectrum usage in crowded RF bands. We show how this technique can identify the type of signal being received in addition to the signal’s occupied bandwidth. While we focus on 5G NR and LTE signals, this type of workflow can be extended to other RF signals, including radar. For example, think of airport radar signals transmitting in the vicinity of a 5G base station.
We follow a recipe similar to the one used in some of our other deep-learning-related posts, starting with synthesizing labeled datasets of 5G NR and LTE signals. We then use this data to train a deep-learning-based semantic segmentation network.
Our first goal is to characterize spectrum occupancy, so we train a neural network to identify 5G NR and LTE signals in a wideband spectrogram. As demonstrated in the blogs listed above, we always try to test the examples with a radio or radar whenever it’s practical. Keeping with this theme, we tested our network, which was trained using synthesized data, with over-the-air (OTA) data collected from a software-defined radio (SDR).
Many techniques can be used to input data from radios and radars to a deep network. In some of our examples from past blogs, we show how this can be done with baseband in-phase and quadrature (I/Q) samples. Here, we borrow semantic segmentation techniques used in computer-vision applications to identify objects and their locations within images generated from our training data. For wireless-signal-processing applications, the “objects” of interest are wireless signals, and their locations are given by the frequency band and time interval each signal occupies.
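To make that mapping concrete, here is a minimal MATLAB sketch of building a pixel-label mask for a spectrogram image, marking the rows occupied by a signal of known center frequency and bandwidth. The class names, image size, and occupancy values are our own illustrative assumptions, not the exact labels used in the shipped example.

```matlab
% Minimal sketch: build a pixel-label mask that marks which time-frequency
% bins of a spectrogram image are occupied by a signal of known center
% frequency and bandwidth. Class names, image size, and occupancy values
% are illustrative assumptions.
fs      = 61.44e6;                        % sample rate (Hz)
imgSize = [256 256];                      % [frequency bins, time bins]
fc      = 10e6;                           % assumed signal center offset within the band
bw      = 20e6;                           % assumed occupied bandwidth

classNames = ["Noise" "LTE" "NR"];
freqAxis   = linspace(-fs/2, fs/2, imgSize(1));

% Start with every pixel labeled "Noise", then mark the occupied rows "NR";
% the signal is assumed to occupy the same band for the whole frame.
labels   = categorical(repmat("Noise", imgSize), classNames);
occupied = freqAxis >= fc - bw/2 & freqAxis <= fc + bw/2;
labels(occupied, :) = "NR";
```

Paired with the corresponding spectrogram image, a mask like this is exactly the per-pixel ground truth a semantic segmentation network trains against.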
We synthesize our training signals and use channel and RF impairment models to ensure the data matches what the trained network will encounter when tested with OTA signals. The network is trained on frames that each contain either a 5G NR or an LTE signal, randomly shifted in frequency within the band of interest. Each frame is 40 ms long, which corresponds to 40 subframes. For this example, the network assumes that the 5G NR or LTE signal occupies the same band for the whole frame duration.
A sampling rate of 61.44 MHz was used. This rate is high enough to process most of the latest standards-based signals, and several commercially available, low-cost SDR systems can also sample at this rate, which made it practical to test the network with a radio.
Table 1 lists the 5G NR signal parameters (multiple bandwidth and subcarrier-spacing settings) and the LTE signal parameters (different reference channels and bandwidths) we used to synthesize our training data.
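As a rough sketch of this synthesis step, the snippet below uses the 5G Toolbox and LTE Toolbox waveform generators to produce one 40-ms NR frame and one 40-ms LTE frame, resamples both to 61.44 MHz, and applies a random frequency shift. The default carrier configuration, reference channel, and shift range are placeholders for the parameter sweeps in Table 1, and the field path used to read the NR waveform's native sample rate is an assumption worth checking against your toolbox release.

```matlab
% Minimal sketch: synthesize one 40 ms 5G NR frame and one 40 ms LTE frame,
% resample both to the common 61.44 MHz rate, and apply a random frequency
% shift (configurations and the shift range are illustrative placeholders).
fsCommon = 61.44e6;

% 5G NR downlink waveform (5G Toolbox)
nrCfg = nrDLCarrierConfig;
nrCfg.NumSubframes = 40;                               % 40 ms frame
[nrWave, nrInfo] = nrWaveformGenerator(nrCfg);
nrFs = nrInfo.ResourceGrids(1).Info.SampleRate;        % native rate (field path assumed)
[p, q] = rat(fsCommon/nrFs);
nrWave = resample(nrWave, p, q);

% LTE downlink reference channel waveform (LTE Toolbox)
rmc = lteRMCDL('R.7');                                 % placeholder reference channel
rmc.TotSubframes = 40;                                 % 40 ms frame
lteWave = lteRMCDLTool(rmc, [1 0 0 1].');
ofdmInfo = lteOFDMInfo(rmc);
[p, q] = rat(fsCommon/ofdmInfo.SamplingRate);
lteWave = resample(lteWave, p, q);

% Randomly shift the NR signal within the 61.44 MHz band (range is assumed)
t = (0:size(nrWave,1)-1).'/fsCommon;
fShift = (rand - 0.5)*20e6;
nrWave = nrWave .* exp(1j*2*pi*fShift*t);
```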
Table 2 summarizes the impairments applied in our 5G clustered-delay-line (CDL) and LTE fading channel models.
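As a rough illustration of how impairments like these are applied, the sketch below passes the synthesized NR waveform from the previous sketch through an nrCDLChannel object and then adds a carrier frequency offset and noise. The delay profile, Doppler, offset, and SNR values are illustrative assumptions, not the exact entries in Table 2.

```matlab
% Minimal sketch: pass the synthesized NR waveform (nrWave from the
% generation sketch above) through a CDL fading channel, then add a small
% carrier frequency offset and AWGN (values are illustrative).
fs = 61.44e6;

cdl = nrCDLChannel;                          % 5G Toolbox clustered-delay-line channel
cdl.DelayProfile = 'CDL-C';
cdl.SampleRate = fs;
cdl.MaximumDopplerShift = 50;                % Hz (assumed)
cdl.TransmitAntennaArray.Size = [1 1 1 1 1]; % SISO for this sketch
cdl.ReceiveAntennaArray.Size  = [1 1 1 1 1];

rxWave = cdl(nrWave);

% Carrier frequency offset and additive white Gaussian noise (values assumed)
cfo = 1e3;                                   % 1 kHz residual offset
t = (0:numel(rxWave)-1).'/fs;
rxWave = rxWave .* exp(1j*2*pi*cfo*t);
rxWave = awgn(rxWave, 10, 'measured');       % 10 dB SNR
```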
We generated spectrogram images from our synthesized complex baseband signals to represent them in the time-frequency domain. Figure 1 shows a random subset of time-frequency “tiles” from the training frames. You can see that we have a variety of SNR values, bandwidths, and band occupancies.
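The conversion itself can be sketched in a few lines. The FFT length, overlap, and 256-by-256 output size below are our own assumptions for illustration, not the exact settings in the shipped example.

```matlab
% Minimal sketch: convert a 40 ms complex baseband frame (rxWave from the
% impairment sketch above) into a time-frequency image for the network.
fs       = 61.44e6;
nfft     = 4096;
noverlap = 128;

% Two-sided spectrogram centered on 0 Hz, since the input is complex baseband
[~, f, t, p] = spectrogram(rxWave, hann(nfft), noverlap, nfft, fs, 'centered');

% Convert power to dB, rescale to [0, 1], and resize to the network input size
% (replicate to three channels or save as RGB before training -- an assumption)
img = 10*log10(abs(p) + eps);
img = imresize(rescale(img), [256 256]);
imagesc(t, f, img); axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');
```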
We used 80% of the single-signal time-frequency images from the dataset for training and 20% for validation. A semantic segmentation network was created based on ResNet-50, a common network architecture.
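A minimal sketch of that training step is shown below, using deeplabv3plusLayers from the Computer Vision Toolbox. The folder names, class list, and training options are illustrative assumptions rather than the exact settings of the example.

```matlab
% Minimal sketch: assemble the spectrogram/label pairs and train a
% ResNet-50-based DeepLab v3+ semantic segmentation network.
classNames = ["Noise" "LTE" "NR"];
labelIDs   = [0 1 2];

imds = imageDatastore("trainingImages");              % spectrogram images (3-channel PNGs, assumed)
pxds = pixelLabelDatastore("trainingLabels", classNames, labelIDs);
cds  = combine(imds, pxds);

lgraph = deeplabv3plusLayers([256 256 3], numel(classNames), "resnet50");

opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.02, ...
    'MaxEpochs',        20, ...
    'MiniBatchSize',    16, ...
    'Shuffle',          'every-epoch', ...
    'Verbose',          false);

net = trainNetwork(cds, lgraph, opts);
```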
Our next step is to test the network signal identification performance using synthesized signals that contain both 5G NR and LTE signals. Figure 2 shows the normalized confusion matrix for all test frames as a heat map. The results are positive as most of the network predictions match the ground truth.
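A sketch of this evaluation step, assuming the test spectrograms and ground-truth labels are stored in datastores (the locations below are assumptions; classNames, labelIDs, and net come from the training sketch):

```matlab
% Minimal sketch: segment held-out test spectrograms and compare against
% the ground-truth pixel labels.
imdsTest  = imageDatastore("testImages");
pxdsTruth = pixelLabelDatastore("testLabels", classNames, labelIDs);

pxdsPred = semanticseg(imdsTest, net, 'WriteLocation', tempdir);
metrics  = evaluateSemanticSegmentation(pxdsPred, pxdsTruth);

% Row-normalized confusion matrix across classes, as in Figure 2
confusionchart(metrics.ConfusionMatrix.Variables, classNames, ...
    'Normalization', 'row-normalized');
```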
Figure 3 shows the received spectrum, true labels, and predicted labels for one of the resulting images.
As noted earlier, the plots above were generated from testing that used synthesized data. To see how the trained network performed with OTA signals, we used an ADALM-PLUTO radio and captured signals from a nearby base station. Figure 4 shows the results when LTE signals are sent through the network. Figure 5 shows the results when 5G NR signals are sent through the network.
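For reference, a minimal sketch of such an OTA capture is shown below. The center frequency is a placeholder for whichever downlink band your local base station occupies, the Communications Toolbox Support Package for ADALM-PLUTO Radio is assumed to be installed, and the SamplesPerFrame setting should be checked against the limits of your release.

```matlab
% Minimal sketch: capture one frame with an ADALM-PLUTO SDR and run it
% through the trained network (frequency and frame settings are assumptions).
fs = 61.44e6;

rx = sdrrx('Pluto', ...
    'CenterFrequency',    2.14e9, ...      % assumed LTE/NR downlink frequency
    'BasebandSampleRate', fs, ...
    'SamplesPerFrame',    40e-3*fs, ...    % one 40 ms frame (check your release's limit)
    'OutputDataType',     'double');

rxOTA = rx();                              % capture one frame
release(rx);

% Reuse the same time-frequency image conversion as in training, then segment
[~, ~, ~, p] = spectrogram(rxOTA, hann(4096), 128, 4096, fs, 'centered');
img  = imresize(rescale(10*log10(abs(p) + eps)), [256 256]);
pred = semanticseg(repmat(img, 1, 1, 3), net);   % replicate to 3 channels for the ResNet-50 input
imshow(labeloverlay(img, pred))                  % compare with Figures 4 and 5
```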
The trained network can distinguish between 5G NR and LTE signals, as shown in the two example captures from real base stations. The network may not identify every captured signal correctly. However, it’s straightforward to enhance the training data, either by generating more representative synthetic signals or by capturing OTA signals and adding them to the training set.
To learn more about the topics covered in this blog and explore your own designs, see the examples below or email me at [email protected]:
- Spectrum Sensing with Deep Learning to Identify 5G and LTE Signals (Example Code): Learn how to train a semantic segmentation network using deep learning for spectrum monitoring. Verify the results using over-the-air testing with an SDR.
- Test a Deep Neural Network with Captured Data to Detect WLAN Router Impersonation (Example): Learn how to train an RF fingerprinting convolutional neural network (CNN) with captured data. You can capture WLAN beacon frames from real routers using a software-defined radio (SDR).
- Radar Waveform Classification Using Deep Learning (Example): Learn how to classify the waveform types of synthesized radar signals using the Wigner-Ville distribution (WVD) and a deep CNN.
Rick Gentile is Product Manager, Ethem Sozer is Principal Engineer, Jameerali Mujavarshaik is Senior Engineer, and Honglei Chen is Principal Engineer at MathWorks.