2.1. Dynamic Spectrum Access: Signal Processing and ML/DL Approaches
DSA was first considered two decades ago, based on measurements of actual spectrum usage performed by the Spectrum Policy Task Force of the Federal Communications Commission (FCC) [20]. Contrary to the widespread belief in a well-crowded spectrum, actual measurements indicated that many portions of the spectrum are idle at a given time and location, leading to spectral holes. This is a consequence of the fixed spectrum access policy. Under DSA, secondary users (i.e., unlicensed users) are allowed to utilize the spectral holes of primary users (i.e., licensed users), without interfering with the operation of the primary users, through a
cognitive radio [
21,
22,
23,
24], which is an intelligent system built on a software-defined radio. Here, we briefly review
spectrum sensing techniques associated with DSA; the reader is referred to [
25,
26,
27] for other aspects of DSA, such as spectrum management and spectrum sharing policies.
Several techniques have been proposed for both narrowband and wideband spectrum sensing. For narrowband sensing, the main non-ML techniques are: energy detection [
28,
29,
30,
31], matched filter based detection [
32,
33,
cyclostationary feature detection [
35,
36], and covariance based detection [
37,
38,
39]. The energy detection and covariance-based detection approaches do not require prior knowledge of the primary user’s signal; however, the former is not reliable at low signal-to-noise ratio (SNR) values. Matched filter based detection provides optimal sensing and better detection at low SNR values, but it requires prior knowledge of the primary user’s signal, which may not always be available. Cyclostationary feature detection is generally robust against noise uncertainty and works well at low SNR values; however, it suffers from long sensing times. For wideband sensing, as a straightforward approach, a wideband signal may be divided into a set of narrowband signals via a temporal DFT or a filter bank [
40,
41,
42,
43] and sensed either sequentially or in parallel using narrowband sensing techniques. However, sequential sensing requires a longer time, whereas parallel sensing requires more resources. Because a wideband signal with sparse spectral occupancy can be considered a sparse signal in the frequency domain, several compressive sensing based techniques have been proposed for wideband sensing [
44,
45,
46,
47]. These approaches require lower sampling rates and have lower power consumption, in addition to faster sensing. Thus, compressive sensing based approaches eliminate most of the drawbacks of the sequential or parallel narrowband sensing of a wideband signal.
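As a minimal illustration of the simplest narrowband technique above, energy detection compares the average received power with a threshold. The sketch below assumes a complex tone as a stand-in primary-user signal and an arbitrarily chosen threshold; in practice, the threshold would be derived from a target false-alarm probability:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Declare the channel occupied if the average sample energy
    exceeds the threshold (classical narrowband energy detection)."""
    test_statistic = np.mean(np.abs(samples) ** 2)
    return test_statistic > threshold

rng = np.random.default_rng(0)
n = 1024
# Unit-power complex Gaussian noise
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
# Hypothetical primary-user signal: a unit-amplitude complex tone
tone = np.exp(2j * np.pi * 0.1 * np.arange(n))

threshold = 1.5  # arbitrary for illustration
print(energy_detect(noise, threshold))         # noise only: statistic near 1 -> False
print(energy_detect(noise + tone, threshold))  # signal present: statistic near 2 -> True
```

As noted above, this detector degrades at low SNR, where the test statistic of an occupied channel becomes statistically indistinguishable from that of noise alone.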
In the last decade, a considerable number of ML and DL techniques have been proposed for spectrum sensing. ML algorithms have mostly been used to determine the channel occupancy patterns of a primary user and to classify a channel as either free or occupied. In [
48,
49,
50,
51,
52], K-means clustering and Gaussian mixture models have first been used to determine the presence of primary users, and a classifier such as a support vector machine or K-nearest-neighbor has then been employed to determine whether a channel is free or occupied. DL techniques such as convolutional neural networks and long short-term memory networks have been employed for spectrum sensing in [
53,
54,
55,
56], which outperformed the conventional techniques. Recently, a transformer-based DL architecture has been proposed in [
57]. This DL approach outperforms the previously proposed approaches based on convolutional neural networks. Another DL approach, employed for cooperative spectrum sensing, is deep reinforcement learning [
58,
59,
60,
61]. These deep reinforcement learning techniques improve the robustness of the spectrum sensing system and allow more accurate decisions to be made in dynamic environments. The reader is referred to [
62,
63,
64] for a comprehensive review of spectrum sensing techniques.
Spatio-temporal modeling of cognitive radio systems using multi-dimensional signal processing concepts has been presented in [
65]. Here, directional sensing has been exploited using three-dimensional infinite impulse response (IIR) filters as beamformers. Similar directional sensing approaches using multi-dimensional filters were presented in [
66,
67,
68]. Furthermore, antenna array systems have been employed for directional sensing in [
69,
70,
71]. In [
72], a DFT-based multibeamformer has been employed for real-time directional sensing. In contrast to the multibeamformer employed with the ADFT in this paper, the exact DFT has been employed in [72], with 16 simultaneous beams at 2.4 GHz.
2.2. Radio Astronomy and RFI
Radio astronomy relies on a minimally contaminated radio spectrum to observe the universe [
Detecting and estimating weak celestial signals arriving from faraway galaxies is particularly required for achieving the key science goals of the next-generation radio telescopes, i.e.,
The Square Kilometre Array (SKA) [
74],
The 2030 Atacama Large Millimeter/sub-millimeter Array - Wideband Sensitivity Upgrade (ALMA-WSU) [
75] and
The next-generation Very Large Array (ngVLA) [
76]. Despite their locations in remote, sparsely populated regions of the world, the increasing intensity, frequency, bandwidth, and occurrence of RFI are threatening the utility of the next-generation radio telescopes [
77]. RFI from Global navigation satellite system (GNSS) [
78], terrestrial and airborne wireless communications systems, satellite mega constellations [
79] and other technologies are encroaching on the spectra that have been exclusively used by radio astronomers, making it harder to distinguish faint celestial signals from RFI.
The
International Telecommunication Union (ITU) and the
FCC have important roles in protecting the spectrum for radio astronomy and other passive users. The ITU, through its Radiocommunication Sector (ITU-R), is responsible for developing international regulations and recommendations for the use of radio frequencies. It recognizes the significance of protecting the spectrum for passive services, such as radio astronomy, Earth exploration-satellite services, and meteorological satellite services. The ITU-R identifies specific frequency bands for these passive services and establishes regulatory provisions to ensure their protection from harmful interference. These provisions include coordination procedures, power limits, and frequency separation requirements to safeguard the sensitive observations and measurements conducted by radio astronomers and other passive users [
80,
81]. Similarly, the FCC in the United States recognizes the importance of protecting the spectrum for radio astronomy and other passive services. The FCC’s Office of Engineering and Technology (OET) formulates policies and rules to prevent harmful interference to these services. It establishes technical standards and licensing conditions that take into account the needs of passive users, including radio astronomers. The FCC also works closely with the National Science Foundation (NSF) and other relevant agencies to coordinate spectrum usage and protect the integrity of scientific observations. However, both organizations are under relentless pressure from industry and governments for more commercial utilization of the spectrum that has previously been allocated to non-commercial users.
Therefore, ideally, awareness of the key characteristics of RFI (i.e., frequency range, modulation scheme, source locations, intensity, etc.) would help radio telescopes to plan and schedule observations. Nevertheless, this is not practical for all cases of RFI; therefore, records of RFI in terms of strength, direction, duration, and time of incidence would also help radio astronomers to flag and excise contaminated observations [
82]. In this case, the
omnipresent perception of the EM environment would be a huge boon in the post-processing stage of radio astronomical observations [
83].
2.3. Approximate DFT
The discrete Fourier transform is a linear operator that relates an input $N$-point signal $\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^\top$ to the output signal $\mathbf{X} = [X[0], X[1], \ldots, X[N-1]]^\top$ according to the following expression:
$$X[k] = \sum_{n=0}^{N-1} x[n] \, \omega_N^{kn}, \quad k = 0, 1, \ldots, N-1,$$
where $\omega_N = e^{-2\pi j / N}$ is the $N$th root of unity and $j = \sqrt{-1}$. Computed by definition, the DFT presents a computational complexity in $O(N^2)$, which is prohibitively high for real-time applications.
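As a quick numerical check, the $O(N^2)$ matrix-vector form of the DFT can be compared against a library FFT routine; the following minimal sketch uses NumPy:

```python
import numpy as np

def dft_by_definition(x):
    """Compute the N-point DFT directly from its definition:
    X[k] = sum_n x[n] * exp(-2j*pi*k*n/N), an O(N^2) operation."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # the DFT matrix F_N
    return W @ x

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft_by_definition(x), np.fft.fft(x)))  # True
```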
In practice, the DFT is computed by means of one of many efficient numerical routines—fast algorithms, which are collectively referred to as “FFTs”. Particularly popular choices of FFTs are the radix-2-based algorithms, such as the Cooley–Tukey FFT, which recursively decomposes the $N$-point DFT block into two $(N/2)$-point DFT sub-blocks [84].
In general, the FFTs are capable of reducing the computational complexity of the DFT computation from $O(N^2)$ to $O(N \log N)$ [85]. In many cases [
86,
87,
88,
89,
90,
91], such computational cost reduction achieves the theoretical minimum multiplicative complexity of the DFT [
92]. As a consequence, it is impossible to further reduce the multiplicative complexity. Since this is a quite mature area of research, the proposition of new FFTs capable of offering significant reductions in computational cost is unlikely.
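The radix-2 decomposition mentioned above can be sketched in a few lines. The recursion below (assuming $N$ is a power of two) combines the DFTs of the even- and odd-indexed subsequences with the twiddle factors $\omega_N^k$:

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT: splits an N-point DFT into
    two N/2-point DFTs, giving O(N log N) complexity (N a power of two)."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft_radix2(x[0::2])               # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])                # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Butterfly combination: X[k] = E[k] +/- w^k O[k]
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(32)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # True
```

Production FFT libraries implement far more optimized variants of this same idea; the sketch only illustrates the divide-and-conquer structure behind the complexity reduction.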
However, noticeable reductions can be accomplished under a fault-tolerant paradigm. If the DFT computation is permitted a given error level, then the computational effort can be re-adjusted to meet the lower, but acceptable, precision level. As a consequence, fewer arithmetical operations are needed for the DFT estimation [
93].
Systematically, a way to obtain such low-complexity DFT estimators is by means of matrix approximation theory. In matrix formalism, the DFT is described by the following operation:
$$\mathbf{X} = \mathbf{F}_N \cdot \mathbf{x},$$
where the $N$-point DFT matrix, denoted by $\mathbf{F}_N$, is the $N \times N$ matrix whose elements are given by $[\mathbf{F}_N]_{k,n} = \omega_N^{kn}$, $k, n = 0, 1, \ldots, N-1$.
In the context of approximations for discrete transforms, qualitatively, an approximate discrete transform, $\hat{\mathbf{F}}_N$, is a transform matrix such that $\hat{\mathbf{F}}_N \approx \mathbf{F}_N$. In other words, the exact and the approximate transform-domain signals are “close” in some relevant sense to be quantitatively defined. Generally, an approximate transform must satisfy the following conditions: (i) it largely preserves meaningful properties of the exact transform; (ii) it is mathematically close to the exact transform matrix according to a contextually relevant metric function—usually a performance figure of merit; and (iii) it possesses a significantly lower computational cost compared to the exact computation by FFTs. An overview of the theory is available in [94].
Usually, in order to preserve the physical interpretation of the approximate spectrum, approximate transformation matrices are sought to be close, in the Euclidean sense, to the associated exact transformation matrix. This can be accomplished by the minimization of the Frobenius norm of the difference between the exact matrix and the candidate matrices for the approximation, subject to the restriction of low complexity. Therefore, a low-complexity approximation can be obtained by restricting its elements to numerical sets of trivial multiplicands, such as $\{0, \pm 1\}$ or $\{0, \pm 1, \pm 2\}$.
In symbols, a possible formulation for the problem of deriving a DFT approximation is given by [94]:
$$\hat{\mathbf{F}}_N = \arg\min_{\mathbf{C}} \|\mathbf{F}_N - \mathbf{S} \cdot \mathbf{C}\|_{\text{F}},$$
where $c_{k,n}$, $k, n = 0, 1, \ldots, N-1$, are the entries of the candidate approximate matrix $\mathbf{C}$, restricted to a set of trivial multiplicands, $\|\cdot\|_{\text{F}}$ denotes the matrix Frobenius norm, and $\mathbf{S}$ is a normalization matrix to ensure that the basis vectors of $\mathbf{S} \cdot \hat{\mathbf{F}}_N$ have energy equal or near to one.
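To make the formulation concrete, the sketch below builds a small feasible candidate for an 8-point DFT approximation by rounding the real and imaginary parts of each entry of $\mathbf{F}_8$ to the nearest value in $\{0, \pm 1\}$, forms a diagonal normalization matrix $\mathbf{S}$ from the row energies, and evaluates the Frobenius-norm objective. This candidate is purely illustrative, not the ADFT adopted in this work:

```python
import numpy as np

N = 8
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # exact DFT matrix F_N

# Illustrative low-complexity candidate: round the real and imaginary
# parts of each entry to the nearest value in {0, +1, -1}. This is just
# one feasible point of the search space, not the ADFT of the paper.
C = np.round(F.real) + 1j * np.round(F.imag)

# Diagonal normalization matrix S, scaling each basis vector (row)
# of C to unit energy, as required by the formulation.
S = np.diag(1.0 / np.linalg.norm(C, axis=1))

# Frobenius-norm objective, comparing against the row-normalized F_N
err = np.linalg.norm(F / np.sqrt(N) - S @ C, ord='fro')
print(err)
```

The actual optimization searches over all candidate matrices with entries in the chosen trivial-multiplicand set and keeps the minimizer; the rounding above merely exhibits one such candidate and its objective value.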
In this work, we adopted the ADFT introduced mathematically in [
95] and further elaborated in theory and in hardware in [
96,
97].
Figure 1 shows a comparison of the magnitude responses for the 32-point exact DFT and the 32-point ADFT.
Represented by the low-complexity matrix $\hat{\mathbf{F}}_{32}$, the selected ADFT admits the following factorization:
$$\hat{\mathbf{F}}_{32} = \mathbf{A}_K \cdot \mathbf{A}_{K-1} \cdots \mathbf{A}_1,$$
where $\mathbf{A}_i$, $i = 1, 2, \ldots, K$, are sparse, low-complexity matrices. In Appendix A, we provide the explicit definition of the matrices $\mathbf{A}_i$, $i = 1, 2, \ldots, K$.
For comparison, Table 1 lists the arithmetic operation count for the exact 32-point DFT computed according to its definition and according to the radix-2 Cooley–Tukey FFT. For this operation counting, we assume that (i) the input signal is purely real-valued; (ii) trivial multiplications by $0$, $\pm 1$, or $\pm j$ are not counted; and (iii) a multiplication between a real number and a complex number is performed as two real multiplications—not as a complex multiplication [98]. Due to (i) and (iii), the 704 non-trivial multiplicands of $\mathbf{F}_{32}$ are equivalent to 1408 real multiplications. The evaluation of the number of additions considered the sum of the numbers of real- and imaginary-part additions. The arithmetic complexity of the discussed 32-point ADFT, computed by its definition and by means of its fast algorithm (factorization), is also provided. The complexity analysis of the radix-2 Cooley–Tukey FFT is detailed in [98].
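Under assumptions (i)–(iii), the multiplicand count can be verified numerically: an entry of $\mathbf{F}_{32}$ is a trivial multiplicand exactly when it equals $0$, $\pm 1$, or $\pm j$, and every remaining entry costs two real multiplications for a real-valued input. A short sketch of this count:

```python
import numpy as np

N = 32
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)  # exact 32-point DFT matrix

def is_trivial(z, tol=1e-9):
    """Multiplications by 0, +/-1, and +/-j are considered free."""
    return any(abs(z - t) < tol for t in (0, 1, -1, 1j, -1j))

nontrivial = sum(1 for z in F.ravel() if not is_trivial(z))
print(nontrivial)      # 704 non-trivial complex multiplicands
print(2 * nontrivial)  # 1408 real multiplications for a real-valued input
```

The counts agree with the 704 non-trivial multiplicands and 1408 real multiplications stated for the direct evaluation of the exact 32-point DFT.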
Table 2 shows the arithmetic complexity of each individual matrix term of the $\hat{\mathbf{F}}_{32}$ factorization. One of the factor matrices presents no arithmetic cost because it consists only of sign-changing and data-swapping (real- and imaginary-part interchanging) operations. The data interchange is a consequence of the multiplication by $j$ (a rotation by $\pi/2$).