1. Introduction
Quantum Machine Learning, the use of quantum algorithms to learn quantum or classical systems, has attracted substantial research in recent years, with some algorithms possibly gaining an exponential speedup. Since machine learning routines often push real-world limits of computing power, an exponential improvement in algorithm speed would enable such systems to have vastly greater capabilities. Google's 'Quantum Supremacy' experiment [1] showed that quantum computers can naturally solve certain problems with complex correlations between inputs that can be incredibly hard for traditional ("classical") computers. This result suggests that machine learning models executed on quantum computers could be more effective for certain applications. It seems quite possible that quantum computing could lead to faster computation, better generalisation on less data, or both, for an appropriately designed learning model. Hence it is of great interest to discover and model the scenarios in which such a "quantum advantage" can be achieved. A number of such "Quantum Machine Learning" algorithms are detailed in papers such as [2,3,4,5,6]. Many of these methods claim to offer exponential speedups over analogous classical algorithms. However, on the path from theory to technology, significant gaps remain between theoretical prediction and implementation. These gaps result in unforeseen technological hurdles and sometimes misconceptions, necessitating more careful case-by-case studies.
In this work, we start from a theoretical abstraction of a well-known technical problem in signal processing over optic fibre communication links. Specifically, problems and opportunities are demonstrated for the k nearest-neighbour clustering algorithm when applied to the real-world problem of decoding 64-QAM data provided by Huawei. It is known from the literature that the k nearest-neighbour clustering algorithm can be applied to solve the problem of phase estimation in optical fibres [7,8].
A quantum version of this k nearest-neighbour clustering algorithm has been developed in [6], promising an exponential speedup. However, the practical usefulness of this algorithm is under debate [9]. There are claims that the speedup is reduced to only polynomial once the quantum version of the algorithm takes into account the time taken to prepare the necessary quantum states. This work builds upon several observations. First, in any classical implementation of k nearest-neighbour clustering, it is possible to vary the loss function. Second, this observation carries over to hybrid quantum-classical implementations of k nearest-neighbour algorithms which use quantum methods only to calculate the loss function. Third, existing QRAMs are unsuitable for storing quantum states over several steps of a quantum algorithm due to very poor decoherence times, a hitherto insurmountable practical impairment. Fourth, the encoding of classical data into quantum states has been proven to be a complex task which significantly reduces the advantage of known quantum machine learning algorithms [9]. In this work, we therefore restrict the use of quantum methods to the calculation of distance loss functions. We fold the encoding of classical data into quantum states into a data pre-processing step, and we minimise the storage time of quantum states by encoding the states before each shot and using destructive measurements. In the case of angle embedding, the pre-processing of data before encoding with the unitary is the critical step. This work introduces a method of encoding using the inverse stereographic projection and focuses on its performance on real-world 64-QAM data. We also introduce an analogous classical, quantum-inspired algorithm. In the remainder of this section, we introduce the problem to be tackled - clustering of 64-QAM optic fibre transmission data - as well as the experimental setup used, discuss the related body of work, and state our contribution.
Section 2 introduces the preliminaries required for understanding our approach. Section 3 presents the developed stereographic quantum k nearest-neighbour clustering and quantum analogue k nearest-neighbour clustering algorithms. Section 4 describes the various experiments for testing the algorithms, presents the obtained results, and discusses the conclusions drawn from them. Section 5 concludes this work and proposes some directions for future research.
1.1. Quadrature Amplitude Modulation (QAM) and Clustering
Quadrature amplitude modulation (QAM) conveys multiple digital bits with each transmission by mixing amplitude and phase variations in a carrier frequency, i.e., by changing (modulating) the amplitudes of two carrier waves. The two carrier waves (of the same frequency) are out of phase with each other by 90°, i.e., they are the sine and cosine waves of a given frequency. This condition is known as orthogonality or quadrature. The transmitted signal is created by adding the two carrier waves (the sine and cosine components) together. At the receiver, the two waves can be coherently separated (demodulated) because of their orthogonality. QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in the 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel [10]. QAM allows us to transmit multiple bits for each time interval of the carrier symbol, where a "symbol" is some unique combination of phase and amplitude [11].
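To make the mixing of the two quadrature carriers concrete, the following minimal Python sketch (our own illustration, not from any cited implementation; the carrier frequency and symbol amplitudes are arbitrary choices) modulates one symbol onto a cosine and a sine carrier and recovers both amplitudes by coherent demodulation:

    import numpy as np

    f_c = 1.0e3                                       # carrier frequency in Hz (arbitrary)
    t = np.linspace(0.0, 1e-3, 1000, endpoint=False)  # exactly one carrier period
    a_I, a_Q = 3, -5                                  # hypothetical in-phase and quadrature amplitudes

    # Transmitted passband signal: sum of two carriers 90 degrees out of phase.
    s = a_I * np.cos(2 * np.pi * f_c * t) - a_Q * np.sin(2 * np.pi * f_c * t)

    # Coherent demodulation: mixing with each carrier and averaging over one
    # period recovers each amplitude (up to the factor 1/2 removed here).
    rec_I = 2 * np.mean(s * np.cos(2 * np.pi * f_c * t))
    rec_Q = -2 * np.mean(s * np.sin(2 * np.pi * f_c * t))
    print(rec_I, rec_Q)  # approximately 3.0 and -5.0

The cross terms average to zero because of the orthogonality of the sine and cosine carriers, which is exactly why the two amplitudes can be separated at the receiver.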
In this work, each transmitted signal corresponds to a complex number $s$:

$$ s = \sqrt{P_0}\, e^{i\theta} \qquad (1) $$

where $P_0$ is the initial transmission power and $\theta$ is the phase of $s$. The case shown in Equation (1) is ideal; however, in real-world systems, noise affects the transmitted signal, distorting it and scattering it in the amplitude and phase space. For our case, the received and partially processed noisy signal can be modelled as

$$ \tilde{s} = s + \delta $$

where $\delta$ is a random noise affecting the overall value of the ideal amplitude and phase. This model motivates the use of nearest neighbour clustering for cases when the noise $\delta$ causes the received signal to be scattered in the vicinity of the ideal signal $s$.
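This scattering model can be illustrated with a short simulation. The following sketch (our own illustration; the grid normalisation and noise scale are arbitrary assumptions, not values from the dataset) scatters ideal 64-QAM points with additive complex Gaussian noise and decodes each received point by its nearest constellation point:

    import numpy as np

    rng = np.random.default_rng(0)

    # Ideal 64-QAM alphabet on an 8x8 grid (hypothetical normalisation).
    levels = np.arange(-7, 8, 2)
    alphabet = np.array([a + 1j * b for a in levels for b in levels])

    # Transmit random symbols; add complex Gaussian noise as in the simple
    # scattering model above (real channels add further impairments).
    tx = rng.choice(alphabet, size=1000)
    rx = tx + rng.normal(scale=0.5, size=1000) + 1j * rng.normal(scale=0.5, size=1000)

    # Nearest-neighbour decoding: assign each received point to the closest
    # ideal constellation point.
    decoded = alphabet[np.argmin(np.abs(rx[:, None] - alphabet[None, :]), axis=1)]
    print(np.mean(decoded != tx))  # symbol error rate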
In [6], an algorithm is proposed that solves the problem of clustering N-dimensional vectors into M clusters in $O(\log(MN))$ time on a quantum computer, compared to $O(\mathrm{poly}(MN))$ time for the (then) best known classical algorithm. The approach detailed in [6] requires querying the QRAM to prepare a 'mean state', which is then projected using the SWAP test and used to find the inner product between the centroid (by default the mean point) and a given point. However, there exist some significant caveats to this approach. Firstly, this algorithm achieves an exponential speedup only when comparing the bit-to-bit processing time with the qubit-to-qubit processing time; if one compares the bit-to-bit execution times of both algorithms, the exponential speedup disappears [9,12]. Secondly, since sufficiently stable quantum memories do not exist, a hybrid quantum-classical approach must be used in real-world applications: all the information is stored in classical memories, and the states to be used in the algorithm are prepared in real time. This process is known as 'data embedding', since we are embedding the classical data into quantum states. As mentioned before, this slows down the algorithm to only a polynomial advantage over classical k-means [12,13]. However, we propose an approach whereby this embedding step can be treated as a data pre-processing step, allowing us to still achieve an advantage and make the quantum approach viable. Quantum-inspired algorithms have shown a lot of promise in achieving some types of advantage demonstrated by quantum algorithms [12,13,14,15], but as [5] remarks, the massive increase in runtime with rank, condition number, Frobenius norm, and error threshold makes the algorithms proposed in [9,12] impractical for matrices arising from real-world applications. This observation is supported by [16]. In this work, we develop a classical algorithm analogous to our proposed quantum algorithm to overcome the many issues faced by quantum algorithms. This work focuses on (a) developing the stereographic quantum and quantum-inspired k nearest-neighbour algorithms, and (b) experimentally verifying the viability of the stereographic quantum-inspired k nearest-neighbour classical algorithm on real-world 64-QAM communication data.
1.2. Experimental Setup for Data Collection
The dataset contains a launch power sweep of an 80 km fibre transmission of coherent 80 GBd dual polarization (DP)-64QAM with a gross data rate of 960 Gb/s. In this experiment, the channel under test (CUT) carries an 80 GBd DP-64QAM signal. We use a 15% overhead for FEC and a 3.47% overhead for pilots and training sequences, so the net bit rate is 800 Gb/s (pilots and training sequences are removed in the published dataset). The experimental setup used to capture this real-world dataset is shown in Figure 1. Four 120 GSa/s digital-to-analog converters (DACs) generate an electrical signal amplified by four 60 GHz 3 dB-bandwidth amplifiers. A tunable 100 kHz external cavity laser (ECL) source generates a continuous wave signal that is modulated by a 32 GHz DP-I/Q modulator. The receiver consists of an optical 90°-hybrid and four 100 GHz balanced photodiodes. The electrical signals are digitized using four 10-bit analog-to-digital converters (ADCs) with 256 GSa/s and 110 GHz bandwidth. Subsequently, the raw signals are preprocessed by the receiver digital signal processing (DSP) blocks. The datasets were collected in a very short time, limited by the memory size of the oscilloscope; this is referred to as offline processing. At the receiver, the signals were normalized to fit the alphabet. The average launch power (the laser power fed into the fibre) in watts can be calculated from its value in dBm as

$$ P_{\mathrm{W}} = \frac{1}{1000} \cdot 10^{P_{\mathrm{dBm}}/10}. $$
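As a quick numerical check, this standard dBm-to-watts conversion can be computed as follows (our own helper, not part of the dataset tooling):

    def dbm_to_watts(p_dbm: float) -> float:
        # 0 dBm corresponds to 1 mW; divide by 1000 to convert mW to W.
        return 10 ** (p_dbm / 10) / 1000.0

    print(dbm_to_watts(6.6))  # ~0.00457 W at the optimal launch power of 6.6 dBm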
There are 4 sets of published data with different launch powers, corresponding to different noise levels during transmission: 2.7 dBm, 6.6 dBm, 8.6 dBm, and 10.7 dBm. Each data set consists of 3 variables:
'alphabet': The initial analog values at which the data was transmitted, in the form of complex numbers, i.e., for an entry $(a, b)$, the transmitted signal was of the form $a + ib$. Since the transmission protocol is 64-QAM, there are 64 values in this variable. The transmission alphabet is the same irrespective of the channel noise.
'rxsignal': The analog values of the signal detected by the receiver. This data is in the form of a matrix with 5 columns: each datapoint was transmitted 5 times, so each row contains the values detected by the receiver during the different instances of the transmission of the same datapoint, while different rows correspond to distinct transmitted datapoint values.
'bits': The true labels for the transmitted points. This data is in the form of a matrix with 6 columns. Since the protocol is 64-QAM, each analog point represents 6 bits; these 6 bits are the entries of the columns, and each row represents the correct label for a unique transmitted datapoint value. The first 3 bits encode the column and the last 3 bits encode the row - see Figure 2.
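The following sketch illustrates how these variables fit together for demapping. The arrays here are synthetic stand-ins with the shapes described above, and the bit mapping is the plain binary expansion of the symbol index, a placeholder for the actual Figure 2 alphabet:

    import numpy as np

    rng = np.random.default_rng(1)
    levels = np.arange(-7, 8, 2)
    alphabet = np.array([a + 1j * b for a in levels for b in levels])  # 64 symbols
    idx = rng.integers(0, 64, size=100)                                # true symbol indices
    # 5 noisy receptions per datapoint, mimicking the 'rxsignal' layout.
    rxsignal = alphabet[idx][:, None] + 0.3 * (rng.normal(size=(100, 5))
                                               + 1j * rng.normal(size=(100, 5)))

    # Demap the first transmission instance to the nearest alphabet point.
    nearest = np.argmin(np.abs(rxsignal[:, 0][:, None] - alphabet[None, :]), axis=1)
    # Placeholder 6-bit labels, mimicking the 'bits' layout (one row per datapoint).
    bits_hat = (nearest[:, None] >> np.arange(5, -1, -1)) & 1         # shape (100, 6)
    print(np.mean(nearest == idx))  # fraction of correctly demapped symbols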
The data as well as the noise model have been visualised in detail in Appendix A; a few key figures are included here for context. Figure 2 shows the transmission alphabets for all the different channels. Figure 3 shows the received data (all 5 instances of transmission) for the dataset with the least noise (2.7 dBm), and Figure 4 shows the received data (all 5 instances of transmission) for the dataset with the most noise (10.7 dBm). One can see from these figures that as the noise in the channel increases, the points are scattered further away from the initial alphabet. In addition, the non-linear noise effects also increase, causing distortion of the 'shape' of the data, most clearly visible in Figure 4 - especially near the 'corners'. The birefringence phase noise also increases with the channel noise, causing all the points to be 'rotated' about the origin.
Once the centroids have been found and the data has been clustered, as mentioned before, we need to 'de-map' the analog centroid values and clusters to bit-strings. For this, we need a de-mapping alphabet which maps the analog values of the alphabet to the corresponding bit strings; it is depicted in Figure 2. It can be seen from the figure that, as in most cases, the points are Gray coded, i.e., adjacent points differ in their binary translation by only 1 bit. This helps minimise the number of bit errors per symbol error in case of misclassification or exceptionally high noise: if a point is misclassified, it will most probably be assigned to a neighbouring cluster, and since neighbouring clusters differ by only 1 bit, the bit error rate is minimised. Due to Gray coding, a symbol error typically flips only one of the 6 bits per symbol, so the bit error rate is approximately 1/6 of the symbol error rate.
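A small sketch verifying this adjacency property for the standard binary-reflected Gray code (we assume here that the Figure 2 alphabet follows this common construction for its 3-bit row and column codes):

    def gray(n: int) -> int:
        # Binary-reflected Gray code of n.
        return n ^ (n >> 1)

    codes = [gray(n) for n in range(8)]       # the eight 3-bit codewords
    for a, b in zip(codes, codes[1:]):
        assert bin(a ^ b).count("1") == 1     # adjacent codewords differ in exactly 1 bit
    print([format(c, "03b") for c in codes])  # 000 001 011 010 110 111 101 100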
1.3. Related Work
A unifying overview of several quantum algorithms is presented in [17] in a tutorial style. An overview targeting data scientists is given in [18]. The idea of using quantum information processing methods to obtain speedups for the k-means algorithm was proposed in [19]. In general, neither the best nor even the fastest method for a given problem and problem size can be uniquely ascribed to either the class of quantum or classical algorithms, as can be seen in the detailed discussion presented in [5]. The advantages of using local (classical) processing units alongside quantum processing units in a distributed fashion are quantified in [20]. The accuracy of (quantum) k-means has been demonstrated experimentally in [21,22], while quantum circuits for loading classical data into a quantum computer are described in [23].
Recent works such as [12] suggest that even the best QML algorithms, without state preparation assumptions, fail to achieve exponential speedups over their classical counterparts. In [13] it is pointed out that most QML algorithms are incomparable to classical algorithms, since they take quantum states as input and output quantum states, and there is no analogous classical model of computation in which one could search for similar classical algorithms. In [13], the idea of matching state preparation assumptions with $\ell^2$-norm sampling assumptions (first proposed in [12]) is implemented by introducing a new input model, sample and query access (SQ access). In [13], the quantum k-means algorithm described in [6] is 'de-quantised' using the 'toolkit' developed in [12], i.e. a classical algorithm is given that, with classical SQ access assumptions replacing quantum state preparation assumptions, matches the bounds and runtime of the corresponding quantum algorithm up to polynomial slowdown. From the works [12,13,24], we can conclude that the exponential speedups of many quantum machine learning algorithms under consideration arise not from the 'quantumness' of the algorithms but from strong input assumptions, since the exponential part of the speedups vanishes when classical algorithms are given analogous assumptions. In other words, in a wide array of settings, on classical data, these algorithms do not give exponential speedups but rather yield polynomial speedups.
The fundamental aspect that allowed for the exponential speedup in [12] vis-à-vis classical recommendation system algorithms is the type of problem being addressed by the recommendation systems in [4]. The philosophy of recommendation algorithms before this breakthrough was to estimate all the possible preferences of a user and then suggest one or more of the most preferred objects. The quantum algorithm promised an exponential speedup but provided a recommendation without estimating all the preferences; namely, it only provided a sample of the most preferred objects. This process of sampling, along with state preparation assumptions, was in fact what gave the quantum algorithm its exponential advantage. The new classical algorithm obtains comparable speedups also by only providing samples rather than solving the whole preference problem. In [13], it is argued that the time taken to create the quantum state should be included in the comparison, since this time is not insignificant; it is also claimed that for every such linear-algebraic quantum machine learning algorithm, a polynomially slower classical algorithm can be constructed by using the binary tree data structure described in [12]. Since then, more sampling algorithms have shown that multiple quantum exponential speedups are due not to the quantum algorithms themselves but to the way data is provided to the algorithms and the way the quantum algorithm provides the solutions [13,24,25,26]. Notably, in [26] it is argued that there exist competing classical algorithms for all linear algebra subroutines, and thus for many quantum machine learning algorithms. However, as pointed out in [5] and proven in [16], there exist significant caveats to these results on quantum-inspired algorithms. The polynomial factor in these algorithms often contains a very high power of the rank and condition number, making them suitable only for sparse low-rank matrices, whereas matrices of real-world data are most often quite high in rank and hence unfavourable for such sampling-based quantum-inspired approaches. Whether such sampling algorithms can be used also depends highly on the specific application and on whether samples of the solution, instead of the complete data, are suitable. It should be pointed out that in case such complete data is needed, quantum algorithms generally do not provide an advantage anyway.
The method of encoding classical data into quantum states contributes to the complexity and performance of the algorithm. In this work, the use of the stereographic projection is proposed. Others have explored this procedure as well [27,28,29]; however, the motivation, implementation, and use vary significantly, as does the procedure for embedding data points into quantum states, and there has been no extensive testing of the proposed methods, especially not in an industry context. In our method, we exclusively use pure states from the Bloch sphere, since this reduces the complexity of the application; Theorem 1 assures that our method is applicable for nearest neighbour clustering with existing quantum techniques. In contrast, the density matrices of mixed states and the normalised trace distance between density matrices are used for binary classification in [27,28]. A very important consideration here is to distinguish the contribution of the stereographic projection from that of the quantum effects; we will see in Section 4 that the stereographic projection itself seems to be the most important contributing factor. In [30], it is also proposed to encode classical information into quantum states using the stereographic projection, in the context of quantum generative adversarial networks. Their motivation for using the inverse stereographic projection is the fact that it is one-to-one and can hence uniquely represent every point in the 2-D plane without any loss of information; angle embedding, on the other hand, loses all amplitude information due to the normalisation of all points. A method to transform an unknown manifold into an n-sphere using the stereographic projection is proposed in [31]; there, however, the property of concern was the conformality of the projection, since subsequent learning is performed upon the surface. In [32], a parallelised version of [6] is developed using the FF-QRAM procedure [33] for amplitude encoding and the stereographic projection to ensure a one-to-one embedding.
In the method of spherical clustering [34], the nearest neighbour algorithm is explored on the basis of the cosine similarity measure (Eq. (8) and Lemma 1). Cosine similarity is used in information retrieval, text mining, and data mining to find the similarity between document vectors, since it has low complexity for sparse vectors: only the non-zero coordinates need to be considered. For our case as well, it is of interest to study Definitions 1 and 2 with the cosine dissimilarity. This becomes particularly relevant once we employ the stereographic embedding to encode the data points into quantum states.
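As a minimal illustration (our own notation, not taken from [34]), nearest-centroid assignment under the cosine dissimilarity, i.e. one minus the cosine similarity, looks as follows:

    import numpy as np

    def cosine_dissimilarity(u, v):
        # 1 - cos(angle between u and v); cheap for sparse vectors, since
        # only non-zero coordinates contribute to the dot product and norms.
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    point = np.array([1.0, 2.0])
    centroids = np.array([[1.0, 1.9], [-3.0, 0.5], [0.1, -2.0]])
    best = min(range(len(centroids)),
               key=lambda j: cosine_dissimilarity(point, centroids[j]))
    print(best)  # 0: the centroid most aligned in direction with the point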
1.4. Contribution
The subject of this work is the development and testing of the quantum-analogous classical algorithm for performing k nearest-neighbour clustering using the generalised stereographic projection (Section 3.3), as well as the stereographic quantum k nearest-neighbour clustering quantum algorithm (Section 3.1, Section 3.2). The main contributions of this work are (a) the development of a novel quantum embedding using the generalised stereographic projection, along with the proof that the ideal projection radius is not 1; (b) the development of the quantum analogue classical algorithm through a new method of centroid update, which yields a significant advantage; and (c) the experimental exploration and verification of the developed algorithms. The extensive testing upon the real-world, experimental QAM dataset (Section 1.2) revealed some very important results regarding the dependence of accuracy, runtime, and convergence performance upon the radius of projection, the number of points, the noise in the optic fibre, and the stopping criteria, described in Section 4. No other work has considered a generalised projection radius for quantum embedding or studied its effect. Through our experimentation, we have verified that there exists an ideal radius greater than 1 for which accuracy performance is maximised. The advantageous implementation of the algorithm upon experimental data shows that our procedure is quite competitive. The fact that the developed quantum algorithm has a completely classical analogue (with time complexity comparable to the classical k-means algorithm) is a distinct advantage in terms of in-field deployment, especially compared to [5,6,19,27,28,29,32]. The developed quantum algorithm also has another advantage with respect to NISQ realisations: it has the least circuit depth and circuit width among all candidates [5,6,29,32], making it practical to implement with current quantum technologies. Another important contribution is the 'distance loss function' approach, where we generalise the distance for clustering; instead of the Euclidean distance, we consider other 'distances' which might be better estimated by quantum circuits (Section 3.2.3). A somewhat similar approach was developed in parallel by [35] in the context of amplitude embedding; all previous approaches [5,6,29,32] only try to estimate the Euclidean distance. We also make the contribution of studying the relative effect of 'quantumness' and the stereographic projection, something completely overlooked in previous works. We show that the quantum 'advantage' in accuracy performance touted by works such as [27,28,29,32] is in reality quite suspect and achievable through classical means. Finally, we describe a generalisation of the stereographic embedding, the ellipsoidal embedding, which we expect to give even better results.
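For intuition, the following is a minimal sketch of the generalised inverse stereographic projection onto a sphere of radius r (our own convention: projection from the north pole of a sphere centred at the origin; the precise construction used in the algorithms is fixed in Section 3):

    import numpy as np

    def inverse_stereographic(x: float, y: float, r: float) -> np.ndarray:
        # Map the plane point (x, y) onto the radius-r sphere along the line
        # joining (x, y, 0) and the north pole (0, 0, r).
        d = x * x + y * y + r * r
        return np.array([2 * r * r * x / d,
                         2 * r * r * y / d,
                         r * (x * x + y * y - r * r) / d])

    p = inverse_stereographic(3.0, -4.0, r=2.0)
    print(np.linalg.norm(p))  # 2.0: the image lies on the sphere of radius r

Varying r rescales how the data spreads over the sphere, which is why the projection radius appears as a tunable parameter in our experiments.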
Other contributions of our work include: the generalisation of the k nearest-neighbour problem to clearly indicate the contribution of dissimilarities and dataspace (see Section 2.1); the presentation of the procedure and circuit for stereographic embedding using the angle embedding procedure, which consumes only constant ($O(1)$) time and resources (Section 3.1); and the demonstration that, for hybrid implementations, the popular SWAP test method can be replaced by the Bell state measurement circuit (Section 3.2), saving not only a qubit but also a quantum gate.
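Both the SWAP test and the Bell state measurement are circuits for estimating the squared overlap between two embedded states; the quantity itself can be written down classically. A short numerical sketch (our own notation) for two single-qubit states specified by Bloch-sphere angles:

    import numpy as np

    def bloch_state(theta: float, phi: float) -> np.ndarray:
        # |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
        return np.array([np.cos(theta / 2),
                         np.exp(1j * phi) * np.sin(theta / 2)])

    psi = bloch_state(0.7, 1.1)
    chi = bloch_state(2.0, -0.4)
    fidelity = np.abs(np.vdot(psi, chi)) ** 2  # what the circuits estimate from shots
    print(fidelity)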
Figure 1. Experimental setup over an 80 km G.652 fiber link at the optimal launch power of 6.6 dBm. Chromatic dispersion (CD) and carrier frequency offset (CFO) compensation, timing recovery (TR), and carrier phase estimation (CPE).
Figure 2. The bitstring mapping and demapping alphabet.
Figure 3. The data detected by the receiver from the least noisy (2.7 dBm) channel. All 5 iterations of transmission are depicted together.
Figure 4. The data detected by the receiver from the noisiest (10.7 dBm) channel. All 5 iterations of transmission are depicted together.