Preprint
Review

Classical and Quantum Physical Reservoir Computing for Onboard Artificial Intelligence Systems: A Perspective

A peer-reviewed article of this preprint also exists.

Submitted: 17 June 2024
Posted: 17 June 2024

Abstract
Artificial intelligence (AI) systems of autonomous platforms such as drones, robots and self-driving cars may consume up to 50% of the total power available onboard, thereby limiting the vehicle's range of functions and considerably reducing the distance the vehicle can travel on a single charge. Next-generation onboard AI systems will require even more power since they collect and process ever larger amounts of data in real time. This problem cannot be solved using traditional computing devices, which are becoming increasingly power-hungry. In this review article, we discuss the prospects of developing onboard neuromorphic computers that mimic the operation of a biological brain using the nonlinear-dynamical properties of the natural physical environments surrounding autonomous vehicles. Previous research has also demonstrated that quantum neuromorphic processors (QNPs) can conduct computations with the efficiency of a standard computer while consuming less than 1% of the onboard battery power. Since QNPs are a semi-classical technology, their technical simplicity and low cost, compared with quantum computers, make them ideally suited for application in autonomous AI systems. Providing a perspective on the future progress of unconventional physical reservoir computing and surveying the outcomes of more than 200 interdisciplinary research works, this article will be of interest to a broad readership, including both students and experts in the fields of physics, engineering, quantum technologies and computing.
Subject: Physical Sciences - Applied Physics

1. Introduction

The distance between the modern world and the world depicted in science fiction has narrowed dramatically in the last two decades. However, despite the impressive progress in the development of unmanned ground vehicles (UGVs), unmanned aerial vehicles (UAVs) and underwater remotely operated vehicles (ROVs) [1,2,3], safe and efficient fully autonomous AI-controlled machines and robots remain elusive [4,5].
One of the major problems faced by the developers of autonomous platforms is the high power consumption of the hardware and computers [6,7] that enable AI systems to communicate with the environment in a manner that mimics human reasoning and perception of the world [8]. Indeed, the rapid progress in the development of AI systems is expected to further increase global energy consumption, since approximately 1.5 million AI server units are projected to be commissioned per year by 2027. Combined, these devices will consume approximately 85.4 terawatt-hours of electric power annually [6,7].
Several strategies have been suggested to resolve the problem of high energy consumption by AI systems. In particular, it has been demonstrated that AI systems themselves can optimise the energy consumption by forecasting supply and demand, helping prevent grid failures and increase reliability and security of energy networks that support the operation of large data centres [9,10]. There is also a consensus in the research and engineering communities that many AI systems of the future will operate adhering to the principles of Green AI designed to reduce the climate impact of AI [7].
However, even though a single self-driving car needs to store and process only small datasets compared with the large amounts of data handled in data centres, the hardware of its computer may consume up to 50% of the total power available onboard the vehicle, thus dramatically reducing the distance the vehicle can travel on a single charge [11,12,13]. Consequently, novel approaches to the development of AI systems for autonomous vehicles are needed that take into account such restrictions as the weight and size of onboard hardware and batteries [14,15]. This is especially the case for specialised flying and underwater drones that must be able to reliably carry sophisticated and often power-consuming equipment over long distances in complex natural environments and adversarial conditions [1].

2. What Is This Review Article about?

In this article, we approach the problem of energy-efficient AI systems from a different point of view. We suggest a novel approach to the development of energy-efficient AI systems for autonomous vehicles and discuss the recent relevant academic publications. As schematically illustrated in Figure 1a–c, we propose to create AI systems that would exploit physical phenomena that either surround human-made machines, including aeroplanes, boats and cars, or are induced by their movement. For example, such AI systems would do certain computations employing turbulence in the atmosphere that forms behind an aeroplane or UAV [16,17]. Similarly, in the case of a boat or a submersible ROV, an AI system could use water waves and other fluid-mechanical phenomena such as the formation of bubbles [18,19] and vortices [20,21] in water. We also demonstrate that the physical phenomena illustrated in Figure 1a–c can enable unconventional [22,23], neuromorphic [24,25,26,27,28,29,30,31] and approximate [32,33,34,35,36] computing systems that are expected to play an increasingly important role in the development of autonomous vehicles [37].
Unconventional computing is an approach to computer engineering that exploits the physical properties of mechanical [22], fluid-mechanical [23,38] and living [39,40] systems to perform computations. This category also includes computer architectures that are based on electron devices that are not employed in mainstream digital and analogue computers and intended to enrich the traditional Von Neumann computer architecture and the Turing machine approaches [22,23].
Neuromorphic computers can be regarded as a subclass of unconventional computers that are built using some of the operating principles of a biological brain [41,42,43,44]. While a neuromorphic computer may not be as universal as a traditional digital computer, it can solve certain practically important problems using readily affordable computational resources. Such an advantage originates from an inherent scalability, parallelisation and collocation of data processing and memory attainable in neuromorphic architectures [30]. Since a neuromorphic computer mimics the operation of a biological brain, it operates mostly when input data are available, optimising energy consumption and decreasing the cost of computations [41,42].
The advantages of neuromorphic computers make them ideally suited for application in onboard AI systems [45]. Indeed, traditionally designed onboard AI systems have certain constraints that can introduce significant limitations to the vehicle design, creating a gap between AI backed by high-performance computers and AI deployed in an autonomous vehicle. For example, while high-performance AI systems rely on energy-consuming multi-processor computations, systems designed for onboard use can perform mostly basic calculations. Yet, even though an increase in the computational power of onboard systems is achievable by increasing the number of power supply batteries, this approach is impracticable for lightweight systems such as small and long-range drones [46]. By contrast, neuromorphic computers can perform approximate and energy-efficient computations using limited resources [32,33,34,35,36], also exploiting computational errors as a mechanism for enhancing efficiency and additionally saving the energy stored in the batteries [30].
Whereas the forthcoming discussion mostly focuses on AI systems based on the paradigm of reservoir computing [43,44,47,48,49], a computational framework derived from recurrent neural networks [41,42], the ideas and models reviewed here can be applied to other kinds of neural network models and AI systems, including those based on deep neural network architectures [50]. In particular, a large section of this article is dedicated to quantum neuromorphic computing systems (schematically illustrated in Figure 1d), since their ability to make accurate forecasts using just a few artificial neurons [51,52,53,54,55,56,57,58], alongside a small footprint and low energy consumption, perfectly aligns them with our vision of the application of approximate neuromorphic computing in mobile AI systems.
The remainder of this article is organised as follows. We first review the principles of operation of reservoir computing systems, focusing on the computational algorithm known as physical reservoir computing. Then, building our discussion around the concepts illustrated in Figure 1, we provide a number of specific examples of onboard AI systems that exploit turbulence, water waves, surface roughness and quantum-mechanical effects.

3. Reservoir Computing

3.1. Traditional Reservoir Computing Approach

Neural networks employed in AI systems usually consist of thousands or millions of interconnected processing units [59,60,61]. Each node has connections of varying strengths to other nodes in the network. The strengths of these connections are updated as the network learns to analyse large datasets.
This algorithm enables the AI system to mimic the operation of a biological brain that exploits a large and complex network of neural connections. Since the brain is also a dynamical system that exhibits complex nonlinear and sometimes chaotic behaviour [62,63] (a dynamical system is a system, of either artificial, physical or biological origin, that constantly changes its state in time [64]), it has been demonstrated that a neural network can be constructed using the principles of nonlinear dynamics [31,41,42]. Since, in mathematical terms, a dynamical system is described by differential equations, different kinds of nonlinear differential equations have been adopted to describe the variation of the connection strengths between units of an artificial neural network [41,42,43].
This approach to the design of artificial neural networks is called reservoir computing (RC) [38,44,47,48,49,65]. In a traditional RC algorithm (Figure 2a), the equation that governs the nonlinear dynamics of the system of randomly connected nodes can be written as [43,66,67]:
$\mathbf{x}_n = (1 - \alpha)\,\mathbf{x}_{n-1} + \alpha \tanh\left( W^{\mathrm{in}} \mathbf{u}_n + W \mathbf{x}_{n-1} \right), \qquad (1)$
where $n$ is the index denoting entries corresponding to equally spaced discrete time instances $t_n$, $\mathbf{u}_n$ is the vector of $N_u$ input values, $\mathbf{x}_n$ is a vector of $N_x$ neural activations, the element-wise operator $\tanh(\cdot)$ is a sigmoid activation function [60], $W^{\mathrm{in}}$ is the input matrix consisting of $N_x \times N_u$ randomly generated elements, $W$ is the recurrent weight matrix containing $N_x \times N_x$ randomly generated elements and $\alpha \in (0, 1]$ is the leaking rate that controls the speed of the temporal dynamics.
Using Eq. (1), one calculates the output weights $W^{\mathrm{out}}$ by solving a system of linear equations $Y^{\mathrm{target}} = W^{\mathrm{out}} X$, where the state matrix $X$ and the target matrix $Y^{\mathrm{target}}$ are constructed using, respectively, $\mathbf{x}_n$ and the vector of target outputs $\mathbf{y}_n^{\mathrm{target}}$ as columns for each time instant $t_n$. Usually, the solution is obtained in the form $W^{\mathrm{out}} = Y^{\mathrm{target}} X^{\top} \left( X X^{\top} + \beta I \right)^{-1}$, where $I$ is the identity matrix, $\beta$ is a regularisation coefficient and $X^{\top}$ is the transpose of $X$ [67]. Then, one uses the trained set of weights $W^{\mathrm{out}}$ to solve Eq. (1) for new input data $\mathbf{u}_n$ and compute the output vector $\mathbf{y}_n = W^{\mathrm{out}} [1; \mathbf{u}_n; \mathbf{x}_n]$ using a constant bias and the concatenation $[\mathbf{u}_n; \mathbf{x}_n]$ [43,67].
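To make this training procedure concrete, the following minimal sketch implements the leaky-tanh state update of Eq. (1) and the ridge-regression readout in Python/NumPy for a toy one-step-ahead prediction task. The reservoir size, spectral radius, leaking rate and regularisation coefficient are illustrative assumptions rather than values prescribed in the cited works.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative reservoir dimensions and hyperparameters (assumed values).
N_u, N_x, alpha, beta = 1, 200, 0.3, 1e-6

# Random input and recurrent weights; W is rescaled to a spectral radius < 1.
W_in = rng.uniform(-0.5, 0.5, (N_x, N_u))
W = rng.uniform(-0.5, 0.5, (N_x, N_x))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Iterate the leaky-tanh state update of Eq. (1) over an input sequence."""
    x = np.zeros(N_x)
    states = []
    for u in u_seq:
        x = (1 - alpha) * x + alpha * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(np.concatenate(([1.0], np.atleast_1d(u), x)))  # [1; u_n; x_n]
    return np.array(states).T  # columns correspond to time instants t_n

# Toy task: one-step-ahead prediction of a sine-like time series.
t = np.linspace(0, 20 * np.pi, 2000)
u_seq, y_target = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u_seq)                       # state matrix
Y = y_target.reshape(1, -1)                    # target matrix
# Ridge (Tikhonov) regression: W_out = Y X^T (X X^T + beta I)^(-1)
W_out = Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))

y_pred = W_out @ X
print("training NRMSE:", np.sqrt(np.mean((y_pred - Y) ** 2)) / np.std(Y))
```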
RC systems have been successfully applied to many important problems, including the prediction of highly nonlinear and chaotic time series, modelling climate variations, understanding trends in financial markets and optimising energy generation [38,43,47,48,49,67,68,69,70]. Importantly, similarly to the brains of some insects, which have far fewer neurons than a human brain yet operate remarkably efficiently, RC systems require just several thousand neurons to undertake certain tasks more efficiently than a high-performance workstation computer running sophisticated software [36,38]. This property is ideally suited for the development of AI systems for mobile platforms, where the control unit must consume little power while delivering practicable machine learning, vision and sensing capabilities in real time [71].

3.2. Physical Reservoir Computing

Many physical systems and processes exhibit nonlinear dynamical behaviour and, therefore, can also be described by differential equations [64]. Hence, as illustrated in Figure 2b, it was suggested that a computationally efficient reservoir can be created using a real-life, either experimental or theoretical, nonlinear dynamical system [43]. Following this idea, subsequent research works demonstrated computational reservoirs based on spintronic devices [72,73,74], electronic circuits [75,76], photonic and opto-electronic devices [77,78] as well as mechanical [79] and liquid-based [38,80,81,82] physical systems. In the theoretical works, the demonstration of the functionality of a physical reservoir required the researchers to replace Eq. (1) with a differential equation that describes the physical system of interest (e.g., a nonlinear oscillator [36,79]). In turn, in the experimental works, the output of Eq. (1) was effectively replaced by measured traces of the temporal evolution of the physical system (e.g., curves of microwave power absorption caused by the excitation of spin waves in a ferromagnetic material [73] or optically detected profiles of solitary waves on the liquid surface [83]). Naturally, compared with the theoretical works on physical RC systems, the experimental demonstration of physical reservoirs required an extra step that included such processing procedures as signal amplification, noise filtering and data sampling [83]. However, both theoretical and experimental works unanimously demonstrated substantial advantages of physical RC systems over algorithmic ones, including low power consumption and high forecasting accuracy while using a reservoir with a relatively small number of artificial neurons [38].
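To illustrate this extra processing step, the sketch below uses entirely synthetic "sensor" traces and arbitrary filter settings to show how measured time traces of a physical reservoir could be noise-filtered, down-sampled and assembled into a state matrix that is then fed to the same linear readout as in the algorithmic case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw traces from M sensors probing a physical reservoir
# (e.g., photodetector or hydrophone voltages), sampled at fs Hz.
fs, duration, M = 1000.0, 10.0, 16
t = np.arange(0, duration, 1 / fs)
raw = np.sin(2 * np.pi * 3 * t)[None, :] * rng.uniform(0.5, 1.5, (M, 1)) \
      + 0.1 * rng.standard_normal((M, t.size))

def preprocess(trace, window=25, stride=10):
    """Simple noise filtering (moving average) followed by down-sampling."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(trace, kernel, mode="same")
    return smoothed[::stride]

# Assemble the state matrix: each column is the sampled reservoir response
# at one (down-sampled) time instant, replacing the simulated states of Eq. (1).
X = np.vstack([preprocess(tr) for tr in raw])
X = np.vstack([np.ones(X.shape[1]), X])        # constant bias row

# Linear readout trained exactly as in the algorithmic RC case.
y_target = np.cos(2 * np.pi * 3 * t)[::10]      # illustrative target
beta = 1e-4
W_out = y_target[None, :] @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))
```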

4. Reservoir Computing Using the Physical Properties of Fluids

4.1. State-of-the-Art

One of the first experimental demonstrations of a physical reservoir computer was a system that exploited ripples on the surface of water contained in a tank placed on an electric motor-driven platform [84] (for a review see, e.g., Refs. [23,38,47]). The ripples caused by judiciously controlled vibrations were recorded using a digital camera, and the processed images were then used as the input of the reservoir computing algorithm.
Although that pioneering work clearly demonstrated the potential of liquids to perform RC calculations, for about two decades it was considered to be mostly of fundamental interest to researchers working on physics-inspired AI systems [47,48,85]. A similar idea was exploited in the recent theoretical [25,82] and experimental works [83], where it was suggested that the ripples created by motors can be replaced by the dynamical properties of solitary waves—nonlinear, self-reinforcing and localised wave packets that can move for long distances without changing their shape [86].
Importantly, those works demonstrated the ability of water-based RC systems to perform complex calculations relying only on low computational power microcontrollers (e.g., Arduino boards [87]) that can, in principle, operate continuously for several months without the need to recharge the battery. Therefore, such systems are especially attractive for application in drones, robots and other autonomous platforms.

4.2. Towards Reservoir Computing with Water Waves Created by an ROV

Figure 3a shows photographs of an experimental ROV designed to test different kinds of autonomous AI systems. The hull of the ROV consists of two wing-like fins that hold two electric motors rotating the propellers: one in the counterclockwise direction and the other in the clockwise direction [90]. The fins are connected to a cylindrical frame that also serves as a compartment where sensing equipment, including lasers and photodetectors, can be placed. As with many commercial ROV designs [91], the movement of the experimental ROV is controlled via an electric cable.
When both propellers rotate at the same speed, the ROV moves forward (Figure 3b). However, the ROV turns left (right) when the speed of the left (right) motor is reduced. Thus, a moving ROV creates a wake wave whose pattern depends on the relative speed of the two motors. The temporal dynamics of the wave pattern can be sensed using two laser-photodiode pairs that measure the light reflected from the waves created by the left and right propellers (a similar detection mechanism was used in Ref. [87]). The resulting optical signals can then be converted into electric signals that are transmitted to the control unit via the cable. The waveform of the so-generated electric signals is a function of time and correlates with the commands (left, right, forward and so on) given to the ROV by the operator. Indeed, as schematically shown in Figure 3b, when the ROV moves forward the waves created by both propellers are approximately the same, but the wave patterns change when the ROV turns left or right.
Similarly to the seminal work of Ref. [84], the differences in the wave pattern of the moving ROV can be employed to perform reservoir computations. Such computations can help predict an optimal trajectory of the ROV based on the previous inputs of the human operator, taking into account environmental factors, including the wind speed, the temperature and salinity of the water as well as the presence of obstacles such as rocks and debris [92,93]. Since the ROV itself creates the waves that are used to predict the trajectory, the onboard reservoir computer does not consume significant energy apart from that needed to power the sensors, thereby satisfying the requirements for autonomous AI systems [34].
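As a rough illustration of the correlation between the wave-induced optical signals and the operator commands, the sketch below pairs short windows of two hypothetical photodiode waveforms with the commands that produced them and trains a linear readout to recognise those commands. The signal model, motor-speed ratios and window length are assumptions made purely for illustration and do not correspond to measurements taken with the ROV of Figure 3.

```python
import numpy as np

rng = np.random.default_rng(1)
commands = {"forward": (1.0, 1.0), "left": (0.6, 1.0), "right": (1.0, 0.6)}  # assumed motor speed ratios

def photodiode_window(left_speed, right_speed, n=200):
    """Hypothetical two-channel optical signal reflected from the wake waves."""
    t = np.linspace(0, 1, n)
    left = left_speed * np.sin(2 * np.pi * 8 * left_speed * t) + 0.05 * rng.standard_normal(n)
    right = right_speed * np.sin(2 * np.pi * 8 * right_speed * t) + 0.05 * rng.standard_normal(n)
    return np.concatenate([left, right])

labels = list(commands)
X = np.column_stack([photodiode_window(*commands[c]) for c in labels for _ in range(50)])
X = np.vstack([np.ones(X.shape[1]), X])                      # bias row
Y = np.kron(np.eye(len(labels)), np.ones((1, 50)))           # one-hot targets per window

beta = 1e-3
W_out = Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))

test = np.concatenate([[1.0], photodiode_window(*commands["left"])])
print("predicted command:", labels[int(np.argmax(W_out @ test))])
```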

4.3. Physical Reservoir Computing Using Fluid Flow Disturbances

The research work involving the ROV shown in Figure 3 is still in early development. However, at the time of writing, to the best of our knowledge it is the only experimental attempt to build a prototype of a water drone that employs physical processes in its environment as a means of computation (a UGV designed using similar principles is discussed in Section 5). Nevertheless, similar ideas have been explored theoretically and also tested in laboratory settings [36,83,87,88,89,94]. Although the results presented in the cited papers do not involve any autonomous vehicle, in the following we will analyse them in the context of the ROV model discussed above.
Before discussing those results, we also highlight proposals of physical RC systems designed to predict the trajectory of a flying drone [93,95]. Although the idea of using physical processes taking place in the surrounding environment of the drone was not expressed in Refs. [93,95], the approach presented there can be extended to implement the concepts illustrated in Figure 1a,b. In this context, we remind the reader that the discipline of fluid dynamics is concerned with the flow of both liquids and gases [96]. This means that AI systems designed for flying and underwater drones can, in principle, exploit the same physical processes. In fact, as mentioned above, both moving airborne and underwater vehicles create vortices as they move through the atmosphere [97,98,99] and water [21].
Vortices are ubiquitous in nature (e.g., they can be observed in whirlpools, smoke rings and the winds surrounding tropical cyclones and tornados) and they exhibit interesting nonlinear dynamics that have been the subject of fundamental and applied research [100,101,102,103]. For instance, it is well established that when a fluid flows around a cylindrical object, the physical effect known as a von Kármán vortex street can be observed over a wide range of flow velocities [100,102]. The shedding of such vortices imparts a periodic force on the object. In many situations, this force is not significant enough to accelerate the object. However, in some practical cases the object can vibrate about a fixed position, undergoing harmonic motion. When the frequency of the periodic driving force matches the natural oscillation frequency of the object, resonance comes into play and the amplitude of the oscillations can increase dramatically [101,103].
Against this background, it has been theoretically demonstrated that the physical properties of vortices can be used in reservoir computing [88,94]. In Ref. [88], the authors conducted a rigorous numerical analysis of an RC system based on a von Kármán vortex street (Figure 4a). A periodic pattern of numerical sensors located across the computational domain was used to monitor the nonlinear dynamics and collect data for further processing following the traditional RC algorithm. The flow of fluid was used as the input. For example, to create an input corresponding to a signal that varies in time, the velocity of the flow was modulated to follow the time-varying shape of the signal (which can readily be done using an electric pump [83,87]).
The dynamics of the resulting computational reservoir depends on the value of the Reynolds number $Re$. Accordingly, different operating regimes were tested using inputs corresponding to values of $Re$ below and above the threshold of formation of a von Kármán vortex street [88]. Those tests confirmed a high memory capacity of the reservoir and its ability to learn from input data and generalise them. Further tests also revealed the ability of the reservoir to make accurate predictions of time-series datasets.
Figure 4. (a) Illustration of the vortex shedding that takes place when a fluid such as air or water flows past a cylinder. As theoretically shown in Ref. [88], by modulating the flow velocity and monitoring the vortex dynamics using a set of virtual sensors, one can create an efficient physical RC system. An experimental implementation of this computational approach was discussed in Ref. [89]. (b) Photograph of vortices and other water flow effects created by the ROV in a lab setting.
We note that in an experimental attempt to test the theoretical vortex-based RC system [89], the virtual sensors shown in Figure 4 can be replaced by real ones. Alternatively, one can use a digital camera to film the vortices and then process different pixels of the individual frames extracted from the video file [82,83,87,89]. It is also noteworthy that the propellers of the ROV create vortices [21,104] that can be used for reservoir computing purposes (Figure 4b).

4.4. Acoustic-Based Reservoir Computing

Another promising approach to physical reservoir computing employs acoustic waves, vibrations and adjacent physical processes [105,106,107]. Although high-frequency (MHz-range) acoustic waves have been used in the cited papers, the ideas presented in those works can be implemented using the acoustic phenomena observed in a wide range of frequencies. For example, the temporal dynamics of vortices and other disturbances created by ROVs (Figure 4b) has a spectral signature in the frequency range that spans from several tens of Hz to several hundreds of kHz. Such acoustic signals can be detected using hydrophones, sonar technologies and other well-established acoustic location techniques [108,109,110,111] and then processed using an RC algorithm [112].
As with the sound radiated by a moving ship [113], ROVs and similar autonomous vehicles can produce a tonal (related to the blade pass frequency) acoustic disturbance and broadband noise associated with the presence of unsteadiness in the flow. The tonal disturbances can be further categorised into the contributions related to such technical parameters of the propellers as the blade thickness parameter and blade loading [113]. These acoustic processes also exhibit significant nonlinear effects [114] that can be exploited in a computational reservoir [38,47,48].
Moreover, as the propeller rotates it pushes the ROV through the water, causing a positive pressure on the face of the blade and a negative pressure on its back. The negative pressure causes any gas in solution in the water to evolve into bubbles [115,116,117,118,119]. These bubbles collapse via the process called cavitation, causing hammer-like impact loads on the blades and damaging their surface [117,118,119]. The cavitation also causes significant acoustic noise that originates both from oscillations and collapse of bubbles [117] and the formation of vortices [20,92]. This physical picture is sketched in Figure 5.
In particular, the underwater acoustic noise is associated with the highly nonlinear oscillations of the bubble volumes, which typically occur in the range of frequencies from several hundred Hz to approximately 40 kHz [116,117,120,121] (microscopic bubbles oscillate at higher frequencies [122,123,124]; however, the physics of their acoustic response remains essentially the same). Indeed, considering the idealised scenario of a sinusoidal acoustic pressure wave, as the wave moves through water its waveform changes so that its initially monochromatic spectrum acquires higher harmonic frequencies [121]. The more nonlinear the medium in which the wave propagates, the stronger the enrichment of the spectrum with peaks corresponding to these harmonics.
The degree of acoustic nonlinearity can be characterised by the acoustic parameter $\beta = B/A$, which is the ratio of the coefficients $B$ and $A$ of the quadratic and linear terms in the Taylor series expansion of the equation of state of the medium (see [121] and references therein). The larger the value of $\beta$, the more nonlinear the medium and the stronger the distortion of the acoustic spectrum from its initial monochromatic state. For instance, water with $\beta = 3.5$ is more acoustically nonlinear than air with $\beta = 0.7$ (we note that the degree of nonlinearity is considered to be moderate in both media).
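For reference, the coefficients $A$ and $B$ entering $\beta = B/A$ originate from the Taylor expansion of the pressure-density relation (the equation of state) of the medium about its equilibrium state $(\rho_0, p_0)$; a common form of this expansion, consistent with the definition used above, is
$p = p_0 + A\left(\dfrac{\rho-\rho_0}{\rho_0}\right) + \dfrac{B}{2}\left(\dfrac{\rho-\rho_0}{\rho_0}\right)^{2} + \cdots, \qquad A = \rho_0\left(\dfrac{\partial p}{\partial \rho}\right)_{s,\rho_0} = \rho_0 c_0^{2}, \qquad \beta = \dfrac{B}{A},$
where $c_0$ is the small-signal sound speed and the derivative is taken at constant entropy.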
However, when air bubbles are present in water, the value of β increases to around 5000 [121]. This is because liquids are dense and have little free space between molecules, which explains their low compressibility. In contrast, gases are easily compressible. When an acoustic pressure wave propagating in water reaches a bubble, due to the high compressibility of the gas trapped in it, its volume changes dramatically, causing substantial local acoustic wavefront deformations that result in strong variation of the initial acoustic spectrum.
The evolution of the acoustic spectrum of an oscillating bubble trapped in the bulk of water is illustrated in Figure 6 (for the computational details and model parameters see, e.g., [118,119,123,125]). The units of the x-axis of this figure correspond to the normalised frequency $f/f_a$, where $f_a$ is the frequency of the incident sinusoidal acoustic pressure wave. The y-axis corresponds to the peak pressure of the incident wave (in kPa) and the false colour encodes the amplitude (in dB) of the acoustic pressure scattered by the bubble. The bright traces with an amplitude of approximately 0 dB correspond to the frequency peaks in the spectra of the bubble forced at one particular value of the peak pressure of the incident wave. We can see that at a relatively low pressure of the incident wave the spectrum contains frequency peaks at the normalised frequencies $f/f_a = 1, 2, 3$ and so forth. However, an increase in the pressure results in the generation of the subharmonic frequency $f/f_a = 1/2$ and its ultraharmonic components $f/f_a = 3/2, 5/2$ and so on. A further increase in the peak pressure of the incident wave leads to a cascaded generation of subharmonic frequency peaks, resulting in a comb-like spectrum [119,125].
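Spectra of this kind are typically obtained by numerically integrating a Rayleigh-Plesset-type equation for the bubble radius. The sketch below uses the basic Rayleigh-Plesset form with illustrative parameter values for an air bubble in water (not the parameters of the cited works), drives the bubble with a sinusoidal pressure wave and extracts the spectrum of the radius oscillation; increasing the drive amplitude in this model produces the subharmonic and ultraharmonic peaks discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed): air bubble of 10-micron equilibrium radius in water.
rho, p0, sigma, mu, kappa = 998.0, 101.325e3, 0.0725, 1.0e-3, 1.4
R0, f_a, p_a = 10e-6, 200e3, 30e3   # equilibrium radius, drive frequency (Hz), drive amplitude (Pa)

def rayleigh_plesset(t, y):
    """Rayleigh-Plesset equation with surface tension and viscous damping."""
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)
    p_drive = p_a * np.sin(2 * np.pi * f_a * t)
    Rddot = (p_gas - 2 * sigma / R - 4 * mu * Rdot / R - p0 - p_drive) / (rho * R) \
            - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

cycles, pts_per_cycle = 200, 64
t = np.linspace(0, cycles / f_a, cycles * pts_per_cycle)
sol = solve_ivp(rayleigh_plesset, (t[0], t[-1]), [R0, 0.0], t_eval=t,
                method="LSODA", rtol=1e-8, atol=1e-12)

# Discard the transient, then inspect the spectrum of the radius oscillation.
R = sol.y[0][t.size // 2:]
spectrum = np.abs(np.fft.rfft(R - R.mean()))
freqs = np.fft.rfftfreq(R.size, d=t[1] - t[0]) / f_a   # in units of f/f_a
print("strongest spectral components (f/f_a):", np.sort(freqs[np.argsort(spectrum)[-5:]]))
```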
The nonlinear response of a cluster of oscillating bubbles trapped in water was used to create a computational reservoir in the theoretical work of Ref. [36]. Each bubble in the cluster oscillates at a certain frequency when the entire cluster is irradiated with an acoustic wave. The oscillation frequency of each bubble depends on its equilibrium radius and the strength of its interaction with the other bubbles in the cluster. Importantly, the cluster maintains its structural stability (i.e. the bubbles do not merge) when the pressure of the incident acoustic wave remains relatively low [125].
Thus, when the temporal profile of the incident pressure wave is modulated to encode the input data (e.g., a time series that needs to be learnt and then forecast by the RC system), the cluster of bubbles acts as a network of artificial neurons. Since each oscillating bubble emits sound waves, these waves can be detected using either a hydrophone or a laser beam [118,119] and then processed following the traditional RC algorithm. As demonstrated in Ref. [36], this computational procedure enables the RC system to predict the future evolution of highly nonlinear and complex time series with the efficiency of the traditional RC algorithm while using low energy consuming computational resources.
Consequently, it is plausible that bubbles created by a moving ROV (Figure 5) can be used to construct an onboard RC system. The input of such a reservoir would be the local pressure variations caused by the propellers of the ROV. It can be shown that these variations correlate with the control signals received by the ROV from the operator and/or its onboard control unit [127,128]. While the implementation of such a computational scheme requires resolving several technological problems, it gives researchers access to a rich spectrum of fascinating nonlinear effects associated with oscillating bubbles [117,121,123,124,129,130]. Interestingly, as demonstrated in the following section, a similar idea, from the point of view of nonlinear dynamics, has been proposed in the domain of autonomous ground vehicles.

5. Physical Reservoir Computing for UGVs

The preceding discussion of fluid flow disturbances and their application in reservoir computing should also be applicable to cars [131,132], bicycles [133] and other road vehicles [134], all of which create turbulence and give rise to other exploitable physical effects. However, the physical contact of road vehicles with the ground often results in unique nonlinear dynamical processes that can be employed in a physical reservoir computer, as schematically illustrated in Figure 1c.
To date, different AI-based approaches to terrain identification have been developed since this functionality is essential for autonomous vehicles and robots that operate in extreme and unstructured environments [135,136,137,138]. By identifying the terrain surface and perceiving its texture, UGVs and robots can dynamically adjust their initial trajectory, achieving safer and more efficient navigation with the help of neural network models. Similarly to flying and underwater drones, UGVs and robots must be able to perceive the surrounding environment with high accuracy while consuming little power and using an onboard computer with modest computational resources that can process data in real time.
A range of physical sensing systems have been developed to fulfil the aforementioned technical specifications. For example, computer vision techniques have been used to enable robots to recognise different terrain textures from a longer distance to modify the route and avoid obstacles [139,140]. LIDAR (Light Detection and Ranging) and hybrid optical-machine vision systems have also been proposed in Refs. [141,142]. Nevertheless, despite the encouraging results demonstrated in those works, the accuracy of terrain monitoring enabled by visual methods is adversely affected by variations in light intensity, limited visibility conditions caused, for example, by smoke, and weather-related factors such as rain, snow, fog and fallen leaves.
Yet, the accuracy of many optical sensing methods deteriorates due to vibrations. Of course, one can use different signal processing techniques to mitigate the adverse effect of vibrations [143]. However, the implementation of such approaches increases the complexity of software that controls the vehicle, which, in turn, requires a more powerful onboard computer that consumes the energy stored in the vehicle’s batteries.
On the other hand, it has been suggested that acoustic effects [137] and vibrations [144] can be used as a means of terrain classification. Indeed, it is plausible to assume that certain structural elements of a vehicle will vibrate following the roughness of the terrain, thereby providing valuable information from which the structure of the terrain can be deduced. However, the extraction of vibration profiles and the integration of those data with existing machine learning techniques have proven technically difficult and computationally demanding [138,145].
An innovative approach to the solution of this problem has been proposed in Ref. [126], where it was demonstrated that the nonlinear dynamical response of the vehicle's mechanical features can be used as a means of computation. That is, instead of trying either to mitigate the effect of vibrations or to process vibration-induced signals using an onboard AI system, it was proposed that the vibrations could serve as the input of a reservoir computer (Figure 7a).
Previous research demonstrated that a structural part of a soft robot can be used as a computational reservoir [146]. Following that idea, the authors of Ref. [126] used a whisker sensor that mimics the tactile sensing capabilities displayed by various living organisms, particularly insects and mammals [147]. The particular whisker sensor geometry used in Ref. [126] is a tapered spring (Figure 7a,b) that vibrates as the vehicle moves over rough terrain, producing nonlinear signals that are detected using three Hall sensors and permanent magnets (Figure 7b,c). The so-measured signals are processed using a low-power onboard microcontroller that implements the basic reservoir computing functionality (Figure 7c). Identifying the texture of the terrain, the microcontroller steers the vehicle in the desired direction by varying the electric power delivered to the motors. Essentially, this control scheme is similar to the ideas expressed in the preceding sections of this article: onboard sensors of a moving vehicle detect nonlinear-dynamical processes occurring in the outer environment, and the signals produced by them are employed as a means of reservoir computation aimed at identifying an optimal route as well as performing other navigation and energy consumption optimisation functions.
Importantly, it has been demonstrated that the nonlinear dynamics of the whisker sensor can be captured using just three sensors placed along the tapered spring (Figure 7b). The optimal location of the sensors was computed based on information about the vibrational modes of the spring obtained using a theoretical model and rigorous numerical simulations (Figure 7d). It was shown that the sensor located closest to the base of the tapered spring follows the vibration caused by the rough terrain, whereas the second and third sensors, located in the middle of the spring and close to its tip, respectively, mostly detect the higher-order vibrational modes. Such a sensor arrangement was found to be optimal for the construction of an efficient computational reservoir. Interestingly, this observation agrees with the previous theoretical demonstration of an RC system based on nonlinear oscillations of bubbles trapped in water [36], where it sufficed to take into account the fundamental mode and a few higher-order nonlinear acoustic oscillation modes of the bubble to perform complex forecasting tasks.
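As a rough illustration of how such a three-sensor readout could run on a modest onboard controller, the sketch below (with a hypothetical signal model, window length and terrain classes, not the implementation of Ref. [126]) buffers the three Hall-sensor channels into a short sliding window and applies a pre-trained linear readout to classify the terrain texture.

```python
import numpy as np

rng = np.random.default_rng(3)
N_SENSORS, WINDOW = 3, 64                    # three Hall sensors, assumed window length

def read_hall_sensors(terrain, t):
    """Hypothetical stand-in for sampling the three Hall sensors of the whisker."""
    base = {"gravel": 40.0, "grass": 15.0}[terrain]       # assumed dominant vibration (Hz)
    modes = np.array([1.0, 2.8, 5.1])                     # base, mid and tip respond to different modes
    return np.sin(2 * np.pi * base * modes * t) + 0.05 * rng.standard_normal(N_SENSORS)

def window_features(terrain, fs=500.0):
    t = np.arange(WINDOW) / fs
    samples = np.array([read_hall_sensors(terrain, ti) for ti in t])   # shape (WINDOW, 3)
    return np.concatenate([[1.0], samples.ravel()])                    # bias + flattened buffer

# Offline training of the linear readout (on the vehicle this matrix would be stored in memory).
terrains = ["gravel", "grass"]
X = np.column_stack([window_features(tr) for tr in terrains for _ in range(100)])
Y = np.kron(np.eye(len(terrains)), np.ones((1, 100)))
W_out = Y @ X.T @ np.linalg.inv(X @ X.T + 1e-3 * np.eye(X.shape[0]))

# Online step executed repeatedly by the microcontroller loop.
features = window_features("gravel")
print("terrain estimate:", terrains[int(np.argmax(W_out @ features))])
```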

6. Adjacent Technologies for Onboard Reservoir Computations

Thus, mechanical whisker sensors and adjacent sensor technologies open up exciting opportunities for the detection of nonlinear variations in the environment and their application in reservoir computing. In fact, different kinds of whisker sensors—optical, magnetic, resistive, capacitive and piezoelectric—have also been used in underwater ROVs [147], which means that not only UGVs but also other types of autonomous vehicles can employ the physics of whisker sensors for computational purposes.
However, every technology has technical and fundamental limitations. For example, magnetic whisker sensors can be sensitive to temperature fluctuations, which may render them inefficient in certain practical situations [147]. Moreover, from a fundamental point of view, the operation of a whisker sensor requires the application of relatively strong magnetic fields [148]. Consequently, strictly speaking, such sensors are not suitable for the detection of small magnetic field variations in the surrounding environment.
Magnetic phenomena have been widely used in the domain of reservoir computing, where a number of magnonic [73,149] and spintronic [150,151,152] RC systems have been demonstrated. Therefore, it is conceivable that a sensor located onboard an autonomous vehicle could take a series of measurements to determine the magnetic field surrounding the vehicle, which could then be compared with already known magnetic field maps to provide valuable information about the vehicle's location and serve as a means for reservoir computing.
While certain ultra-sensitive measurement techniques developed in the field of microwave magnetism [153,154] could be used to implement this idea in practice, it is mostly quantum sensors, which can measure small magnetic field variations with high accuracy [155], that have the potential to serve as an onboard quantum reservoir computer (Figure 1d). Indeed, a number of onboard quantum sensors have been developed and tested both in the lab and in real-life environments [156,157,158]. Therefore, in the following section, we discuss the emergent quantum reservoir computing architectures that can be used to conduct complex real-time calculations onboard autonomous vehicles.

7. Quantum Reservoir Computing

Developing a functional quantum computer presents numerous challenges, primarily due to the fragile nature of quantum bits (qubits). Qubits are highly susceptible to decoherence and various forms of quantum noise, which complicates the engineering of a high-fidelity processor capable of executing quantum algorithms across an exponentially large computational space [159,160]. Consequently, robust mechanisms for precise error correction are essential for maintaining computational accuracy. Yet, these mechanisms must be suitable for operation at extremely low temperatures.
To resolve this problem, alternative avenues for harnessing the power of quantum physics for computational tasks have been explored. Quantum reservoir computing (QRC) is one of these approaches [161]. QRC systems mimic the computational capabilities of the human brain by using quantum dynamics, which is an innovative approach that holds a tremendous potential to advance the research on quantum AI and neuromorphic computing [30]. One particular advantage of QRC is low training cost and fast learning capabilities, which makes it especially efficient for processing complex data [162].
Essentially, a QRC system is a quantum dynamical system whose quantum states are represented by density matrices $\rho_k \in \mathcal{T}(\mathcal{H})$, where $\mathcal{T}(\mathcal{H})$ denotes the space of Hermitian, positive semi-definite, trace-one operators. Since quantum dynamics is inherently linear, achieving nonlinearity in QRC requires one to carefully choose quantum observables that can produce a nonlinear input-output transformation. An efficient computational reservoir should also fulfil the echo state and fading memory conditions [42,43,47,48], which implies that its temporal dynamics must be dissipative and independent of the initial conditions [163,164].
In QRC, Fock states of the quantum system serve as the neural activations of the reservoir [52,162,165]. Created by a tensor product of quantum subsystems, Fock states enable a unique way to model the activity of neurons. In such a quantum system, external signals prompt the quantum states to evolve dynamically over time. The evolution of these quantum states is governed by a unitary operator that is determined by the Hamiltonian of the system, a fundamental concept in quantum physics that describes the total energy of the system and dictates how the quantum states change over time [166,167]. As the system evolves, it processes the input signals through its inherent quantum dynamics.
The output from each neuron (i.e. the Fock state) is then collected and processed using a linear function. This function aggregates the contributions from individual neurons, providing a combined output with different weights that represent the response of the system to the input signals. The weights associated with each neuron’s output are then trained using linear regression techniques, a statistical method that adjusts the weights to minimise the error between the predicted and actual outputs. By training the weights through linear regression, the QRC network can be fine-tuned to improve its ability to process and interpret input signals, leading to more accurate and efficient performance.
A variety of technological platforms have been proposed to function as QRCs, including trapped ions [168,169], nuclear magnetic resonance (NMR) in molecules [170], quantum circuits [171,172] and photonic devices [173,174]. More recently, novel QRC systems have emerged, including arrays of Rydberg atoms [175] and Josephson mixers [55]. In the theoretical domain, models involving quantum master equations have been developed to describe quantum spins with controlled losses, enabling studies of the coherence, scalability and controllability of the system [55,162].
Another major advantage of QRC is a large number of degrees of freedom available in small quantum systems. To optimise the extraction of information from these degrees of freedom, techniques like temporal multiplexing, which improves the performance of QRC models, and spatial multiplexing, which simultaneously uses multiple reservoir layers, have been proposed [176].
In the following subsections, we will survey the main categories of QRC systems, exploring their key characteristics and discussing their computational capabilities.

7.1. Spin-Network Based Reservoir

A spin-network system is a promising platform for implementing QRC algorithms. Indeed, extensive research has delved into the dynamics of spin chain networks, showcasing their potential for complex computational tasks [177,178]. QRC systems utilising a quantum network of randomly coupled spins [161,179,180,181] have also been demonstrated using a range of quantum technologies, including solid-state systems like quantum dots [182,183], superconducting qubits [184,185] and trapped ions [186,187]. Each of these platforms offers distinct advantages in terms of scalability and coherence time.
In the context of spin-network-based QRC, the reservoir consists of a network of quantum spins organised in a particular topology (Figure 8). The interactions between these spins govern the dynamics of the reservoir. These quantum spins can represent qubits or higher-dimensional quantum systems, providing a versatile computational platform for diverse information processing tasks [180,188]. Furthermore, the network topology and the Hamiltonian parameters can be fine-tuned to optimise the performance of the reservoir for specific applications. For example, by adjusting the strength and nature of the interactions between the spins, one can control the response of the reservoir to input signals and its ability to perform different computational tasks [47,180,188]. Such a tuneability is crucial for tailoring the reservoir to the requirements of various applications, ranging from pattern recognition and time-series prediction to more sophisticated quantum information processing tasks [53].
The concept of a spin-network-based QRC system was introduced in Ref. [161], where the reservoir exploited the dynamical properties of a nuclear magnetic resonance spin-ensemble architecture, using a multiplexing technique to enhance its computational capabilities [176]. The so-created QRC system utilised the intrinsic properties of a network of qubits—the fundamental two-level quantum systems that reside in a two-dimensional complex Hilbert space $\mathcal{H}_2$ [167]. The state of a qubit can be described as a linear combination of linearly independent bounded operators acting on $\mathcal{H}_2$. These operators are formed from tensor products of the basic Pauli operators $\{I, \sigma^z, \sigma^x, \sigma^y\}$ for each qubit. The state of a single qubit can be visually represented on the Bloch sphere, a three-dimensional representation of the qubit's state vector [167].
The dynamics of the reservoir layer is driven by the transverse-field Ising model [189] and forms a high-dimensional state space. The Hamiltonian of the transverse-field Ising model typically includes terms that represent the interaction between neighbouring qubits and an external transverse magnetic field, and it is given by
$H = \sum_{i<j} J_{ij}\, \sigma_i^x \sigma_j^x + h \sum_{i=1}^{N} \sigma_i^z, \qquad (2)$
where $N$ is the number of qubits, $h$ is the external magnetic field and $J_{ij}$ denotes the coupling strength between the qubits $i$ and $j$. The evolution of such a closed quantum system follows from the time-dependent Schrödinger equation and is given by
$\rho(t) = e^{-iHt/\hbar}\, \rho(0)\, e^{iHt/\hbar},$
where $\rho$ is the reservoir state, $H$ is the Hamiltonian of the system given by Eq. (2) and $\hbar$ is the reduced Planck constant.
The reservoir employs an amplitude encoding scheme to drive its quantum states. In this approach, the state of the first qubit is set by the input signal $s_k$ and is reset at each time step to
$|\psi_{s_k}\rangle = \sqrt{1 - s_k}\,|0\rangle + \sqrt{s_k}\,|1\rangle,$
represented by the density matrix $\rho_1 = |\psi_{s_k}\rangle\langle\psi_{s_k}|$. The updated reservoir state $\rho_k$ at each time step is given by
$\rho_k = \rho_1 \otimes \mathrm{tr}_1\!\left( \rho_{k-1} \right),$
where $\mathrm{tr}_1$ denotes the partial trace over the state of the first qubit.
There exist various strategies that can be used to implement this input protocol, including strong local dissipation followed by a quantum rotation gate [190] or projective measurements of the first qubit [191,192]. Subsequently, the system evolves under its natural dynamics for a time interval $\Delta t$ to process the information:
$\rho_k \rightarrow e^{-iH\Delta t/\hbar}\, \rho_k\, e^{iH\Delta t/\hbar}.$
The output of the reservoir is obtained by measuring specific observables of the spin network. These observables are related to certain properties of the system, such as the magnetisation of individual spins and the correlations between spins at different locations within the network. The data obtained from these measurements serve as the output of the reservoir. This output can then be further processed using classical computational techniques such as linear regression approaches [42,43].
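A compact numerical sketch of this update cycle is given below. It is written in plain NumPy with a small network of four qubits, randomly chosen couplings and an arbitrary time step, all of which are illustrative assumptions rather than parameters taken from the cited works: the sketch builds the transverse-field Ising Hamiltonian of Eq. (2), injects each input value into the first qubit via the reset-and-partial-trace step described above, evolves the density matrix for a time interval Δt and reads out the single-spin magnetisations ⟨σ_i^z⟩ as the reservoir output features.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
N = 4                                   # number of qubits (illustrative)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_op, site):
    """Embed a single-qubit operator acting on a given site into the N-qubit space."""
    ops = [site_op if k == site else I2 for k in range(N)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Transverse-field Ising Hamiltonian of Eq. (2) with random couplings J_ij and field h (hbar = 1).
h = 1.0
H = h * sum(op_on(sz, i) for i in range(N))
for i in range(N):
    for j in range(i + 1, N):
        H += rng.uniform(0.25, 0.75) * op_on(sx, i) @ op_on(sx, j)

dt = 1.0
U = expm(-1j * H * dt)                  # evolution operator for one time step

def inject(rho, s_k):
    """Reset the first qubit to |psi_{s_k}> and keep the rest via a partial trace."""
    psi = np.array([np.sqrt(1 - s_k), np.sqrt(s_k)], dtype=complex)
    rho1 = np.outer(psi, psi.conj())
    rest = rho.reshape(2, 2 ** (N - 1), 2, 2 ** (N - 1))
    tr1 = np.trace(rest, axis1=0, axis2=2)          # partial trace over the first qubit
    return np.kron(rho1, tr1)

rho = np.eye(2 ** N, dtype=complex) / 2 ** N        # start from the maximally mixed state
observables = [op_on(sz, i) for i in range(N)]
for s_k in rng.uniform(0, 1, 20):                   # illustrative input sequence
    rho = U @ inject(rho, s_k) @ U.conj().T
    features = [np.real(np.trace(rho @ ob)) for ob in observables]
    # 'features' would be collected into the state matrix of the linear readout.
print("final <sigma_z> values:", np.round(features, 3))
```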
It is well known that quantum systems exhibit physically rich dynamical regimes, involving effects such as localisation and thermalisation. An attempt to address the impact of these dynamical phases on a quantum computational reservoir was made in Refs. [181,193]. In those works, the Hamiltonian of Eq. (2) was extended with the aim of investigating different dynamical regimes [181]: the condition of a homogeneous external field was relaxed and a local magnetic field was applied to each spin $\sigma_i^z$. It is also established that variations of the dynamical regimes of the reservoir influence its computational robustness [42,43]. For instance, systems exhibiting localisation can provide quantum memories at finite temperature [194], also improving the trainability of parameterised quantum Ising chains [195]. Yet, the thermal phase appears to be naturally adapted to the requirements of QRC since the performance of the reservoir increases at the thermalisation transition [181].

7.2. Quantum Oscillator for Reservoir Computing

Errors caused by decoherence and noise remain a significant challenge in the field of quantum information processing [196]. One potential solution to this challenge consists in adapting noise-resilient classical computing modalities to the quantum realm.
For example, the study conducted in Ref. [52] proposed a continuous-variable QRC system based on a single nonlinear oscillator, demonstrating that such a quantum-mechanical system serves as a computational reservoir that outperforms its classical counterparts in both performance and reliability. In particular, the continuous-variable approach was implemented using a single nonlinear oscillator with a Kerr nonlinearity [52] (the state of this system is described by continuous variables corresponding to its position $X$ and momentum $P$ quadratures). The advantage of using a continuous-variable system lies in its ability to provide a richer set of computational nodes due to the infinite-dimensional Hilbert space associated with these continuous variables, which, in turn, reduces the costly repetitions required for the accurate measurement of expectation values in discrete-variable quantum machine learning approaches [197,198].
The dynamics of a classical reservoir can be described by the differential equation
$\dot{a} = -iK a^{2} a^{*} - \frac{\kappa}{2}\, a - i\alpha u(t),$
where $a$ is the complex amplitude of the oscillator, $K$ is the Kerr nonlinearity coefficient, $\kappa$ is the dissipation rate, $\alpha$ is the input coupling strength and $u(t)$ is the input signal.
In the case of a quantum reservoir, the Hamiltonian of the Kerr-nonlinear oscillator is
$\hat{H}(t) = K \hat{a}^{\dagger}\hat{a}^{\dagger}\hat{a}\hat{a} + \alpha u(t)\left( \hat{a} + \hat{a}^{\dagger} \right)$
and its evolution is described by the Lindblad master equation [199]
$\dot{\rho} = -i\left[ \hat{H}(t), \rho \right] + \kappa\, \mathcal{D}[\hat{a}]\rho,$
where $\mathcal{D}[\hat{a}]\rho = \hat{a}\rho\hat{a}^{\dagger} - \tfrac{1}{2}\{\hat{a}^{\dagger}\hat{a}, \rho\}$ is the standard single-photon-loss dissipator (here $\hbar = 1$).
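To give a feel for the classical baseline against which the quantum reservoir is compared, the short sketch below integrates the classical Kerr-oscillator equation above with a hand-written fourth-order Runge-Kutta scheme; the parameter values and the sinusoidal input are illustrative assumptions, not those of Ref. [52]. The quantum counterpart would instead require solving the Lindblad master equation, for example with a dedicated package such as QuTiP.

```python
import numpy as np

# Illustrative parameters: Kerr coefficient K, decay rate kappa, input coupling alpha.
K, kappa, alpha = 0.05, 0.2, 1.0
u = lambda t: np.sin(0.5 * t)                    # example input signal u(t)

def rhs(t, a):
    """Classical Kerr oscillator: da/dt = -iK|a|^2 a - (kappa/2) a - i alpha u(t)."""
    return -1j * K * np.abs(a) ** 2 * a - 0.5 * kappa * a - 1j * alpha * u(t)

def rk4_step(t, a, dt):
    k1 = rhs(t, a)
    k2 = rhs(t + dt / 2, a + dt * k1 / 2)
    k3 = rhs(t + dt / 2, a + dt * k2 / 2)
    k4 = rhs(t + dt, a + dt * k3)
    return a + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

dt, steps = 0.01, 5000
a, t = 0.0 + 0.0j, 0.0
quadratures = []                                  # reservoir outputs: X and P quadratures
for _ in range(steps):
    a = rk4_step(t, a, dt)
    t += dt
    quadratures.append((a.real, a.imag))

print("final complex amplitude:", a)
```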
It has been established that a single-oscillator QRC system outperforms its classical counterpart in the task of sine-wave phase estimation, primarily due to the nonlinearity of quantum measurements, which is known to provide an intrinsic advantage by effectively performing nonlinear transformations on the output data [52] (for a relevant discussion of nonlinear transformations in the context of traditional reservoir computing see Refs. [87,200,201]). The QRC system also demonstrated smaller average root mean square (RMS) errors and greater reliability, with a lower sensitivity to parameter variations compared with a classical reservoir tasked with similar benchmarking problems. The performance of the QRC system improved with increasing Hilbert space dimension, suggesting that the larger number of accessible degrees of freedom boosts its computational power [52].
The impact of the size of the Hilbert space dimension on the computational power of QRC was investigated in Ref. [202]. It was established that the use of higher-dimensional quantum oscillators helps improve the performance of the reservoir in both signal processing tasks and memory capacity benchmarking problems. Furthermore, quantum reservoirs demonstrated superior memory retention capabilities, additionally improving their performance at stronger nonlinearity and higher dimensionality [202].
Another implementation of QRC using coherently coupled quantum oscillators was suggested in Ref. [55]. The approach proposed in that work aimed to address certain limitations of the existing quantum neural networks that rely on qubits, including the problem of limited connectivity between the artificial neurons. The authors of Ref. [55] demonstrated that one can use coupled quantum oscillators to achieve a large number of densely connected neurons, thereby obtaining advantage over the traditional qubit-based approaches.
The system investigated in Ref. [55] can be experimentally realised using superconducting circuits with resonators linked via parametric elements such as Josephson mixers or SNAILs (Superconducting Nonlinear Asymmetric Inductive eLements) employed as the source of tuneable nonlinearity [203]. Utilising Fock states as the functional neurons, the theoretical analysis carried out in Ref. [55] revealed that a quantum reservoir with up to 81 neurons based on two coupled oscillators (Figure 9) can achieve a 99% accuracy rate on challenging benchmarking tasks. To put this result into perspective, a similar accuracy was achieved using at least 24 classical oscillators.
The authors of Ref. [55] also investigated the necessary coupling and dissipation conditions for optimal performance of the quantum reservoir proposed by them. They established that high dissipation rates lead to greater errors and reduced memory span for the neural network, which emphasised the need to employ high-quality-factor oscillators for computational problems that require extensive memory. Conversely, stronger coupling between the oscillators resulted in an increase in the reservoir’s performance by enabling more significant data transformations between different basis states, which is crucial for effective learning. The operation in a strong coupling regime also facilitates a larger population of basis state neurons, thereby boosting computational capabilities. Overall, it was also demonstrated that the optimal performance of a quantum reservoir relies on finding a balance between the dissipation rate and coupling strength parameters, which maximises memory retention and data transformation efficiency.
However, despite the substantial progress made in the field of quantum computing, the experimental realisation of quantum neural networks capable of handling real-world classification tasks remains elusive. In particular, such tasks require networks with millions of interconnected neurons to efficiently process complex data, and, as already mentioned above, traditional qubit-based approaches face connectivity limitations, making it challenging to achieve the necessary network density. To resolve this problem, it has been suggested that using 10 coupled quantum oscillators could enable the creation of analogue quantum neural network architectures with billions of neurons [55]. In the following subsection, we discuss alternative strategies aimed at further decreasing the number of artificial neurons required in the reservoir.
Figure 9. (a) Sketch of a QRC system using coherently coupled oscillators. The blue and yellow circles represent arbitrarily connected neurons of the reservoir. The black arrows denote fixed weights, while the red connections correspond to trained weights. (b) Illustration of a system of coupled quantum oscillators driven at the frequencies $\omega_{a,b}$ and with the amplitudes $\epsilon_{a,b}$. Adapted from Ref. [55] under the terms of a Creative Commons Attribution 4.0 International License.

7.3. Quantum Reservoir with Controlled-Measurement Dynamics

While QRC is a promising platform for quantum information studies, at the time of writing the existing QRC models encounter execution-time challenges due to the need for repeated system preparation and measurement at each time step. To address this problem, improved QRC systems have been proposed that rely on repeated quantum non-demolition measurements to generate time-series data and reduce the execution time [204]. The system described in Ref. [204] was experimentally implemented using IBM's superconducting quantum devices, and it demonstrated higher accuracy and shorter execution time compared with conventional QRC methods.
The concept of Temporal Information Processing Capacity was introduced to evaluate the computational abilities of QRC systems, highlighting the fact that an optimal measurement strength needs to be found to balance the retention and dissipation of information [204]. In the cited paper, the proposed QRC method was applied to a soft robot over 1000 time steps, showcasing its practical utility. Initial experiments on a 120-qubit device were also discussed, indicating the method's potential for scalability.
Moreover, addressing the challenge of integrating quantum measurement while retaining processing memory and handling large Hilbert spaces, the authors of Ref. [205] proposed additional measurement protocols aimed at achieving optimal performance in both memory and forecasting tasks. The three proposed measurement protocols for QRC are:
  • Restarting Protocol (RSP): Repeats the entire experiment for each measurement, maintaining unperturbed dynamics but requiring significant resources.
  • Rewinding Protocol (RWP): Restarts the dynamics from a recent past state (washout time $\tau_{wo}$), optimising resources compared with RSP.
  • Online Protocol (OLP): Uses weak measurements to continuously monitor the system, preserving memory with less back-action but introducing more noise.
The efficiency of these measurement protocols was evaluated by tasking a QRC system with solving a number of standard benchmark problems. It was established that RSP and RWP require strong measurements to minimise statistical errors, whereas OLP with weak measurements effectively preserves the state of the reservoir while providing sufficient information for time-series processing [205].
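As a back-of-the-envelope illustration of the resource argument behind these protocols, the sketch below counts the number of reservoir evolution steps needed to produce an output sequence of length T when every measured sample requires its own run. The counting rules and parameter values are assumptions made for illustration only and are not taken from Ref. [205].

```python
# Schematic resource count (an illustrative assumption, not code from Ref. [205]):
# number of reservoir evolution steps needed for an output sequence of length T.
def rsp_steps(T: int) -> int:
    # Restarting Protocol: every output restarts the dynamics from t = 0,
    # so producing the t-th output costs t evolution steps.
    return sum(t for t in range(1, T + 1))

def rwp_steps(T: int, tau_wo: int) -> int:
    # Rewinding Protocol: the dynamics are rewound only by the washout time tau_wo,
    # so each output costs at most tau_wo evolution steps.
    return sum(min(t, tau_wo) for t in range(1, T + 1))

T, tau_wo = 1000, 20
print(f"RSP: {rsp_steps(T):,} evolution steps")          # scales roughly as T**2 / 2
print(f"RWP: {rwp_steps(T, tau_wo):,} evolution steps")  # scales roughly as T * tau_wo
# In this picture the Online Protocol would need only about T steps,
# at the price of weak-measurement noise added to every output sample.
```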
Nevertheless, the subtleties of quantum mechanics and quantum measurement continue to inspire research on quantum reservoir computing. A novel quantum RC architecture that exploits the dynamics of an atom trapped in a cavity was recently proposed in Ref. [162]. One prominent feature of the proposed system is coherent driving of the atom at a tunable rate, which makes it possible either to observe transitions between quantum states or to effectively “freeze” the quantum evolution of the system. Frequent observations of the atomic eigenstates prevent the system from undergoing significant changes, a phenomenon known as the quantum Zeno effect [206]. Conversely, less frequent probing of the system enables it to undergo Rabi oscillations [207,208]. Using these properties, the rate at which the atom is driven can be optimised so that the quantum dynamics of the reservoir become suitable for diverse classification and prediction tasks. This approach also enables controlling and stabilising quantum states during a computation, which is beneficial for such practical applications as the mitigation of decoherence [209], quantum information processing [210], quantum error correction and the stabilisation of quantum states [208,211,212].
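The measurement-induced freezing mentioned above can be reproduced with a few lines of code. The following sketch is an illustration under simplifying assumptions rather than the model of Ref. [162]: continuous monitoring of a resonantly driven two-level atom is treated as a dephasing-type collapse operator of rate Gamma, so that for Gamma much smaller than the Rabi frequency the atom oscillates, whereas for Gamma much larger than the Rabi frequency the oscillations are suppressed, which is the Zeno regime.

```python
# Minimal Zeno-effect sketch (simplifying assumptions, not the model of Ref. [162]):
# continuous monitoring of a driven two-level atom modelled as dephasing of rate Gamma.
import numpy as np
from qutip import basis, mesolve, sigmax, sigmaz

Omega = 1.0                               # Rabi drive amplitude
H = 0.5 * Omega * sigmax()                # resonant drive in the rotating frame
psi0 = basis(2, 0)                        # start in a sigma_z eigenstate
times = np.linspace(0.0, 20.0, 400)

for Gamma in (0.05, 20.0):                # weak vs strong monitoring
    c_ops = [np.sqrt(Gamma) * sigmaz()]   # measurement back-action modelled as dephasing
    result = mesolve(H, psi0, times, c_ops, e_ops=[sigmaz()])
    sz_late = result.expect[0][200:]      # second half of the evolution
    print(f"Gamma = {Gamma:5.2f}: late-time oscillation amplitude = {np.std(sz_late):.3f}")
# A large value indicates persisting Rabi oscillations (weak monitoring);
# a small value indicates measurement-frozen dynamics (Zeno regime).
```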
The theoretical QRC system proposed in Ref. [162] exploits the dynamics of a probed atom in a cavity (Figure 10). By coherently driving the atom and inducing measurement-controlled quantum evolution, the authors of Ref. [162] demonstrated that the QRC system can exhibit fast and reliable forecasting capabilities while using fewer artificial neurons than traditional RC algorithms. Computational tests of the system also showcased its potential for error-tolerant applications, particularly in conditions of limited computational and energy resources.
The model developed in Ref. [162] is constructed around the interaction of a two-level atom with a quantised electromagnetic field in a cavity, enabling real-time observations and control of the quantum state of the atom. The interaction between the atom and the cavity is governed by the Hamiltonian
\hat{H}_i = g \left( a^\dagger \sigma_- + a \sigma_+ \right),
where $g$ represents the strength of the atom–cavity coupling, $a$ is the cavity annihilation operator, and $\sigma_-$ and $\sigma_+$ are the lowering and raising operators for the atom, respectively. The cavity is coherently driven by an external signal, which is modelled by the Hamiltonian
\hat{H}_c = i \beta \left( a^\dagger - a \right),
while the state of the atom undergoes continuous monitoring by means of coherent measurement processes, which are accounted for by the Hamiltonian
\hat{H}_z = g_z \left( \sigma_+ + \sigma_- \right),
where $g_z$ denotes the amplitude of the coherent atomic driving that controls the frequency at which the state of the atom is measured. The dynamics of the entire system are described by the stochastic master equation
\dot{\rho} = -i \left[ \hat{H}, \rho \right] + \hat{C} \rho \hat{C}^\dagger - \frac{1}{2} \hat{C}^\dagger \hat{C} \rho - \frac{1}{2} \rho \hat{C}^\dagger \hat{C},
where $\rho$ is the density matrix, $\hat{H} = \hat{H}_i + \hat{H}_c + \hat{H}_z$ is the total Hamiltonian and $\hat{C} = \sqrt{\kappa}\, a$ is the collapse operator associated with cavity decay at the rate $\kappa$.
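For readers who wish to experiment with this model, the sketch below assembles the Hamiltonians and the collapse operator quoted above in QuTiP and records the basis-state occupation probabilities that serve as the reservoir output. It solves the deterministic Lindblad master equation rather than the stochastic equation used in Ref. [162], and the truncation of the cavity Fock space and all parameter values are illustrative assumptions.

```python
# Minimal sketch of the atom-in-cavity reservoir (illustrative assumptions; the
# deterministic Lindblad solver replaces the stochastic master equation of Ref. [162]).
import numpy as np
from qutip import basis, destroy, mesolve, qeye, sigmam, sigmap, tensor

N = 10                                    # cavity Fock-space truncation
a = tensor(destroy(N), qeye(2))           # cavity annihilation operator
sm = tensor(qeye(N), sigmam())            # atomic lowering operator sigma_-
sp = tensor(qeye(N), sigmap())            # atomic raising operator sigma_+

g, gz = 1.0, 0.5                          # atom-cavity coupling and atomic driving amplitude
kappa, beta = 0.1, 0.3                    # cavity decay rate and input-encoding drive amplitude

H_i = g * (a.dag() * sm + a * sp)         # atom-cavity interaction
H_c = 1j * beta * (a.dag() - a)           # coherent cavity drive carrying the input
H_z = gz * (sp + sm)                      # coherent atomic driving (measurement control)
H = H_i + H_c + H_z

C = np.sqrt(kappa) * a                    # collapse operator for cavity decay
psi0 = tensor(basis(N, 0), basis(2, 1))   # cavity vacuum, atom in the state annihilated by sigma_-

# Reservoir outputs: occupation probabilities P(n, sigma) of the joint basis states.
projectors = [tensor(basis(N, n) * basis(N, n).dag(), basis(2, s) * basis(2, s).dag())
              for n in range(N) for s in range(2)]

times = np.linspace(0.0, 20.0, 200)
result = mesolve(H, psi0, times, [C], e_ops=projectors)
P = np.array(result.expect).T             # shape: (time steps, 2N reservoir neurons)
print(P[-1].round(3))                     # basis-state populations at the final time
```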
Figure 10. Sketch of the quantum reservoir computing system employing measurement-controlled dynamics. The input, output and target data are defined similarly to the notation used in Figure 2. The quantum reservoir is constructed using the Fock states $|n, \sigma\rangle$ representing the quantum states within the cavity. Input data points are encoded as signals modulating the driving amplitude $\beta(t)$. The output of the reservoir is determined by the expectation values $P(n, \sigma)$ of the occupancy of the basis states.
Incorporating these components, the model of the proposed QRC system demonstrated the ability to efficiently process input data, adapt its dynamics through coherent driving and measurement, and make accurate predictions while requiring fewer computational and energy resources than traditional RC algorithms.
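The final, purely classical step of any such scheme is the training of the linear readout on the measured reservoir features. The sketch below shows this generic step as a ridge regression of the recorded expectation values P(n, σ) against a target time series; the random stand-in data, the regularisation value and the variable names are assumptions used only to illustrate the procedure, not the training code of Ref. [162].

```python
# Generic linear-readout training step of reservoir computing (illustrative sketch;
# the feature matrix X stands in for measured expectation values P(n, sigma)).
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons = 500, 20
X = rng.random((T, n_neurons))            # stand-in reservoir features over T time steps
y = np.sin(0.1 * np.arange(T))            # stand-in target time series to forecast

ridge = 1e-6                              # Tikhonov regularisation parameter
X_b = np.hstack([X, np.ones((T, 1))])     # append a bias column
W_out = np.linalg.solve(X_b.T @ X_b + ridge * np.eye(n_neurons + 1), X_b.T @ y)

y_pred = X_b @ W_out                      # readout prediction
nrmse = np.sqrt(np.mean((y_pred - y) ** 2)) / np.std(y)
print(f"training NRMSE: {nrmse:.3f}")
```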

8. Conclusions and Outlook

In this article, we have reviewed recent advances in the fields of classical and quantum reservoir computing. Although the potential practical applications considered in this work are mostly limited to onboard AI systems of autonomous vehicles, the neuromorphic computational reservoirs discussed throughout the main text are expected to find numerous applications across science and technology.
Indeed, present-day AI is enabled by well-established computing technologies that are approaching their fundamental operational limits and already consume an amount of energy comparable with the annual electricity generation of a medium-sized country. Consequently, since the computational power required to sustain the demand for new AI systems is doubling approximately every three months, humankind will soon face the problem of balancing the progress of AI against the imperatives of sustainable development.
Of course, the problem is not limited to managing the rise of AI and transitioning to green energy. Since several billion self-driving cars, drones and autonomous robotic systems are expected to enter service by 2050, each of those vehicles will require a source of energy dedicated to powering its onboard AI. Because no current technology can satisfy such a high demand for lightweight, high-capacity energy sources, alternative low-power computational technologies will need to be developed.
In his famous talk “There’s Plenty of Room at the Bottom”, delivered to the American Physical Society in Pasadena in December 1959 [213], Richard P. Feynman suggested exploring the immense possibilities enabled by the miniaturisation of devices, thus heralding the coming era of nanotechnologies and quantum computing. A large section of the present review has been dedicated to computational approaches that directly benefit from the suggestions made by Feynman in that talk. Yet, standing on the shoulders of giants and rephrasing Feynman, in this article we invite researchers to enter a new field of AI, pointing out that there is plenty of room and rich physics around autonomous vehicles that can be employed as a means of energy-efficient computing.

Author Contributions

The idea to write this review article belongs to all authors. I.S.M. wrote Section 1, Section 2, Section 3, Section 5 and Section 6. Section 4 was written by H.A.G. and I.S.M. Section 7 was written by A.H.A. All authors discussed the manuscript and approved the submitted version of the paper.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This article has no additional data.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI artificial intelligence
LIDAR Light Detection and Ranging
NMR nuclear magnetic resonance
OLP online protocol
QNP quantum neuromorphic processor
QRC quantum reservoir computing
RC reservoir computing
ROV remotely operated vehicle
RSP restarting protocol
RWP rewinding protocol
SNAILs Superconducting Nonlinear Asymmetric Inductive eLements
UAV unmanned aerial vehicle
UGV unmanned ground vehicle

References

  1. Boyle, M.J. The Drone Age: How Drone Technology Will Change War and Peace; Oxford University Press: New York, 2020. [Google Scholar]
  2. Davies, S.; Pettersson, T.; Öberg, M. Organized violence 1989–2021 and drone warfare. J. Peace Res. 2022, 59, 593–610. [Google Scholar] [CrossRef]
  3. Kunertova, D. The war in Ukraine shows the game-changing effect of drones depends on the game. Bull. At. Sci. 2023, 79, 95–102. [Google Scholar] [CrossRef]
  4. Giannaros, A.; Karras, A.; Theodorakopoulos, L.; Karras, C.; Kranias, P.; Schizas, N.; Kalogeratos, G.; Tsolis, D. Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions. J. Cybersecur. Priv. 2023, 3, 493–543. [Google Scholar] [CrossRef]
  5. Zhang, Q.; Wallbridge, C.D.; Jones, D.M.; Morgan, P.L. Public perception of autonomous vehicle capability determines judgment of blame and trust in road traffic accidents. Transp. Res. A Policy Pract. 2024, 179, 103887. [Google Scholar] [CrossRef]
  6. de Vries, A. The growing energy footprint of artificial intelligence. Joule 2023, 7, 2191–2194. [Google Scholar] [CrossRef]
  7. Verdecchia, R.; Sallou, J.; Cruz, L. A systematic review of Green AI. WIREs Data Min. Knowl. 2023, 13, e1507. [Google Scholar] [CrossRef]
  8. Takeno, J. Creation of a Conscious Robot: Mirror Image Cognition and Self-Awareness; CRC Press: Boca Raton, 2013. [Google Scholar]
  9. Ghimire, S.; Nguyen-Huy, T.; AL-Musaylh, M.S.; Deo, R.C.; Casillas-Pérez, D.; Salcedo-Sanz, S. A novel approach based on integration of convolutional neural networks and echo state network for daily electricity demand prediction. Energy 2023, 275, 127430. [Google Scholar] [CrossRef]
  10. Rozite, V.; Miller, J.; Oh, S. Why AI and energy are the new power couple. IEA: Paris, 2023; (accessed on 24 April 2024). [Google Scholar]
  11. Faghihian, H.; Sargolzaei, A. Energy Efficiency of Connected Autonomous Vehicles: A Review. Electronics 2023, 12, 4086. [Google Scholar] [CrossRef]
  12. Grant, A. Autonomous Electric Vehicles Will Guzzle Power Instead of Gas. Hyperdrive, Bloomberg 2024. (accessed on 24 April 2024). [Google Scholar]
  13. Zewe, A. Computers that power self-driving cars could be a huge driver of global carbon emissions. MIT News Office 2023. (accessed on 9 May 2024). [Google Scholar]
  14. Othman, K. Exploring the implications of autonomous vehicles: a comprehensive review. Innov. Infrastruct. Solut. 2022, 7, 165. [Google Scholar] [CrossRef]
  15. Rauf, M.; Kumar, L.; Zulkifli, S.A.; Jamil, A. Aspects of artificial intelligence in future electric vehicle technology for sustainable environmental impact. Environ. Challenges 2024, 14, 100854. [Google Scholar] [CrossRef]
  16. Yang, Y.; Sciacchitano, A.; Veldhuis, L.L.M.; Eitelberg, G. Analysis of propeller-induced ground vortices by particle image velocimetry. J. Vis. 2018, 21, 39–55. [Google Scholar] [CrossRef] [PubMed]
  17. Sun, Q.; Shi, Z.; Sun, Z.; Chen, S.; Chen, Y. Characteristics of the shedding vortex around the Coanda surface and its impact on circulation control airfoil performance. Phys. Fluids 2023, 35, 027103. [Google Scholar] [CrossRef]
  18. Yusvika, M.; Prabowo, A.R.; Tjahjana, D.D.D.P.; Sohn, J.M. Cavitation prediction of ship propeller based on temperature and fluid properties of water. J. Mar. Sci. Eng. 2020, 8, 465. [Google Scholar] [CrossRef]
  19. Ju, H.j.; Choi, J.s. Experimental study of cavitation damage to marine propellers based on the rotational speed in the coastal Waters. Machines 2022, 10, 793. [Google Scholar] [CrossRef]
  20. Arndt, R.; Pennings, P.; Bosschers, J.; van Terwisga, T. The singing vortex. Interface Focus 2015, 5, 20150025. [Google Scholar] [CrossRef] [PubMed]
  21. Yu, J.; Zhou, B.; Liu, H.; Han, X.; Hu, G.; Zhang, T. Study of propeller vortex characteristics under loading conditions. Symmetry 2023, 15, 445. [Google Scholar] [CrossRef]
  22. Adamatzky, A. Advances in Unconventional Computing. Volume 2: Prototypes, Models and Algorithms; Springer: Berlin, 2017. [Google Scholar]
  23. Adamatzky, A. A brief history of liquid computers. Philos. Trans. R. Soc. B 2019, 374, 20180372. [Google Scholar] [CrossRef]
  24. Shastri, B.J.; Tait, A.N.; de Lima, T.F.; Pernice, W.H.P.; Bhaskaran, H.; Wright, C.D.; Prucnal, P.R. Photonics for artificial intelligence and neuromorphic computing. Nat. Photon. 2020, 15, 102–114. [Google Scholar] [CrossRef]
  25. Marcucci, G.; Pierangeli, D.; Conti, C. Theory of neuromorphic computing by waves: machine learning by rogue waves, dispersive shocks, and solitons. Phys. Rev. Lett. 2020, 125, 093901. [Google Scholar] [CrossRef] [PubMed]
  26. Marković, D.; Mizrahi, A.; Querlioz, D.; Grollier, J. Physics for neuromorphic computing. Nat. Rev. Phys. 2020, 2, 499–510. [Google Scholar] [CrossRef]
  27. Suárez, L.E.; Richards, B.A.; Lajoie, G.; Misic, B. Learning function from structure in neuromorphic networks. Nat. Mach. Intell. 2021, 3, 771–786. [Google Scholar] [CrossRef]
  28. Rao, A.; Plank, P.; Wild, A.; Maass, W. A long short-term memory for AI applications in spike-based neuromorphic hardware. Nat. Mach. Intell. 2022, 4, 467–479. [Google Scholar] [CrossRef]
  29. Sarkar, T.; Lieberth, K.; Pavlou, A.; Frank, T.; Mailaender, V.; McCulloch, I.; Blom, P.W.M.; Torricelli, F.; Gkoupidenis, P. An organic artificial spiking neuron for in situ neuromorphic sensing and biointerfacing. Nat. Electron. 2022, 5, 774–783. [Google Scholar] [CrossRef]
  30. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2022, 2, 10–19. [Google Scholar] [CrossRef] [PubMed]
  31. Krauhausen, I.; Coen, C.T.; Spolaor, S.; Gkoupidenis, P.; van de Burgt, Y. Brain-inspired organic electronics: Merging neuromorphic computing and bioelectronics using conductive polymers. Adv. Funct. Mater. 2307729. [CrossRef]
  32. Mittal, S. A Survey of Techniques for Approximate Computing. ACM Comput. Surv. 2016, 48. [Google Scholar] [CrossRef]
  33. Liu, W.; Lombardi, F.; Schulte, M. Approximate Computing: From Circuits to Applications. Proceedings of the IEEE 2020, 108, 2103–2107. [Google Scholar] [CrossRef]
  34. Henderson, A.; Yakopcic, C.; Harbour, S.; Taha, T.M. Detection and Classification of Drones Through Acoustic Features Using a Spike-Based Reservoir Computer for Low Power Applications. In Proceedings of the 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC); 2022; pp. 1–7. [Google Scholar] [CrossRef]
  35. Ullah, S.; Kumar, A. Introduction. In Approximate Arithmetic Circuit Architectures for FPGA-based Systems; Springer International Publishing: Cham, Germany, 2023; pp. 1–26. [Google Scholar]
  36. Maksymov, I.S.; Pototsky, A.; Suslov, S.A. Neural echo state network using oscillations of gas bubbles in water. Phys. Rev. E 2021, 105, 044206. [Google Scholar] [CrossRef]
  37. Nomani, T.; Mohsin, M.; Pervaiz, Z.; Shafique, M. xUAVs: Towards efficient approximate computing for UAVs-low power approximate adders with single LUT delay for FPGA-based aerial imaging optimization. IEEE Access 2020, 8, 102982–102996. [Google Scholar] [CrossRef]
  38. Maksymov, I.S. Analogue and physical reservoir computing using water waves: Applications in power engineering and beyond. Energies 2023, 16, 5366. [Google Scholar] [CrossRef]
  39. Adamatzky, A.; Tarabella, G.; Phillips, N.; Chiolerio, A.; D’Angelo, P.; Nicolaidou, A.; Sirakoulis, G.C. Kombucha electronics. arXiv 2023. [Google Scholar]
  40. Sharma, S.; Mahmud, A.; Tarabella, G.; Mougoyannis, P.; Adamatzky, A. Information-theoretic language of proteinoid gels: Boolean gates and QR codes. arXiv 2024. [Google Scholar]
  41. Maass, W.; Natschläger, T.; Markram, H. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 2002, 14, 2531–2560. [Google Scholar] [CrossRef] [PubMed]
  42. Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [PubMed]
  43. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149. [Google Scholar] [CrossRef]
  44. Nakajima, K.; Fisher, I. Reservoir Computing; Springer: Berlin, 2021. [Google Scholar]
  45. Miller, K.; Lohn, A. Onboard AI: Constraints and Limitations. Center for Security and Emerging Technology 2023. (accessed on 24 April 2024). [Google Scholar] [CrossRef]
  46. Okulski, M.; Ławryńczuk, M. A small UAV optimized for efficient long-range and VTOL missions: an experimental tandem-wing quadplane drone. Appl. Sci. 2022, 12, 7059. [Google Scholar] [CrossRef]
  47. Tanaka, G.; Yamane, T.; Héroux, J.B.; Nakane, R.; Kanazawa, N.; Takeda, S.; Numata, H.; Nakano, D.; Hirose, A. Recent advances in physical reservoir computing: A review. Neural Netw. 2019, 115, 100–123. [Google Scholar] [CrossRef]
  48. Nakajima, K. Physical reservoir computing–an introductory perspective. Jpn. J. Appl. Phys. 2020, 59, 060501. [Google Scholar] [CrossRef]
  49. Cucchi, M.; Abreu, S.; Ciccone, G.; Brunner, D.; Kleemann, H. Hands-on reservoir computing: a tutorial for practical implementation. Neuromorph. Comput. Eng. 2022, 2, 032002. [Google Scholar] [CrossRef]
  50. Maksymov, I.S. Quantum-inspired neural network model of optical illusions. Algorithms 2024, 17, 30. [Google Scholar] [CrossRef]
  51. Mujal, P.; Martínez-Peña, R.; Nokkala, J.; García-Beni, J.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Opportunities in quantum reservoir computing and extreme learning machines. Adv. Quantum Technol. 2021, 4, 2100027. [Google Scholar] [CrossRef]
  52. Govia, L.C.G.; Ribeill, G.J.; Rowlands, G.E.; Krovi, H.K.; Ohki, T.A. Quantum reservoir computing with a single nonlinear oscillator. Phys. Rev. Res. 2021, 3, 013077. [Google Scholar] [CrossRef]
  53. Suzuki, Y.; Gao, Q.; Pradel, K.C.; Yasuoka, K.; Yamamoto, N. Natural quantum reservoir computing for temporal information processing. Sci. Reps. 2022, 12, 1353. [Google Scholar] [CrossRef] [PubMed]
  54. Govia, L.C.G.; Ribeill, G.J.; Rowlands, G.E.; Ohki, T.A. Nonlinear input transformations are ubiquitous in quantum reservoir computing. Neuromorph. Comput. Eng. 2022, 2, 014008. [Google Scholar] [CrossRef]
  55. Dudas, J.; Carles, B.; Plouet, E.; Mizrahi, F.A.; Grollier, J.; Marković, D. Quantum reservoir computing implementation on coherently coupled quantum oscillators. NPJ Quantum Inf. 2023, 9, 64. [Google Scholar] [CrossRef]
  56. Götting, N.; Lohof, F.; Gies, C. Exploring quantumness in quantum reservoir computing. Phys. Rev. A 2023, 108, 052427. [Google Scholar] [CrossRef]
  57. Llodrà, G.; Charalambous, C.; Giorgi, G.L.; Zambrini, R. Benchmarking the role of particle statistics in quantum reservoir computing. Adv. Quantum Technol. 2023, 6, 2200100. [Google Scholar] [CrossRef]
  58. Čindrak, S.; Donvil, B.; Lüdge, K.; Jaurigue, L. Enhancing the performance of quantum reservoir computing and solving the time-complexity problem by artificial memory restriction. Phys. Rev. Res. 2024, 6, 013051. [Google Scholar] [CrossRef]
  59. Veelenturf, L.P.J. Analysis and Applications of Artificial Neural Networks; Prentice Hall: London, 1995. [Google Scholar]
  60. Haykin, S. Neural Networks: A Comprehensive Foundation; Pearson-Prentice Hall: Singapore, 1998. [Google Scholar]
  61. Galushkin, A.I. Neural Networks Theory; Springer: Berlin, 2007. [Google Scholar]
  62. McKenna, T.M.; McMullen, T.A.; Shlesinger, M.F. The brain as a dynamic physical system. Neuroscience 1994, 60, 587–605. [Google Scholar] [CrossRef] [PubMed]
  63. Korn, H.; Faure, P. Is there chaos in the brain? II. Experimental evidence and related models. C. R. Biol. 2003, 326, 787–840. [Google Scholar] [PubMed]
  64. Marinca, V.; Herisanu, N. Nonlinear Dynamical Systems in Engineering; Springer: Berlin, 2012. [Google Scholar]
  65. Yan, M.; Huang, C.; Bienstman, P.; Tino, P.; Lin, W.; Sun, J. Emerging opportunities and challenges for the future of reservoir computing. Nat. Commun. 2024, 15, 2056. [Google Scholar] [CrossRef] [PubMed]
  66. Jaeger, H. A tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the “echo state network” approach; GMD Report 159; German National Research Center for Information Technology, 2005. [Google Scholar]
  67. Lukoševičius, M. A Practical Guide to Applying Echo State Networks. In Neural Networks: Tricks of the Trade, Reloaded; Montavon, G., Orr, G.B., Müller, K.R., Eds.; Springer: Berlin, 2012; pp. 659–686. [Google Scholar]
  68. Bala, A.; Ismail, I.; Ibrahim, R.; Sait, S.M. Applications of metaheuristics in reservoir computing techniques: A Review. IEEE Access 2018, 6, 58012–58029. [Google Scholar] [CrossRef]
  69. Damicelli, F.; Hilgetag, C.C.; Goulas, A. Brain connectivity meets reservoir computing. PLoS Comput. Biol. 2022, 18, e1010639. [Google Scholar] [CrossRef] [PubMed]
  70. Zhang, H.; Vargas, D.V. A survey on reservoir computing and its interdisciplinary applications beyond traditional machine learning. IEEE Access 2023, 11, 81033–81070. [Google Scholar] [CrossRef]
  71. Lee, O.; Wei, T.; Stenning, K.D.; Gartside, J.C.; Prestwood, D.; Seki, S.; Aqeel, A.; Karube, K.; Kanazawa, N.; Taguchi, Y.; et al. Task-adaptive physical reservoir computing. Nat. Mater. 2023. [Google Scholar] [CrossRef]
  72. Riou, M.; Torrejon, J.; Garitaine, B.; Araujo, F.A.; Bortolotti, P.; Cros, V.; Tsunegi, S.; Yakushiji, K.; Fukushima, A.; Kubota, H.; et al. Temporal pattern recognition with delayed-feedback spin-torque nano-oscillators. Phys. Rev. Appl. 2019, 12, 024049. [Google Scholar] [CrossRef]
  73. Watt, S.; Kostylev, M. Reservoir computing using a spin-wave delay-line active-ring resonator based on yttrium-iron-garnet film. Phys. Rev. Appl. 2020, 13, 034057. [Google Scholar] [CrossRef]
  74. Allwood, D.A.; Ellis, M.O.A.; Griffin, D.; Hayward, T.J.; Manneschi, L.; Musameh, M.F.K.; O’Keefe, S.; Stepney, S.; Swindells, C.; Trefzer, M.A.; et al. A perspective on physical reservoir computing with nanomagnetic devices. Appl. Phys. Lett. 2023, 122, 040501. [Google Scholar] [CrossRef]
  75. Cao, J.; Zhang, X.; Cheng, H.; Qiu, J.; Liu, X.; Wang, M.; Liu, Q. Emerging dynamic memristors for neuromorphic reservoir computing. Nanoscale 2022, 14, 289–298. [Google Scholar] [CrossRef]
  76. Liang, X.; Tang, J.; Zhong, Y.; Gao, B.; Qian, H.; Wu, H. Physical reservoir computing with emerging electronics. Nat. Electron. 2024. [Google Scholar] [CrossRef]
  77. Sorokina, M. Multidimensional fiber echo state network analogue. J. Phys. Photonics 2020, 2, 044006. [Google Scholar] [CrossRef]
  78. Rafayelyan, M.; Dong, J.; Tan, Y.; Krzakala, F.; Gigan, S. Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction. Phys. Rev. X 2020, 10, 041037. [Google Scholar] [CrossRef]
  79. Coulombe, J.C.; York, M.C.A.; Sylvestre, J. Computing with networks of nonlinear mechanical oscillators. PLoS ONE 2017, 12, e0178663. [Google Scholar] [CrossRef]
  80. Kheirabadi, N.R.; Chiolerio, A.; Szaciłowski, K.; Adamatzky, A. Neuromorphic liquids, colloids, and gels: A review. ChemPhysChem 2023, 24, e202200390. [Google Scholar] [CrossRef] [PubMed]
  81. Gao, C.; Gaur, P.; Rubin, S.; Fainman, Y. Thin liquid film as an optical nonlinear-nonlocal medium and memory element in integrated optofluidic reservoir computer. Adv. Photon. 2022, 4, 046005. [Google Scholar] [CrossRef]
  82. Marcucci, G.; Caramazza, P.; Shrivastava, S. A new paradigm of reservoir computing exploiting hydrodynamics. Phys. Fluids 2023, 35, 071703. [Google Scholar] [CrossRef]
  83. Maksymov, I.S.; Pototsky, A. Reservoir computing based on solitary-like waves dynamics of liquid film flows: A proof of concept. EPL 2023, 142, 43001. [Google Scholar] [CrossRef]
  84. Fernando, C.; Sojakka, S. Pattern Recognition in a Bucket. In Proceedings of the Advances in Artificial Life, Berlin; Banzhaf, W., Ziegler, J., Christaller, T., Dittrich, P., Kim, J.T., Eds.; 2003; pp. 588–597. [Google Scholar]
  85. Nakajima, K.; Aoyagi, T. The memory capacity of a physical liquid state machine. IEICE Tech. Rep. 2015, 115, 109–113. [Google Scholar]
  86. Remoissenet, M. Waves Called Solitons: Concepts and Experiments; Springer, 1994. [Google Scholar]
  87. Maksymov, I.S. Physical reservoir computing enabled by solitary waves and biologically inspired nonlinear transformation of input data. Dynamics 2024, 4, 119–134. [Google Scholar] [CrossRef]
  88. Goto, K.; Nakajima, K.; Notsu, H. Twin vortex computer in fluid flow. N. J. Phys. 2021, 23, 063051. [Google Scholar] [CrossRef]
  89. Vincent, T.; Gunasekaran, S.; Mongin, M.; Medina, A.; Pankonien, A.M.; Buskohl, P. Development of an Experimental Testbed to Study Cavity Flow as a Processing Element for Flow Disturbances. In AIAA SCITECH 2024 Forum. [CrossRef]
  90. Aguirre-Castro, O.A.; Inzunza-González, E.; García-Guerrero, E.E.; Tlelo-Cuautle, E.; López-Bonilla, O.R.; Olguín-Tiznado, J.E.; Cárdenas-Valdez, J.R. Design and construction of an ROV for underwater exploration. Sensors 2019, 19, 5387. [Google Scholar] [CrossRef] [PubMed]
  91. Bohm, H. Build Your Own Underwater Robot; Westcoast Words: Vancouver, 1997. [Google Scholar]
  92. Yang, Y.; Xiong, X.; Yan, Y. UAV formation trajectory planning algorithms: A review. Drones 2023, 7, 62. [Google Scholar] [CrossRef]
  93. Perrusquía, A.; Guo, W. Reservoir computing for drone trajectory intent prediction: a physics informed approach. IEEE Trans. Cybern. 2024, 1–10. [Google Scholar] [CrossRef]
  94. Vincent, T.; Nelson, D.; Grossmann, B.; Gillman, A.; Pankonien, A.; Buskohl, P. Open-Cavity Fluid Flow as an Information Processing Medium. In AIAA SCITECH 2023 Forum. [CrossRef]
  95. Vargas, A.; Ireland, M.; Anderson, D. System identification of multirotor UAV’s using echo state networks. In Proceedings of the AUVSI’s Unmanned Systems 2015, Atlanta, GA, USA, 4–7 May 2015. [Google Scholar]
  96. Sears, W.R. Introduction to Theoretical Aerodynamics and Hydrodynamics; American Institute of Aeronautics and Astronautics: Reston, 2011. [Google Scholar]
  97. Saban, D.; Whidborne, J.F.; Cooke, A.K. Simulation of wake vortex effects for UAVs in close formation flight. Aeronaut. J. 2009, 113, 727–738. [Google Scholar] [CrossRef]
  98. Hrúz, M.; Pecho, P.; Bugaj, M.; Rostáš, J. Investigation of vortex structure behavior induced by different drag reduction devices in the near field. Transp. Res. Proc. 2022, 65, 318–328. [Google Scholar] [CrossRef]
  99. Nathanael, J.C.; Wang, C.H.J.; Low, K.H. Numerical studies on modeling the near- and far-field wake vortex of a quadrotor in forward flight. Proc. Inst. Mech. Eng. G: J. Aerosp. Eng. 2022, 236, 1166–1183. [Google Scholar] [CrossRef]
  100. Wu, J.Z.; Ma, H.Y.; Zhou, M.D. Vorticity and Vortex Dynamics; Springer: Berlin, 2006. [Google Scholar]
  101. Billah, K.Y.; Scanlan, R.H. Resonance, Tacoma Narrows bridge failure, and undergraduate physics textbooks. Am. J. Phys. 1991, 59, 118–124. [CrossRef]
  102. Williamson, C.H.K. Vortex dynamics in the cylinder wake. Annu. Rev. Fluid Mech. 1996, 28, 477–539. [Google Scholar] [CrossRef]
  103. Williamson, C.H.K.; Govardhan, R. Vortex-induced vibrations. Annu. Rev. Fluid Mech. 2004, 36, 413–445. [Google Scholar] [CrossRef]
  104. Asnaghi, A.; Svennberg, U.; Gustafsson, R.; Bensow, R.E. Propeller tip vortex mitigation by roughness application. Appl. Ocean Res. 2021, 106, 102449. [Google Scholar] [CrossRef]
  105. Meffan, C.; Ijima, T.; Banerjee, A.; Hirotani, J.; Tsuchiya, T. Non-linear processing with a surface acoustic wave reservoir computer. Microsyst. Technol 2023, 29, 1197–1206. [Google Scholar] [CrossRef]
  106. Yaremkevich, D.D.; Scherbakov, A.V.; Clerk, L.D.; Kukhtaruk, S.M.; Nadzeyka, A.; Campion, R.; Rushforth, A.W.; Savel’ev, S.; Balanov, A.G.; Bayer, M. On-chip phonon-magnon reservoir for neuromorphic computing. Nat. Commun. 2023, 14, 8296. [Google Scholar] [CrossRef] [PubMed]
  107. Phang, S. Photonic reservoir computing enabled by stimulated Brillouin scattering. Opt. Express 2023, 31, 22061–22074. [Google Scholar] [CrossRef] [PubMed]
  108. Wilson, D.K.; Liu, L. Finite-Difference, Time-Domain Simulation of Sound Propagation in a Dynamic Atmosphere; US Army Corps of Engineers, Engineer Research and Development Center, 2004. [Google Scholar]
  109. Rubin, W.L. Radar-acoustic detection of aircraft wake vortices. J. Atmos. Ocean. Technol. 1999, 17, 1058–1065. [Google Scholar] [CrossRef]
  110. Manneville, S.; Robres, J.H.; Maurel, A.; Petitjeans, P.; Fink, M. Vortex dynamics investigation using an acoustic technique. Phys. Fluids 1999, 11, 3380–3389. [Google Scholar] [CrossRef]
  111. Digulescu, A.; Murgan, I.; Candel, I.; Bunea, F.; Ciocan, G.; Bucur, D.M.; Dunca, G.; Ioana, C.; Vasile, G.; Serbanescu, A. Cavitating vortex characterization based on acoustic signal detection. IOP Conf. Series: Earth and Environmental Science 2016, 49, 082009. [Google Scholar] [CrossRef]
  112. Onasami, O.; Feng, M.; Xu, H.; Haile, M.; Qian, L. Underwater acoustic communication channel modeling using reservoir computing. IEEE Access 2022, 10, 56550–56563. [Google Scholar] [CrossRef]
  113. Lidtke, A.K.; Turnock, S.R.; Humphrey, V.F. Use of acoustic analogy for marine propeller noise characterisation. In Proceedings of the Fourth International Symposium on Marine Propulsors, Austin, Texas, USA; 2015. [Google Scholar]
  114. Made, J.E.; Kurtz, D.W. A Review of Aerodynamic Noise From Propellers, Rotors, and Lift Fans, Technical Report 32-7462; Jet Propulsion Laboratory, California Institute of Technology: Pasadena, California, USA, 1970. [Google Scholar]
  115. Plesset, M.S. The dynamics of cavitation bubbles. J. Appl. Mech. 1949, 16, 228–231. [Google Scholar] [CrossRef]
  116. Brennen, C.E. Cavitation and Bubble Dynamics; Oxford University Press: New York, 1995. [Google Scholar]
  117. Lauterborn, W.; Kurz, T. Physics of bubble oscillations. Rep. Prog. Phys. 2010, 73, 106501. [Google Scholar] [CrossRef]
  118. Maksymov, I.S.; Nguyen, B.Q.H.; Suslov, S.A. Biomechanical sensing using gas bubbles oscillations in liquids and adjacent technologies: Theory and practical applications. Biosensors 2022, 12, 624. [Google Scholar] [CrossRef] [PubMed]
  119. Maksymov, I.S.; Nguyen, B.Q.H.; Pototsky, A.; Suslov, S.A. Acoustic, phononic, Brillouin light scattering and Faraday wave-based frequency combs: physical foundations and applications. Sensors 2022, 22, 3921. [Google Scholar] [CrossRef] [PubMed]
  120. Lauterborn, W.; Mettin, R. Nonlinear Bubble Dynamics. In Sonochemistry and Sonoluminescence; Crum, L.A., Mason, T.J., Reisse, J.L., Suslick, K.S., Eds.; Springer Netherlands: Dordrecht, 1999; pp. 63–72. [Google Scholar]
  121. Maksymov, I.S.; Greentree, A.D. Coupling light and sound: giant nonlinearities from oscillating bubbles and droplets. Nanophotonics 2019, 8, 367–390. [Google Scholar] [CrossRef]
  122. Chen, C.; Zhu, Y.; Leech, P.W.; Manasseh, R. Production of monodispersed micron-sized bubbles at high rates in a microfluidic device. Appl. Phys. Lett. 2009, 95, 144101. [Google Scholar] [CrossRef]
  123. Suslov, S.A.; Ooi, A.; Manasseh, R. Nonlinear dynamic behavior of microscopic bubbles near a rigid wall. Phys. Rev. E 2012, 85, 066309. [Google Scholar] [CrossRef]
  124. Dzaharudin, F.; Suslov, S.A.; Manasseh, R.; Ooi, A. Effects of coupling, bubble size, and spatial arrangement on chaotic dynamics of microbubble cluster in ultrasonic fields. J. Acoust. Soc. Am. 2013, 134, 3425–3434. [Google Scholar] [CrossRef] [PubMed]
  125. Nguyen, B.Q.H.; Maksymov, I.S.; Suslov, S.A. Spectrally wide acoustic frequency combs generated using oscillations of polydisperse gas bubble clusters in liquids. Phys. Rev. E 2021, 104, 035104. [Google Scholar] [CrossRef]
  126. Yu, Z.; Sadati, S.M.H.; Perera, S.; Hauser, H.; Childs, P.R.N.; Nanayakkara, T. Tapered whisker reservoir computing for real-time terrain identification-based navigation. Sci. Rep. 2023, 13, 5213. [Google Scholar] [CrossRef]
  127. Patterson, A.; Schiller, N.H.; Ackerman, K.A.; Gahlawat, A.; Gregory, I.M.; Hovakimyan, N. Controller Design for Propeller Phase Synchronization with Aeroacoustic Performance Metrics. In AIAA Scitech 2020 Forum; 2020. [Google Scholar] [CrossRef]
  128. Su, P.; Chang, G.; Wu, J.; Wang, Y.; Feng, X. Design and experimental study of an embedded controller for a model-based controllable pitch propeller. Appl. Sci. 2024, 14, 3993. [Google Scholar] [CrossRef]
  129. Prosperetti, A. Nonlinear oscillations of gas bubbles in liquids: steady-state solutions. J. Acoust. Soc. Am. 1974, 56, 878–885. [Google Scholar] [CrossRef]
  130. Keller, J.B.; Miksis, M. Bubble oscillations of large amplitude. J. Acoust. Soc. Am. 1980, 68, 628–633. [Google Scholar] [CrossRef]
  131. Paul, A.R.; Jain, A.; Alam, F. Drag reduction of a passenger car using flow control techniques. Int. J. Automot. Technol. 2019, 20, 397–410. [Google Scholar] [CrossRef]
  132. Nakamura, Y.; Nakashima, T.; Yan, C.; Shimizu, K.; Hiraoka, T.; Mutsuda, H.; Kanehira, T.; Nouzawa, T. Identification of wake vortices in a simplified car model during significant aerodynamic drag increase under crosswind conditions. J. Vis. 2022, 25, 983–997. [Google Scholar] [CrossRef]
  133. Miau, J.J.; Li, S.R.; Tsai, Z.X.; Phung, M.V.; Lin, S.Y. On the aerodynamic flow around a cyclist model at the hoods position. J. Vis. 2020, 23, 35–47. [Google Scholar] [CrossRef]
  134. Garcia-Ribeiro, D.; Bravo-Mosquera, P.D.; Ayala-Zuluaga, J.A.; Martinez-Castañeda, D.F.; Valbuena-Aguilera, J.S.; Cerón-Muñoz, H.D.; Vaca-Rios, J.J. Drag reduction of a commercial bus with add-on aerodynamic devices. Proc. Inst. Mech. Eng. Pt. D J. Automobile Eng. 2023, 237, 1623–1636. [Google Scholar] [CrossRef]
  135. Trautmann, E.; Ray, L. Mobility characterization for autonomous mobile robots using machine learning. Auton. Robot. 2011, 30, 369–383. [Google Scholar] [CrossRef]
  136. Otsu, K.; Ono, M.; Fuchs, T.J.; Baldwin, I.; Kubota, T. Autonomous terrain classification with co- and self-training approach. IEEE Robot. Autom. Lett 2016, 814–819. [Google Scholar] [CrossRef]
  137. Christie, J.; Kottege, N. Acoustics based terrain classification for legged robots. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA); 2016; pp. 3596–3603. [Google Scholar] [CrossRef]
  138. Valada, A.; Burgard, W. Deep spatiotemporal models for robust proprioceptive terrain classification. Int. J. Robot. Res. 2017, 36, 1521–1539. [Google Scholar] [CrossRef]
  139. Santana, P.; Guedes, M.; Correia, L.; Barata, J. Stereo-based all-terrain obstacle detection using visual saliency. J. Field Robot. 2011, 28, 241–263. [Google Scholar] [CrossRef]
  140. Nava, M.; Guzzi, J.; Chavez-Garcia, R.O.; Gambardella, L.M.; Giusti, A. Learning long-range perception using self-supervision from short-range sensors and odometry. IEEE Robot. Autom. Lett 2019, 1279–1286. [Google Scholar] [CrossRef]
  141. Konolige, K.; Agrawal, M.; Blas, M.R.; Bolles, R.C.; Gerkey, B.; Solà, J.; Sundaresan, A. Mapping, navigation, and learning for off-road traversal. J. Field Robot. 2009, 26, 88–113. [Google Scholar] [CrossRef]
  142. Zhou, S.; Xi, J.; McDaniel, M.W.; Nishihata, T.; Salesses, P.; Iagnemma, K. Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain. J. Field Robot. 2012, 29, 277–297. [Google Scholar] [CrossRef]
  143. Engelsman, D.; Klein, I. Data-driven denoising of stationary accelerometer signals. Measurement 2023, 218, 113218. [Google Scholar] [CrossRef]
  144. Brooks, C.; Iagnemma, K. Vibration-based terrain classification for planetary exploration rovers. IEEE Trans. Robot. 2005, 21, 1185–1191. [Google Scholar] [CrossRef]
  145. Giguere, P.; Dudek, G. A simple tactile probe for surface identification by mobile robots. IEEE Trans. Robot. 2011, 27, 534–544. [Google Scholar] [CrossRef]
  146. Nakajima, K.; Hauser, H.; Kang, R.; Guglielmino, E.; Caldwell, D.; Pfeifer, R. A soft body as a reservoir: case studies in a dynamic model of octopus-inspired soft robotic arm. Front. Comput. Neurosci. 2013, 7. [Google Scholar] [CrossRef] [PubMed]
  147. Wang, S.; Liu, J.; Liu, B.; Wang, H.; Si, J.; Xu, P.; Xu, M. Potential applications of whisker sensors in marine science and engineering: A review. J. Mar. Sci. Eng. 2023, 11, 2108. [Google Scholar] [CrossRef]
  148. Trigona, C.; Sinatra, V.; Fallico, A.R.; Puglisi, S.; Andò, B.; Baglio, S. Dynamic Spatial Measurements based on a Bimorph Artificial Whisker and RTD-Fluxgate Magnetometer. In Proceedings of the 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC); 2019; pp. 1–5. [Google Scholar] [CrossRef]
  149. Furuta, T.; Fujii, K.; Nakajima, K.; Tsunegi, S.; Kubota, H.; Suzuki, Y.; Miwa, S. Macromagnetic simulation for reservoir computing utilizing spin dynamics in magnetic tunnel junctions. Phys. Rev. Appl. 2018, 10, 034063. [Google Scholar] [CrossRef]
  150. Taniguchi, T.; Ogihara, A.; Utsumi, Y.; Tsunegi, S. Spintronic reservoir computing without driving current or magnetic field. Sci. Rep. 2022, 12, 10627. [Google Scholar] [CrossRef] [PubMed]
  151. Vidamour, I.T.; Swindells, C.; Venkat, G.; Manneschi, L.; Fry, P.W.; Welbourne, A.; Rowan-Robinson, R.M.; Backes, D.; Maccherozzi, F.; Dhesi, S.S.; et al. Reconfigurable reservoir computing in a magnetic metamaterial. Commun. Phys. 2023, 6, 230. [Google Scholar] [CrossRef]
  152. Edwards, A.J.; Bhattacharya, D.; Zhou, P.; McDonald, N.R.; Misba, W.A.; Loomis, L.; García-Sánchez, F.; Hassan, N.; Hu, X.; Chowdhury, M.F.; et al. Passive frustrated nanomagnet reservoir computing. Commun. Phys. 2023, 6, 215. [Google Scholar] [CrossRef]
  153. Ivanov, E.N.; Tobar, M.E.; Woode, R.A. Microwave interferometry: application to precision measurements and noise reduction techniques. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 1998, 45, 1526–1536. [Google Scholar] [CrossRef] [PubMed]
  154. Maksymov, I.S.; Kostylev, M. Broadband stripline ferromagnetic resonance spectroscopy of ferromagnetic films, multilayers and nanostructures. Physica E 2015, 69, 253–293. [Google Scholar] [CrossRef]
  155. Degen, C.L.; Reinhard, F.; Cappellaro, P. Quantum sensing. Rev. Mod. Phys. 2017, 89, 035002. [Google Scholar] [CrossRef]
  156. Jeske, J.; Cole, J.H.; Greentree, A.D. Laser threshold magnetometry. New J. Phys. 2016, 18, 013015. [Google Scholar] [CrossRef]
  157. Templier, S.; Cheiney, P.; d’Armagnac de Castanet, Q.; Gouraud, B.; Porte, H.; Napolitano, F.; Bouyer, P.; Battelier, B.; Barrett, B. Tracking the vector acceleration with a hybrid quantum accelerometer triad. Sci. Adv. 2022, 8, eadd3854. [Google Scholar] [CrossRef]
  158. Tasker, J.F.; Frazer, J.; Ferranti, G.; Matthews, J.C.F. A Bi-CMOS electronic photonic integrated circuit quantum light detector. Sci. Adv. 2024, 10, eadk6890. [Google Scholar] [CrossRef]
  159. Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2018, 2, 79. [Google Scholar] [CrossRef]
  160. Arute, F.; Arya, K.; Babbush, R.; Martinis, M. Quantum supremacy using a programmable superconducting processor. Nature 2019, 574, 505–510. [Google Scholar] [CrossRef] [PubMed]
  161. Fujii, K.; Nakajima, K. Harnessing Disordered-Ensemble Quantum Dynamics for Machine Learning. Phys. Rev. Appl. 2017, 8, 024030. [Google Scholar] [CrossRef]
  162. Abbas, A.H.; Maksymov, I.S. Reservoir Computing Using Measurement-Controlled Quantum Dynamics. Electronics 2024, 13, 1164. [Google Scholar] [CrossRef]
  163. Park, S.; Baek, H.; Yoon, J.W.; Lee, Y.K.; Kim, J. AQUA: Analytics-driven quantum neural network (QNN) user assistance for software validation. Future Gener. Comput. Syst. 2024, 159, 545–556. [Google Scholar] [CrossRef]
  164. Sannia, A.; Martínez-Peña, R.; Soriano, M.C.; Giorgi, G.L.; Zambrini, R. Dissipation as a resource for Quantum Reservoir Computing. Quantum 2024, 8, 1291. [Google Scholar] [CrossRef]
  165. Marković, D.; Grollier, J. Quantum neuromorphic computing. Appl. Phys. Lett. 2020, 117, 150501. [Google Scholar] [CrossRef]
  166. Griffiths, D.J. Introduction to Quantum Mechanics; Prentice Hall: New Jersey, 2004. [Google Scholar]
  167. Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, 2002. [Google Scholar]
  168. Zhang, J.; Pagano, G.; Hess, P.; et al. Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. Nature 2017, 551, 601–604. [Google Scholar] [CrossRef] [PubMed]
  169. Pino, J.; Dreiling, J.; Figgatt, C.; et al. Demonstration of the trapped-ion quantum CCD computer architecture. Nature 2021, 592, 209–213. [Google Scholar] [CrossRef]
  170. Negoro, M.; Mitarai, K.; Fujii, K.; Nakajima, K.; Kitagawa, M. Machine learning with controllable quantum dynamics of a nuclear spin ensemble in a solid. arXiv 2018. [Google Scholar]
  171. Chen, J.; Nurdin, H.I.; Yamamoto, N. Temporal information processing on noisy quantum computers. Physical Review Applied 2020, 14, 024065. [Google Scholar] [CrossRef]
  172. Dasgupta, S.; Hamilton, K.E.; Banerjee, A. Characterizing the memory capacity of transmon qubit reservoirs. In Proceedings of the 2022 IEEE International Conference on Quantum Computing and Engineering (QCE); 2022; pp. 162–166. [Google Scholar]
  173. Cai, Y.; et al. Multimode entanglement in reconfigurable graph states using optical frequency combs. Nature Communications 2017, 8, 15645. [Google Scholar] [CrossRef] [PubMed]
  174. Nokkala, J.; et al. Reconfigurable optical implementation of quantum complex networks. New Journal of Physics 2018, 20, 053024. [Google Scholar] [CrossRef]
  175. Bravo, R.A.; Najafi, K.; Gao, X.; Yelin, S.F. Quantum Reservoir Computing Using Arrays of Rydberg Atoms. PRX Quantum 2022, 3, 030325. [Google Scholar] [CrossRef]
  176. Nakajima, K.; Fujii, K.; Negoro, M.; Mitarai, K.; Kitagawa, M. Boosting Computational Power through Spatial Multiplexing in Quantum Reservoir Computing. Phys. Rev. Appl. 2019, 11, 034021. [Google Scholar] [CrossRef]
  177. Marzuoli, A.; Rasetti, M. Computing spin networks. Ann. Phys. 2005, 318, 345–407. [Google Scholar] [CrossRef]
  178. Tserkovnyak, Y.; Loss, D. Universal quantum computation with ordered spin-chain networks. Phys. Rev. A 2011, 84, 032333. [Google Scholar] [CrossRef]
  179. Chen, J.M.; Nurdin, H.I. Learning nonlinear input–output maps with dissipative quantum systems. Quantum Inf Process 2019, 18, 1440–1451. [Google Scholar]
  180. Mujal, P.; Nokkala, J.; Martínez-Peña, R.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Analytical evidence of nonlinearity in qubits and continuous-variable quantum reservoir computing. J. Phys. Complexity 2021, 2, 045008. [Google Scholar] [CrossRef]
  181. Martínez-Peña, R.; Nokkala, J.; Giorgi, G.L. Information processing capacity of spin-based quantum reservoir computing systems. Cogn. Comput. 2023, 15, 1440–1451. [Google Scholar] [CrossRef]
  182. Hanson, R.; Awschalom, D.D. Coherent manipulation of single spins in semiconductors. Nature 2008, 453, 1043–1049. [Google Scholar] [CrossRef]
  183. Loss, D.; DiVincenzo, D.P. Quantum computation with quantum dots. Physical Review A 1998, 57, 120. [Google Scholar] [CrossRef]
  184. Wendin, G. Quantum information processing with superconducting circuits: a review. Reports on Progress in Physics 2017, 80, 106001. [Google Scholar] [CrossRef] [PubMed]
  185. Clarke, J.; Wilhelm, F.K. Superconducting quantum bits. Nature 2008, 453, 1031–1042. [Google Scholar] [CrossRef]
  186. Blatt, R.; Wineland, D. Entangled states of trapped atomic ions. Nature 2008, 453, 1008–1015. [Google Scholar] [CrossRef] [PubMed]
  187. Monroe, C.; Kim, J. Scaling the ion trap quantum processor. Science 2013, 339, 1164–1169. [Google Scholar] [CrossRef] [PubMed]
  188. Fujii, K.; Nakajima, K. Harnessing disordered-ensemble quantum dynamics for machine learning. Phys. Rev. Appl. 2017, 8, 024030. [Google Scholar] [CrossRef]
  189. Singh, S.P. The Ising Model: Brief Introduction and Its Application. In Metastable, Spintronics Materials and Mechanics of Deformable Bodies; Sivasankaran, S., Nayak, P.K., Günay, E., Eds.; IntechOpen: Rijeka, Croatia, 2020. [Google Scholar] [CrossRef]
  190. Sannia, A.; Martínez-Peña, R.; Soriano, M.C.; Giorgi, G.L.; Zambrini, R. Dissipation as a resource for Quantum Reservoir Computing. arXiv 2022. [Google Scholar]
  191. Lüders, G. Über die Zustandsänderung durch den Meßprozeß. Annalen der Physik 1950, 443, 322–328. [Google Scholar] [CrossRef]
  192. Von Neumann, J. Mathematische Grundlagen der Quantenmechanik; Springer-Verlag, 2013. [Google Scholar]
  193. Xia, W.; Zou, J.; Qiu, X. The reservoir learning power across quantum many-body localization transition. Front. Phys. 2022, 17, 33506. [Google Scholar] [CrossRef]
  194. Ponte, P.; Chandran, A.; Papić, Z.; Abanin, D.A. Periodically driven ergodic and many-body localized quantum systems. Annals of Physics. 2015, 353, 196–204. [Google Scholar] [CrossRef]
  195. Altshuler, B.; Krovi, H.; Roland, J. Anderson localization makes adiabatic quantum optimization fail. Proc. Natl. Acad. Sci. USA 2010, 107, 12446–12450. [Google Scholar] [CrossRef]
  196. Horodecki, P.; Rudnicki, L.; Życzkowski, K. Five open problems in quantum information theory. PRX Quantum 2022, 3, 010101. [Google Scholar] [CrossRef]
  197. Qiu, P.H.; Chen, X.G.; Shi, Y.W. Detecting entanglement with deep quantum neural networks. IEEE Access 2019, 7, 94310–94320. [Google Scholar] [CrossRef]
  198. Peral-García, D.; Cruz-Benito, J.; García-Peñalvo, F.J. Systematic literature review: Quantum machine learning and its applications. Comput. Sci. Rev. 2024, 51, 100619. [Google Scholar] [CrossRef]
  199. Campaioli, F.; Cole, J.H.; Hapuarachchi, H. A Tutorial on Quantum Master Equations: Tips and tricks for quantum optics, quantum computing and beyond. arXiv 2023. [Google Scholar]
  200. Bollt, E. On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD. Chaos 2021, 31, 013108. [Google Scholar] [CrossRef] [PubMed]
  201. Gauthier, D.J.; Bollt, E.; Griffith, A.; Barbosa, W.A.S. Next generation reservoir computing. Nat. Commun. 2021, 12, 5564. [Google Scholar] [CrossRef] [PubMed]
  202. Kalfus, W.D.; Ribeill, G.J.; Rowlands, G.E.; Krovi, H.K.; Ohki, T.A.; Govia, L.C.G. Hilbert space as a computational resource in reservoir computing. Phys. Rev. Res. 2022, 4, 033007. [Google Scholar] [CrossRef]
  203. Frattini, N.E.; Sivak, V.V.; Lingenfelter, A.; Shankar, S.; Devoret, M.H. Optimizing the nonlinearity and dissipation of a SNAIL parametric amplifier for dynamic range. Phys. Rev. Appl. 2018, 10, 054020. [Google Scholar] [CrossRef]
  204. Yasuda, T.; Suzuki, Y.; Kubota, T.; Nakajima, K.; Gao, Q.; Zhang, W.; Shimono, S.; Nurdin, H.I.; Yamamoto, N. Quantum reservoir computing with repeated measurements on superconducting devices. arXiv 2023. [Google Scholar]
  205. Mujal, P.; Martínez-Peña, R.; Giorgi, G.L.; Soriano, M.C.; Zambrini, R. Time-series quantum reservoir computing with weak and projective measurements. npj Quantum Inf. 2023, 9, 021008. [Google Scholar] [CrossRef]
  206. Harrington, P.M.; Monroe, J.T.; Murch, K.W. Quantum Zeno Effects from Measurement Controlled Qubit-Bath Interactions. Phys. Rev. Lett. 2017, 118, 240401. [Google Scholar] [CrossRef]
  207. Raimond, J.M.; Facchi, P.; Peaudecerf, B.; Pascazio, S.; Sayrin, C.; Dotsenko, I.; Gleyzes, S.; Brune, M.; Haroche, S. Quantum Zeno dynamics of a field in a cavity. Phys. Rev. A 2012, 86, 032120. [Google Scholar] [CrossRef]
  208. Lewalle, P.; Martin, L.S.; Flurin, E.; Zhang, S.; Blumenthal, E.; Hacohen-Gourgy, S.; Burgarth, D.; Whaley, K.B. A Multi-Qubit Quantum Gate Using the Zeno Effect. Quantum 2023, 7, 1100. [Google Scholar] [CrossRef]
  209. Kondo, Y.; Matsuzaki, Y.; Matsushima, K.; Filgueiras, J.G. Using the quantum Zeno effect for suppression of decoherence. New J. Phys. 2016, 18, 013033. [Google Scholar] [CrossRef]
  210. Monras, A.; Romero-Isart, O. Quantum Information Processing with Quantum Zeno Many-Body Dynamics. arXiv 2009. [Google Scholar] [CrossRef]
  211. Paz-Silva, G.A.; Rezakhani, A.T.; Dominy, J.M.; Lidar, D.A. Zeno effect for quantum computation and control. Phys. Rev. Lett. 2012, 108, 080501. [Google Scholar] [CrossRef]
  212. Burgarth, D.K.; Facchi, P.; Giovannetti, V.; Nakazato, H.; Yuasa, S.P.K. Exponential rise of dynamical complexity in quantum computing through projections. Nat. Commun. 2014, 5, 5173. [Google Scholar] [CrossRef]
  213. Feynman, R.P. There’s Plenty of Room at the Bottom. The annual meeting of the American Physical Society at the California Institute of Technology 1959. (accessed on 13 June 2024). [Google Scholar]
Figure 1. (a)–(c) Schematic illustration of the physical processes—turbulence, water waves and vibrations caused by surface roughness—that onboard AI systems can employ as a means of energy-efficient computation. The so-envisioned AI systems can be used in UGVs, UAVs and ROVs as well as human-operated aeroplanes, boats and cars. (d) Identified as a separate category, quantum-mechanical physical systems can be used as onboard AI systems. Superior computational performance and low power consumption of quantum systems compared with traditional computers render them especially useful for applications in lightweight and long-range drones.
Figure 2. Schematic representation of (a) a traditional algorithmic RC system and (b) a computational system with a physical reservoir constructed using the physical effects that take place in the environment that surrounds moving vehicles. The mathematical meaning of the vectors and matrices mentioned in this figure is explained in the main text.
Figure 3. (a) Photographs of the prototype of ROV designed to test AI systems that employ water waves and other disturbances caused by the motion of the drone as a means of computation. (b) Schematic illustration of the difference between the wave patterns produced by the ROV that has received the left, forward and right commands from the operator.
Figure 5. Sketch of an ROV, bubbles created by its propellers and highly nonlinear acoustic waves emitted by them. As discussed in the main text, the nonlinear dynamics of these physical processes opens up novel opportunities for physical reservoir computing.
Figure 6. False-colour map composed of a large number of theoretical frequency spectra of a single gas bubble trapped in the bulk of water and excited with a sinusoidal acoustic pressure wave. The x-axis corresponds to the normalised frequency $f/f_a$, where $f_a$ is the frequency of the incident acoustic wave. The y-axis denotes the peak pressure of the incident acoustic wave. Using this figure, we can see the evolution of the frequency spectrum of the bubble as a function of the peak pressure of the incident wave. For example, at 260 kPa we can identify the frequency peaks at $f/f_a = 1, 2, 3$ and so on. At 320 kPa, we observe the appearance of the subharmonic peak at $f/f_a = 1/2$ and its ultraharmonic components at $f/f_a = 3/2, 5/2$ and so on.
Figure 7. (a) Photograph of the UGV controlled by a physical RC system that employs the road roughness and vibrations of a whiskered sensor as a means of computation. (b) Whiskered sensor and the detectors used to monitor the nonlinear vibrations of the spring caused by the rough terrain. (c) Schematic diagram of the onboard reservoir computer that exploits the nonlinear dynamics of the vibrating spring and processes data using a microcontroller. By identifying the texture of the terrain, the microcontroller computes an optimal trajectory for the vehicle. (d) Axial displacement and natural angular frequency of the vibrational modes of the tapered spring obtained using a theoretical model (top row) and rigorous numerical simulations by means of a finite-element method. Adapted from Ref. [126] under the terms of a Creative Commons Attribution 4.0 International License.
Figure 8. Sketch of a spin-network-based QRC system. The system operates as follows. First, the classical input data are injected into the spin network, altering its quantum state. Then, the quantum dynamics of the network evolves under the influence of the input data. Finally, the network produces an output signal derived from certain observables of the system. Redrawn after Refs. [180,188].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.