Preprint
Review

Theory of Morphodynamic Information Processing: Linking Sensing to Behaviour

Submitted: 02 September 2024
Posted: 04 September 2024
Abstract
The traditional understanding of brain function has predominantly focused on chemical and electrical processes. However, new research on fruit fly (Drosophila) binocular vision reveals that ultrafast photomechanical photoreceptor movements significantly enhance information processing, thereby impacting a fly’s perception of its environment and behaviour. The coding advantages resulting from these mechanical processes suggest that similar physical motion-based coding strategies may affect neural communication ubiquitously. The theory of neural morphodynamics proposes that rapid biomechanical movements and microstructural changes at the level of neurons and synapses enhance the speed and efficiency of sensory information processing, intrinsic thoughts, and actions by regulating neural information in a phasic manner. We propose that morphodynamic information processing evolved to drive predictive coding, synchronising cognitive processes across neural networks to effectively match the behavioural demands at hand.
Subject: Biology and Life Sciences - Neuroscience and Neurology

1. Introduction

Behaviour arises from intrinsic changes in brain activity and responses to external stimuli, guided by animals’ heritable characteristics and cognition, which shape and adjust nervous systems to maximise survival. In this dynamic world governed by the laws of thermodynamics, brains are never static. Instead, their inner workings actively utilise and store electrochemical, kinetic, and thermal energy, constantly moving and adapting in response to intrinsic activity and environmental shifts orchestrated by genetic information encoded in DNA. However, our attempts to comprehend the resulting neural information sampling, processing, and codes often rely on stationarity assumptions and reductionist analyses of behaviour or brain activity. Unfortunately, these preconceptions can prevent us from appreciating the role of rapid biomechanical movements and microstructural changes of neurons and synapses, which we call neural morphodynamics, in sensing and behaviour.
While electron-microscopic brain atlases provide detailed wiring maps at the level of individual synapses [1,2,3,4,5], they fail to capture the continuous motion of cells. Fully developed neurons actively move, with their constituent molecules, molecular structures, dendritic spines and cell bodies engaging in twitching motions that facilitate signal processing and plasticity [6,7,8,9,10,11,12,13,14,15,16,17,18,19] (Figure 1A–E). Additionally, high-speed in vivo X-ray holography [15], electrophysiology [20] and calcium-imaging [11] of neural activity suggest that ultrafast bursty or microsaccadic motion influences the release of neurotransmitter quanta, adding an extra layer of complexity to neuronal processing.
Recent findings on sensory organs and graded potential synapses provide compelling evidence for the crucial role of rapid morphodynamic changes in neural information sampling and synaptic communication [7,9,13,14,15,20,24,44]. In Drosophila, these phasic changes enhance performance and efficiency by synchronising responses to moving stimuli, effectively operating as a form of predictive coding[13,15]. These changes empower the small fly brain to achieve remarkable capabilities[13,14,15], such as hyperacute stereovision[15] and time-normalised and aliasing-free responsiveness to naturalistic light contrasts of changing velocity, starting from photoreceptors and the first visual synapse[13,45]. Importantly, given the compound eyes’ small size, these encoding tasks would only be physically possible with active movements[13,15,45]. Ultrafast photoreceptor microsaccades enable flies to perceive 2- and 3-dimensional patterns 4-10 times finer than the static pixelation limit imposed by photoreceptor spacing[46,47]. Thus, neural morphodynamics can be considered a natural extension of animals’ efficient saccadic encoding strategy to maximise sensory information while linking perception with behaviour (Figure 1F–M)[13,15,28,29,30,31,33].
Overarching questions remain: Are morphodynamic information sampling and processing prevalent across all brain networks, coevolving with morphodynamic sensing to amplify environmental perception, action planning, and behavioural execution? How does genetic information, accumulated over hundreds of millions of years and stored in DNA, shape and drive brain morphodynamics to maximise the sampling and utilisation of sensory information within the biological neural networks of animals throughout their relatively short lifespans, ultimately improving fitness? Despite the diverse functions and morphologies observed in animals, the fact that their neurons operate similar molecular motors and reaction cascades within compartmentalised substructures suggests that the morphodynamic code may be universal.
This review delves into this phenomenon, specifically focusing on recent discoveries in insect vision and visually guided behaviour. Insects have adapted to colonise all habitats except the deep sea, producing complex building behaviour and societies, exemplified by ants, bees, and termites. Furthermore, insects possess remarkable cognitive abilities that often rely on hyperacute perception. For instance, paper wasps can recognise individual faces among their peers[48,49], while Drosophila can distinguish minute parasitic wasp females from harmless males[50]. Hyperacute patterns of flat images depicting copulating Drosophila can also trigger mate-copying behaviour in virgin, naïve observer females, even in the absence of olfactory or other sensory cues[51]. These findings alone challenge the prevailing theoretical concepts[52,53], as the visual patterns tested may occupy only a pixel or two when viewed from the experimental positions through static compound eyes. Instead, we elucidate how such heightened performance naturally emerges from ultrafast morphodynamics in sensory processing and behaviours[13,15], emphasising their crucial role in enhancing perception and generating reliable neural representations of the variable world. Additionally, we propose underlying representational rules and general mechanisms governing morphodynamic sampling and information processing that augment intelligence and cognition. We hope these ideas will pave the way for new insights and avenues in neuroscience research and our understanding of behaviour.

2. Sensing Requires Motion, and Moving Sensors Improve Sensing

The structure and function of sense organs have long been recognised as factors that limit the quantity and variety of information they can gather [46,52,54]. However, a more recent insight reveals that the process of sensing itself is an active mechanism, utilising bursty or saccadic movements to enhance information sampling [13,14,15,28,32,33,35,55,56,57,58,59,60] (Figure 1). These movements encompass molecular, sensory cell, whole organ, head, and body motions, collectively and independently enhancing perception and behaviour. Text Box 1 introduces and summarises the general structure-function relationships of these motion-driven local and global image sampling mechanisms in active vision, using Drosophila as an example.
Text Box 1. Motion-Driven Image Sampling in Morphodynamic Active Vision. This text box graphically illustrates two fundamental sampling mechanisms that enhance vision through motion, using Drosophila as a model organism: (A) local sampling at the level of individual photoreceptors (“single-pixel”) and (B) global sampling across the entire retinal matrix (whole-matrix). These interactive processes, which jointly affect the eyes’ spatiotemporal resolution, stereoscopic range and adaptive capabilities, likely co-evolved to optimise visual perception and behaviour in dynamic natural environments.
(A) Local Microsaccadic Image Sampling. The photomechanical movement of R1-8 photoreceptors within an ommatidium enhances vision through morphodynamic sampling mechanisms:
(i)
Photomechanical Receptive Field Scanning Motion: Each photoreceptor’s microvillar phototransduction reactions generate rapid contractions in response to light intensity changes, causing the photoreceptor’s waveguide, the rhabdomere (containing 30,000 microvilli), to twitch[8,13,61] (grey double-headed arrows). These twitches create microsaccades that dynamically shift and narrow the photoreceptor’s receptive field[15] (red Gaussian). Unlike uniform, reflex-like contractions, microsaccades are actively regulated and continuously adjust photon sampling dynamics. This auto-regulation optimises photoreceptors’ receptive fields in response to environmental light changes to maximise information capture[13,14,15]. These dynamics rapidly adapt to ongoing light exposure, varying with both light intensity (dim vs bright conditions) and contrast type (positive for light increments, negative for light decrements)[13,14,15]. From a sampling theory perspective, photoreceptor microsaccades constitute a form of ultrafast, morphodynamic active sampling.
(ii)
Local Directionality: During a photomechanical microsaccade, photoreceptors contract axially (up-down arrow), moving away from the lens to narrow the receptive field while swinging sideways (left-right arrow) in a direction specific to their eye location, moving the receptive field[14,15]. These lateral movements are predetermined during development as the ommatidial R1-8 photoreceptor alignment gradually rotates across the eyes[14,15], forming a diamond shape (with green lines indicating the rotation axis).
(iii)
Insularity, Symmetry and Adaptability: Local pinhole light stimulation (yellow dot) triggers microsaccades only in the photoreceptors of the illuminated ommatidia (yellow and red bars), while the photoreceptors in the neighbouring ommatidia (dark blue bars) remain still[15]. The left and right eyes show mirror-symmetric microsaccade directions[14,15], but local microsaccades themselves are not uniform[13,14,15]. Their speed and magnitude adapt to ambient light changes, becoming faster and shorter in brighter environments, indicating light-intensity dependency[13,14,15].
(iv)
Collective Motion: Within an ommatidium, R1-8 photoreceptor movements are interdependent. When one photoreceptor is activated by light, all R1-8 photoreceptors move together in a unified direction[15]. This coordinated motion likely arises from the photoreceptors’ structural pivoting and the linkage of their rhabdomere tips to the ommatidial cone cells via adherens junctions[13,62]. If all photoreceptors are activated, their combined microsaccades produce a larger collective movement[15]. A local UV light stimulus activating all R1-6 photoreceptors and R7 results in the largest microsaccades[15].
(v)
Asymmetry and Tiling: The asymmetric arrangement of R1-8 photoreceptors around the ommatidium lens causes R1-6 photoreceptors’ receptive fields (coloured Gaussians), pooled from neighbouring ommatidia, to tile the visual space over-completely[15,63] and move independently (correspondingly coloured arrows) in slightly different directions during synaptic transmission to large monopolar cells[15].
(B) Global Image Sampling Movements: Drosophila uses retinal, head, and body movements to adapt and enhance its vision in response to behavioural needs and environmental changes.
(i)
Top-Down Control: In closed-loop interactions with the environment, the fly brain exerts global control over visual information by executing attentive perception[64,65,66] and adaptive behaviours.
(ii)
Goal-Directed Behaviours: The fly brain coordinates translational, rotational[67] and vergence movements[35] through retinal motoneurons and effector neurons that control muscles, ligaments, and tendons.
(iii)
Self-Motion-Induced Optic Flow: Retinal, head, and body movements, including saccades[33,67,68] and vergence[35,36,37], shift the entire retina, refreshing neural images across the visual field and preventing perceptual fading due to fast adaptation. The interplay between these global movements and (A) orientation-sensitive photoreceptor microsaccades generates complex spatiotemporal sampling dynamics. While microsaccades can independently enhance neural responses to local visual changes, whole-retina movements rely on this interaction for full effectiveness. Each retinal movement alters the photoreceptor light input[15], with moving scenes triggering photomechanical microsaccades (rippling wave patterns synchronised with contrast changes) across the retina, except in complete darkness or uniform, zero-contrast environments.
(iv)
Coordinated Adjustments and Activity State: Retinal muscles enable the left and right retinae (illustrated by the left and right ommatidial matrix) to move independently[35] (red and blue four-headed arrows), providing precise control during attentive viewing[35,64,65], including optokinetic retinal tracking and other behaviours. For example, by pulling the retinae inward (convergence) or outward (divergence), these muscles can adjust the number of frontal photoreceptors involved in stereopsis, dynamically altering the eyes’ stereo range while preserving the integrity of the compound eyes’ lens surface and surrounding exoskeleton. Interestingly, whole-retina movements, driven by retinal motor neuron activity, are seldom observed in intact, fully immobilised, head-fixed Drosophila, such as during intracellular electrophysiological recordings or in vivo photoreceptor imaging[13,14,15]. Instead, one occasionally notes slow retinal drifting, likely due to changes in muscle tone affecting retinal tension, which necessitates recentring the light stimulus[13]. However, whole-retina movements become more frequent and pronounced when the flies are less restricted and actively engaged in stimulus tracking or visual behaviours, such as during tethered flight or while walking on a trackball[35,36,37]. These findings align with previous observations from two-photon calcium imaging[69,70,71,72,73] and extracellular electrophysiology[64,66,74], which demonstrate that the fly’s behavioural state influences neural activity in the brain’s visual processing centres.
Because compound eyes extend from the rigid head exoskeleton, appearing stationary to an outside observer, the prima facie assumption is that their inner workings would also be immobile[52,53,75]. Therefore, as the eyes’ ommatidial faceting sets their photoreceptor spacing, the influential static theory of compound eye optics postulates that insects can only see a “pixelated” low-resolution image of the world. According to this traditional static viewpoint, the ommatidial grid limits the granularity of the retinal image and visual acuity. Resolving two stationary objects requires at least three photoreceptors, and this task becomes more challenging when objects are in motion, further reducing visual acuity. The presumed characteristics associated with small static compound eyes, including large receptive fields, slow integration times, and spatial separation of photoreceptors, commonly attributed to spherical geometry, contribute to motion blur that impairs the detailed resolution of moving objects within the visual field[52]. As a result, male Drosophila relying on coarse visual information face a real dilemma in distinguishing between a receptive female fly and a hungry spider. To accurately differentiate, the male must closely approach the subject to detect distinguishing characteristics such as body shape, colour patterns, or movements. In this context, the difference between sex and death may hinge on an invisible line.
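To make the static argument concrete, the following back-of-envelope sketch in Python (ours, not from the cited papers) contrasts the classical pixelation limit set by the ~5° interommatidial angle (Figure 2E) with the ~1-4° hyperacute wavelengths reported behaviourally (Figure 6C):

```python
# Back-of-envelope sketch (ours): the classical resolution limit of a
# static photoreceptor lattice versus reported hyperacute performance.
# A static grid needs >= 2 samples per grating cycle, so the finest
# resolvable wavelength is ~2x the angular sample spacing.
import numpy as np

interommatidial_angle = 5.0                       # deg, Drosophila average
static_limit = 2 * interommatidial_angle          # ~10 deg per cycle

# Behavioural tests report responses down to ~1-4 deg wavelengths,
# i.e. roughly 4-10x beyond the static pixelation limit.
hyperacute_wavelengths = np.array([1.0, 2.5])     # deg, illustrative values
print(f"static limit: {static_limit:.0f} deg/cycle")
print("enhancement factors:", static_limit / hyperacute_wavelengths)
```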
Recent studies on Drosophila have challenged the notion that fixed factors such as photoreceptor spacing, integration time, and receptive field size solely determine visual acuity[13,15]. Instead, these characteristics are dynamically regulated by photoreceptor photomechanics[13,14,15], leading to significant improvements in vision through morphodynamic processes. In the following subsections, we begin by explaining how microsaccadic movements of photoreceptors enable hyperacute image sampling (Figure 2), phasic image contrast enhancement (Figure 3), information maximisation during saccadic behaviours (Figure 4), hyperacute stereovision (Figure 5), and antialiased vision (Figure 6). We then relate these predominantly local image sampling dynamics to the global movements of the retina, head, and entire animal in goal-oriented visual behaviours. Finally, we discuss the generic benefits of neural morphodynamics. Through specific examples, we explore how morphodynamic information sampling and processing adapt to maximise information allocation in neural channels (Figure 7). We also link multiscale observations with Gedankenexperiments to envision how these ultrafast phasic processes synchronise the brain’s neural representation of the external world with its dynamic changes (Figure 8), thereby enhancing cognitive abilities and efficiency. Some of these concepts related to neural computations, intelligence, and future technologies are further explored in Text Boxes 2–4.
Figure 2. Photomechanical photoreceptor microsaccades enhance insect vision through adaptive compound eye optics. (A) High-speed infrared deep-pseudopupil microscopy [14,15] uncovers the intricate movement dynamics and specific directions of light-induced photoreceptor microsaccades across the compound eyes in living Drosophila. Fully immobilising the flies inside a pipette tip minimises whole-retina movements, allowing one to record photoreceptor microsaccade dynamics in isolation[14,15]. (B) During a microsaccade within an ommatidium, the R1-R7/8 photoreceptors undergo rapid axial (inward) contraction and sideways movement along the R1-R2-R3 direction, executing a complex piston motion[14,15]. Meanwhile, the lens positioned above them, as an integral component of the rigid exoskeleton, remains stationary[13]. (C) When a moving light stimulus, such as two bright dots, traverses a photoreceptor’s (shown for R5) receptive field (RF), the photoreceptor rapidly contracts away from the lens, causing the RF to narrow[13,15]. Simultaneously, the photoreceptor’s swift sideways movement, aided by the lens acting as an optical lever, results in the RF moving in the opposite direction (of about 40-60°/s, illustrated here for movement with or against the stimuli). As a result, in a morphodynamic compound eye, the photoreceptor responses (depicted by blue and red traces) can detect finer and faster changes in moving stimuli than what the previous static compound eye theory predicts (represented by black traces). (D) Microsaccades result from photomechanical processes involving refractory photon sampling dynamics within the 30,000 microvilli [8,13,15,61], which comprise the light-sensitive part of a photoreceptor known as the rhabdomere. Each microvillus encompasses the complete phototransduction cascade, enabling the conversion of successful photon captures into elementary responses called quantum bumps. This photomechanical refractory sampling mechanism empowers photoreceptors to consistently estimate changes in environmental light contrast across a wide logarithmic intensity range. The intracellularly recorded morphodynamic quantal information sampling and processing (represented by dark blue traces) can be accurately simulated under various light conditions using biophysically realistic stochastic photoreceptor sampling models (illustrated by cyan traces) [13,15,76]. (E) Drosophila photoreceptor microsaccades shift their rhabdomeres sideways by around 1-1.5 µm (maximum < 2 µm), resulting in receptive field movements of approximately 3-4.5° in the visual space. The receptive field half-widths of R1-6 photoreceptors cover the entire visual space, ranging from 4.5-6°. By limiting the micro-scanning to the interommatidial angle, Drosophila integrates a neural image that surpasses the optical limits of its compound eyes. Honeybee photoreceptor microsaccades shift their receptive fields by < 1°, smaller than the average receptive field half-width (~1.8°) at the front of the eye. This active sampling strategy in honeybees is similar to Drosophila and suggests that honeybee vision also surpasses the static pixelation limit of its compound eyes [14]. Data are modified from the cited papers.
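The receptive-field dynamics in Figure 2C can be illustrated with a minimal sketch under our own, deliberately simple parameterisation (a Gaussian receptive field with a fixed counter-speed and narrowed width drawn from the ranges quoted above); this is not the published photoreceptor model:

```python
# Sketch (ours) of the Figure 2C comparison: a static Gaussian receptive
# field versus a morphodynamic one that narrows and slides against the
# stimulus motion. Two dots, 3 deg apart, merge into one response peak
# in the static case but separate in the morphodynamic case.
import numpy as np

dt, v = 1e-3, 50.0                            # step (s), dot speed (deg/s)
t = np.arange(0.0, 0.4, dt)
dots = [-10.0 + v * t, -13.0 + v * t]         # two dots, 3 deg apart

def rf_response(counter_speed, sigma):
    centre = -counter_speed * t               # RF slides against the motion
    return sum(np.exp(-((d - centre) ** 2) / (2 * sigma ** 2)) for d in dots)

static = rf_response(0.0, 2.0)                # fixed RF, ~5 deg FWHM
morpho = rf_response(50.0, 1.0)               # counter-moving, narrowed RF

def n_peaks(r, thr=0.2):
    mid = r[1:-1]                             # count supra-threshold maxima
    return int(np.sum((mid > r[:-2]) & (mid > r[2:]) & (mid > thr)))

print("static peaks:", n_peaks(static), "morphodynamic peaks:", n_peaks(morpho))
```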
Figure 3. Saccadic Turns and Fixation Periods Enhance Information Extraction in Drosophila. (A) A representative walking trajectory of a fruit fly[67]. (B) Angular velocity and yaw of the recorded walk. (C) A 360° natural scene utilised to generate three distinct time series of light intensity[13]. The dotted white line indicates the intensity plane employed during the walk. The blue trace represents light intensity over time generated by overlaying the walking fly’s yaw dynamics (A-B) onto the scene. The red trace corresponds to the time series of light intensity obtained by scanning the scene at the median velocity of the walk (linear: 63.3°/s). The grey trace depicts the time series of light intensity obtained using shuffled walking velocities. Brief saccades and longer fixation periods introduce burst-like patterns to the light input. (D) These light intensity time series were employed as stimuli in intracellular photoreceptor recordings and simulations using a biophysically realistic stochastic photoreceptor model. Both the recordings and simulations showed that saccadic viewing enhances information transmission in R1-6 photoreceptors, indicating that this mechanism has evolved with refractory photon sampling to maximise information capture from natural scenes[13]. Immobilising the flies (their head, proboscis and thorax) with beeswax[87,88] in a conical holder minimises whole-retina movements[13,14,15], enabling high signal-to-noise recording conditions to study photoreceptors’ voltage responses to dynamic light stimulation[13]. Data are modified from the cited papers.
Figure 4. The mirror-symmetric ommatidial photoreceptor arrangement and morphodynamics of the left and right eyes enhance detection of moving objects during visual behaviours. (A) The photoreceptor rhabdomere patterns (as indicated by their rotating orientation directions: yellow and green arrows) of the ommatidial left and right eyes (inset images) exhibit horizontal and ventral mirror symmetry, forming a concentrically expanding diamond shape[14,15,106]. (B) When a moving object, such as a fly, enters the receptive fields (RFs) of the corresponding frontal left and right photoreceptors (indicated by red and blue beams), the resulting light intensity changes cause the photoreceptors to contract mirror-symmetrically. (C) The half-widths of the frontal left and right eye R6 photoreceptors’ RFs (disks), projected 5 mm away from the eyes[15]. Red circles represent the RFs of neighbouring photoreceptors in the left visual field, blue in the right. (D) Contraction (light-on) moves R1-R7/8 photoreceptors (left) in R3-R2-R1 direction (fast-phase), recoil (light-off) returns them in opposite R1-R2-R3 direction (slow-phase)[14,15]. The corresponding fast-phase (centre) and slow-phase (right) RF vector maps. (E) The fast-phase RF map compared to the forward flying fly’s optic flow field (centre), as experienced with the fly head upright[15]. Their difference is shown right. The fast-phase matches the ground flow (light yellow pixels), while the opposite slow-phase (dark yellow pixels) matches the sky flow[15]. (F) During yaw rotation, the mirror-symmetric movement of the photoreceptor RFs in the left and right eyes enhances the binocular contrast differences in the surrounding environment (sample visualisation as panel E). Immobilising the flies inside a pipette tip, as was done for these recordings, minimises whole-retina movements, allowing for the isolated study of photoreceptor microsaccade dynamics[14,15]. Data are modified from the cited papers.
Figure 5. Drosophila visual behaviours exhibit hyperacute 3D vision, aligning with morphodynamic compound eye modelling.
(A) Drosophila compound eyes’ depth perception constraints and the computations for morphodynamic triangulation of object depth (z)[15]. k is the distance between the corresponding left and right eye photoreceptors, and t is their time delay. tc is the time delay between the neighbouring photoreceptors in the same eye. The left eye is represented by the red receptive field (RF), while the right eye is represented by the blue RF. Simulated voltage responses (top) of three morphodynamically responding R6 photoreceptors when a 1.7° x 1.7° object (orange) moves across their overlapping RFs at a speed of 50°/s and a distance of 25 mm. The corresponding binocular cross-correlations (bottom), which represent the depth information, likely occur in the retinotopically organised neural cartridges of the lobula optic lobe, where location-specific ipsi- and contralateral photoreceptor information is pooled (green LC14 neuron[15]). Time delays are measured between the maximum correlations (vertical lines) and the moment the object crosses the RF centre of the left R6 photoreceptor (vertical dashed line). (B) In neural superposition wiring[111], the R1-6 photoreceptors originating from six neighbouring ommatidia sample a moving stimulus (orange dot). Their overlapping receptive fields (RFs; coloured rings) swiftly bounce along their predetermined microsaccade directions (coloured arrows; see also Figure 4D) as the photoreceptors transmit information to large monopolar cells (LMCs, specifically L1-L3, with L2 shown) and the lamina amacrine cells. While R7/8 photoreceptors share some information with R1 and R6 through gap junctions[105], they establish their synapses in the medulla. Simulations reveal the superpositional R1-R7/8 voltage responses (coloured traces) with their phase differences when a 1.7° x 1.7° dot traverses their receptive fields at 100°/s (orange dot). 2-photon imaging of L2 terminals’ Ca2+ responses to a dynamically narrowing black-and-white grid moving in different directions shows L2 monopolar cells generating hyperacute (<5°; cf. Figure 2B-C,E) responses along the same microsaccade movement axis (coloured arrows) of the superpositioned photoreceptors that feed information to them (cf. Figure 4). (C) In a visual learning experiment, a tethered, head-immobilised Drosophila flies in a flight simulator. The fly is positioned at the centre of a panoramic arena to prevent it from perceiving motion parallax cues[15]. The arena features two hyperacute dots placed 180° apart and two equally sized 3D pins positioned perpendicular to the dots. The fly generates subtle yaw torque signals to indicate its intention to turn left or right, allowing it to explore the visual objects within the arena. These signals are used to rotate the arena in the opposite direction of the fly’s intended turns, establishing a synchronised feedback loop. During the training phase, a heat punishment signal is associated with either the dot or the 3D pin stimulus, smaller than an ommatidial pixel at this distance, delivered through an infrared laser. After training, without any heat punishment, the extent to which the fly has learned to avoid the tested stimulus is measured. Flies with normal binocular vision (above) exhibit significant learning scores, indicating their ability to see the dots and the pins as different objects. However, flies with monocular vision (one eye painted black, middle) or mutants that exhibit lateral photoreceptor microsaccades only in one eye (below) cannot learn this task.
These results show that Drosophila has hyperacute stereovision[15]. Notably, this flight simulator-based setup did not allow simultaneous monitoring of photoreceptor microsaccades and whole-retina movements, both likely crucial to Drosophila stereovision and the observed visual behaviours. Data are modified from the cited papers.
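The triangulation in panel A can be sketched computationally. The following Python toy (ours) uses the caption’s quantities (k, t and tc) with a small-angle geometry that is our simplifying assumption, not necessarily the published computation: the same-eye delay tc yields the object’s angular velocity, and the binocular delay t then converts the parallax angle (~k/z) into depth. Delays are read from cross-correlation peaks, as in the figure; Gaussian bumps stand in for the simulated R6 voltages.

```python
# Toy triangulation of object depth z from binocular timing (panel A).
# Assumptions (ours): small angles; the same-eye delay tc gives angular
# velocity omega = phi/tc, and the left-right delay t gives parallax
# omega*t ~ k/z, hence z ~ k*tc/(phi*t).
import numpy as np

def peak_lag(a, b, dt):
    """Delay (s) of trace a relative to trace b (positive: a lags)."""
    xc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return (np.argmax(xc) - (len(b) - 1)) * dt

dt = 1e-4                            # 0.1 ms steps
phi = np.deg2rad(5.0)                # interommatidial angle
k = 0.4e-3                           # m, left-right separation (assumed)

time = np.arange(0.0, 0.5, dt)
bump = lambda t0: np.exp(-((time - t0) / 0.02) ** 2)
left, left_next, right = bump(0.20), bump(0.25), bump(0.22)

tc = peak_lag(left_next, left, dt)   # same-eye delay -> angular velocity
t = peak_lag(right, left, dt)        # binocular delay -> parallax
omega = phi / tc                     # rad/s
z = k / (omega * t)                  # depth (m)
print(f"tc = {tc*1e3:.1f} ms, t = {t*1e3:.1f} ms, z = {z*1e3:.1f} mm")
```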
Figure 6. Stochasticity and variations in the ommatidial photoreceptor grid structure and function combat spatiotemporal aliasing in morphodynamic information sampling and processing. (A) Drosophila R1-R7/8 photoreceptors are differently sized and asymmetrically positioned[13,15], forming different numbers of synapses with interneurons[3] (L1-L4). Moreover, R7y and R7p receptors’ colour sensitivity[115] establishes a random-like sampling matrix, consistent with anti-aliasing sampling[13,116]. The inset shows similar randomisation for the macaque retina[117] (red, green and blue cones). (B) Demonstration of how a random sampling matrix eliminates aliasing[13]. An original sin(x² + y²) image at 0.1 resolution. Under-sampling this image at 0.2 resolution with a regular sampling matrix leads to aliasing: ghost rings appear (pink square), which the nervous system cannot differentiate from the original real image. Sampling the original image with a 0.2-resolution random matrix loses some of its fine resolution due to broadband noise, but the sampling is aliasing-free. (C) In the flight simulator optomotor paradigm, a tethered head-fixed Drosophila robustly responds to hyperacute stimuli (tested from ~0.5° to ~4° wavelengths) at different velocities (tested from 30°/s to 500°/s). However, flies show a response reversal to a 45°/s rotating 6.4°-stripe panorama. In contrast, monocular flies, with one eye painted black, do not reverse their optomotor responses, indicating that the reversal response is not induced by spatial aliasing[15]. Notably, this flight simulator-based setup did not allow for the simultaneous monitoring of photoreceptor microsaccades and whole-retina movements, both of which must contribute to the flies’ optomotor behaviour. (D) The compound eyes’ active stereo information sampling integrates body and head movements and global retina movements with local photomechanical photoreceptor microsaccades. Data are modified from the cited papers.
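Panel B’s demonstration is easy to reproduce numerically. Below is a minimal numpy rendition (ours; the grid extent and jitter amplitude are illustrative choices) showing that regular under-sampling concentrates the error into coherent ghost structure while jittered sampling spreads it as broadband noise:

```python
# Minimal rendition of the anti-aliasing demonstration in (B):
# under-sampling sin(x^2 + y^2) on a regular grid folds fine rings into
# spurious low-frequency "ghost rings" (aliasing); jittering the sample
# positions (a random matrix) converts the same error into noise.
import numpy as np

rng = np.random.default_rng(1)
image = lambda x, y: np.sin(x**2 + y**2)

coarse = np.arange(0.0, 10.0, 0.2)           # 0.2-resolution sampling
gx, gy = np.meshgrid(coarse, coarse)
regular = image(gx, gy)                      # regular matrix: ghost rings

jx = gx + rng.uniform(-0.1, 0.1, gx.shape)   # jittered (random) matrix,
jy = gy + rng.uniform(-0.1, 0.1, gy.shape)   # same mean sample density
random_matrix = image(jx, jy)                # aliasing-free but noisier

# Aliasing shows up as concentrated spectral energy; jitter spreads it.
for name, img in (("regular", regular), ("jittered", random_matrix)):
    spec = np.abs(np.fft.fft2(img - img.mean()))
    print(name, "spectral peak/mean:", round(spec.max() / spec.mean(), 1))
```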
Figure 7. Pre- and postsynaptic morphodynamic sampling adapt to optimise information allocation in neural channels. (A) Adaptation enhances sensory information flow over time. R1-6 photoreceptor (above) and LMC voltage responses (below), as recorded intracellularly from Drosophila compound eyes in vivo, to a repeated naturalistic stimulus pattern, NS[45]. The recordings show how these neurons’ information allocation changes over time (for the 1st, 2nd and 20th s of stimulation). The LMC voltage modulation grows rapidly over time, whereas the photoreceptor output changes less, indicating that most adaptation in the phototransduction occurs within the first second. Between these traces are their probability and joint probability density functions (“hot” colours denote high probability). Notably, the mean synaptic gain increases dynamically, as indicated by the shape of the joint probability density; white lines highlight its steepening slope during repetitive NS[45]. (B) LMC output sensitises dynamically[45]: its probability density flattens and widens over time (arrows; from blue to green), causing a time-dependent upwards trend in standard deviation (SD). Simultaneously, its frequency distribution changes. Because both its low- (up arrow) and high-frequency (upper right) content increase while R1-6 output is less affected, the synapse allocates information more evenly within the LMC bandwidth over time. (C) Left: Signal-to-noise ratio (SNR) of Drosophila R1-6 photoreceptor responses to 20 Hz (red), 100 Hz (yellow), and 500 Hz (blue) saccade-like contrast bursts[13]. SNR increases with contrast (right) and reaches its maximum value (~6,000) for 20 Hz bursts (red, left), while 100 Hz bursts (yellow) exhibit the broadest frequency range. Right: Information transfer rate comparisons between photoreceptor recordings and stochastic model simulations for saccadic light bursts and Gaussian white noise stimuli of varying bandwidths[13]. The estimated information rates from both recorded and simulated data closely correspond across the entire range of encoding tested. This indicates that the morphodynamic refractory sampling (as performed by 30,000 microvilli) generates the most information-rich responses to saccadic burst stimulation. (D) Adaptation to repetitive naturalistic stimulation shows phasic scale-invariance to pattern speed. A 10,000-point naturalistic stimulus sequence (NS) was presented and repeated at different playback velocities, lasting from 20 s (0.5 kHz) to 333 ms (30 kHz)[45]. The corresponding intracellular photoreceptor (top trace) and LMC (middle trace) voltage responses are shown. The coloured sections highlight stimulus-specific playback velocities used during continuous recording. (E) The time-normalised shapes of the photoreceptor (above) and LMC (below) responses depict similar aspects of the stimulus, regardless of the playback velocity used (ranging from 0.5 to 30 kHz)[45]. The changes in the naturalistic stimulus speed, which follow the time-scale invariance of 1/f statistics, keep the power within the frequency range of LMC responses relatively consistent. Consequently, LMCs can integrate similar-sized responses (contrast constancy) for the same stimulus pattern, irrespective of its speed[45]. These responses are predicted to drive the generation of self-similar (scalable) action potential representations of the visual stimuli in central neurons. Data are modified from the cited papers.
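For readers who want the computational gist of panel C, the sketch below implements one standard Shannon estimator for a Gaussian channel, R = ∫ log2(1 + SNR(f)) df, with signal and noise separated by repeat-trial averaging; the synthetic traces are placeholders, and the published analyses[13,80] use more refined estimators than this bare-bones version.

```python
# Sketch (ours) of a repeat-trial SNR and information-rate estimate:
# average over trials to get the signal, treat residuals as noise,
# then integrate log2(1 + SNR(f)) over frequency.
import numpy as np

def info_rate(responses, fs):
    """responses: trials x samples array; returns bits per second."""
    mean = responses.mean(axis=0)                   # signal estimate
    noise = responses - mean                        # per-trial residuals
    sig_psd = np.abs(np.fft.rfft(mean)) ** 2
    noi_psd = (np.abs(np.fft.rfft(noise, axis=1)) ** 2).mean(axis=0)
    snr = sig_psd / np.maximum(noi_psd, 1e-12)
    df = fs / responses.shape[1]                    # bin width (Hz)
    return float(np.sum(np.log2(1.0 + snr)) * df)

fs, n = 1000, 2000                                  # 1 kHz, 2 s trials
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 20 * t)                 # 20 Hz burst-like proxy
rng = np.random.default_rng(0)
trials = signal + 0.3 * rng.standard_normal((30, n))
print(f"~{info_rate(trials, fs):.0f} bits/s")
```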
Figure 8. Synchronised minimal-delay brain activity. (A) A Drosophila has three electrodes inserted into its brain: right (E1) and left (E2) lobula/lobula plate optic lobes and reference (Ref). It flies in a flight simulator seeing identical scenes of black and white stripes on its left and right[64]. When the scenes are still, the fly flies straight, and the right and left optic lobes show little activity; there is only a sporadic spike, and the local field potentials (LFPs) are flat (E2, blue; E1, red traces). When the scenes start to sweep in opposing directions, it takes less than 20 ms (yellow bar) for the optic lobes to respond to these visual stimuli (first spikes, and dips in LFPs). Interestingly, separate intracellular photoreceptor and large monopolar cell (LMC) recordings in response to a 10 ms light pulse show comparable time delays, peaking on average at 15 ms and 10 ms, respectively. Given that lobula and lobula plate neurons, which generate the observed spike and LFP patterns, are at least three synapses away from photoreceptors, the neural responses at different processing layers (retina, lamina, lobula plate) are closely synchronised, indicating minimal delays. Even though the fly brain has already received the visual information about the moving scenes, the fly makes few adjustments to its flight path, and the yaw torque remains flat. Only after a minimum of 210 ms of stimulation does the fly finally choose the left stimulus by attempting to turn left (dotted line), seen as intensifying yaw torque (downward). (B) Brief high-intensity X-ray pulses activate Drosophila photoreceptors[15], causing photomechanical photoreceptor microsaccades across the eyes (characteristic retina movement). Virtually simultaneously, other parts of the brain also move, as shown for the lamina, medulla and central brain. (C) During 2-photon imaging, L2-monopolar cell terminals can show mechanical jitter (grey noisy trace) that is synchronised with a moving stimulus[15] (vertical stripes). (D) Drosophila brain networks likely utilise multiple synchronised morphodynamic neural pathways to integrate a continuously adjusted, combinatorial, and distributed neural representation of a lemon, leading to its coherent and distinct object perception. Data are modified from the cited papers.

2.1. Photoreceptor Microsaccades Enhance Vision

Intricate experiments (Figure 2A) have revealed that photoreceptors rapidly move in response to light intensity changes[13,14,15]. Referred to as high-speed photomechanical microsaccades[13,15], these movements, which resemble a complex piston motion (Figure 2B), occur in less than 100 milliseconds and involve simultaneous axial contraction and lateral swinging of the photoreceptors within a single ommatidium. These local morphodynamics result in adaptive optics (Figure 2C), enhancing spatial sampling resolution and sharpening moving light patterns over time by narrowing and shifting the photoreceptors’ receptive fields[13,15].
To understand the core concept and its impact on compound eye vision, let us compare the photoreceptors’ receptive fields to image pixels in a digital camera (Figure 2E). Imagine shifting the camera sensor, capturing two consecutive images with a 1/2-pixel displacement. This movement effectively doubles the spatial image information. By integrating these two images over time, the resolution is significantly improved. However, if a pixel moves even further, it eventually merges with its neighbouring pixel (provided the neighbouring pixel remains still and does not detect changes in light). As a result of this complete pixel fusion, the acuity decreases since the resulting neural image will contain fewer pixels. Therefore, by restricting photoreceptors’ micro-scanning to the interommatidial angle, Drosophila can effectively time-integrate a neural image that exceeds the optical limits of its compound eyes.
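A toy numerical version of this half-pixel argument (ours; the scene and pixel size are arbitrary) shows how interleaving two shifted frames doubles the effective sampling density:

```python
# Toy version of the half-pixel argument: a grating too fine for the
# static grid (1.5 samples/cycle, below the 2 samples/cycle Nyquist
# minimum) becomes resolvable after interleaving a second frame taken
# with the grid shifted by half a pixel. Values are illustrative.
import numpy as np

pixel = 1.0                                    # "ommatidial" pixel width
x = np.arange(0.0, 48.0, pixel)                # static sampling grid
scene = lambda p: np.sin(2 * np.pi * p / 1.5)  # 1.5-unit wavelength grating

frame1 = scene(x)                              # static eye: aliased
frame2 = scene(x + pixel / 2)                  # after a half-pixel shift

merged = np.empty(2 * x.size)                  # interleave in sampling order
merged[0::2], merged[1::2] = frame1, frame2    # -> 0.5-unit spacing

for name, spacing in (("static", pixel), ("merged", pixel / 2)):
    print(name, "samples per cycle:", 1.5 / spacing)
```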

2.2. Microsaccades Are Photomechanical Adaptations in Phototransduction

Drosophila photoreceptors exhibit a distinctive toothbrush-like morphology characterised by their “bristled” light-sensitive structures known as rhabdomeres. In the outer photoreceptors (R1-6), there are approximately 30,000 bristles, called microvilli, which act as photon sampling units (Figure 2D) [8,13,15]. These microvilli collectively function as a waveguide, capturing light information across the photoreceptor’s receptive field[14,15]. Each microvillus compartmentalises the complete set of phototransduction cascade reactions[61], contributing to the refractive index and waveguide properties of the rhabdomere[77]. The phototransduction reactions within rhabdomeric microvilli of insect photoreceptors generate ultra-fast contractions of the whole rhabdomere caused by the PLC-mediated cleavage of PIP2 headgroups (InsP3) from the microvillar membrane[8,61]. These photomechanics rapidly adjust the photoreceptor, enabling it to dynamically adapt its light input as the receptive field reshapes and interacts with the surrounding environment. Because photoreceptor microsaccades directly result from phototransduction reactions [8,13,15,61], they are an inevitable consequence of compound eye vision. Without microsaccades, insects with microvillar photoreceptors would be blind [8,13,15,61].
Insects possess impressively rapid vision, operating approximately 3 to 15 times faster than our own. This remarkable ability stems from the microvilli’s swift conversion of captured photons into brief unitary responses (Figure 2D; also known as quantum bumps[61]) and their ability to generate photomechanical micromovements[8,13] (Figure 2C). Moreover, the size and speed of microsaccades adapt to the microvilli population’s refractory photon sampling dynamics[13,76] (Figure 2D). As light intensity increases, both the quantum efficiency and the duration of photoreceptors’ quantum bumps decrease[76,78], resulting in more transient microsaccades[13,15]. These adaptations extend the dynamic range of vision[76,79] and enhance the detection of environmental contrast changes[13,80], making visual objects highly noticeable under various lighting conditions. Consequently, Drosophila can perceive moving objects across a wide range of velocities and light intensities, surpassing the resolution limits of the static eye’s pixelation by 4-10 times (Figure 2E; the average inter-ommatidial angle, φ ≈ 5°)[13,15].
Morphodynamic adaptations involving photoreceptor microvilli play a crucial role in insect vision by enabling rapid and efficient visual information processing. These adaptations lead to contrast-normalised (Figure 2D) and more phasic photoreceptor responses, achieved through significantly reduced integration time[13,80,81]. Evolution further refines these dynamics to match species-specific visual needs (Figure 2E). For example, honeybee microsaccades are smaller than those of Drosophila[14], corresponding to the positioning of honeybee photoreceptors farther away from the ommatidium lenses. Consequently, reducing the receptive field size and interommatidial angles in honeybees is likely an adaptation that allows optimal image resolution during scanning[14]. Similarly, fast-flying flies such as houseflies and blowflies, characterised by a higher density of ommatidia in their eyes, are expected to exhibit smaller and faster photoreceptor microsaccades than slower-flying Drosophila with fewer and less densely packed ommatidia[15]. This adaptation enables the fast-flying flies to capture visual information at higher speed[76,80,82,83] and resolution, albeit at a higher metabolic cost[80].

2.3. Microsaccades Maximise Information during Saccadic Behaviours

Photoreceptors’ microsaccadic sampling likely evolved to align with animals’ saccadic behaviours, maximising visual information capture[13,15]. Saccades are utilised by insects and humans to explore their environment (Figure 1I–L), followed by fixation periods where the gaze remains relatively still[34]. Previously, it was believed that detailed information was only sampled during fixation, as photoreceptors were thought to have slow integration times, causing image blurring during saccades[34]. However, fixation intervals can lead to perceptual fading through powerful adaptation, reducing visual information and potentially limiting perception to average light levels[13,84,85]. Therefore, to maximise information capture, fixation durations and saccade speeds should dynamically adapt to the statistical properties of the natural environment[13]. This sampling strategy would enable animals to efficiently adjust their behavioural speed and movement patterns in diverse environments, optimising vision, for example, by moving slowly in darkness and faster in daylight[13].
To investigate this theory, researchers used the recorded body yaw velocities of walking fruit flies[67] to move natural images across photoreceptors’ receptive fields, sampling their light intensity information[13] (Figure 3). Saccadic viewing of these images improved the photoreceptors’ information capture compared to linear or shuffled-velocity walks[13], because it generated bursty, high-contrast stimulation that maximised the photoreceptors’ ability to gather information. Specifically, the photomechanical and refractory phototransduction reactions of Drosophila R1-6 photoreceptors, associated with motion vision[86], were found to be finely tuned to saccadic behaviour for sampling quantal light information, enabling them to capture 2-to-4-times more information in a given time than previous estimates suggested[13,78].
Further analysis, utilising multiscale biophysical modelling[81], investigated the stochastic refractory photon sampling by 30,000 microvilli[13]. For readers interested in more details, Text Box 2 graphically illustrates the basic principles of stochastic quantal refractory sampling. The findings revealed that the improved information capture during saccadic viewing can be attributed to the interspersed fixation intervals[13,79]. When fixating on darker objects, which alleviates microvilli refractoriness, photoreceptors can sample more information from transient light changes, capturing larger photon rate variations[13]. Photomechanical photoreceptor movements and refractory sampling worked synergistically to enhance spatial acuity, reduce motion blur during saccades, facilitate adaptation during gaze fixation, and emphasise instances when visual objects crossed a photoreceptor’s receptive field. Consequently, the encoding of high-resolution spatial information was achieved through the temporal mechanisms induced by physical motion[13].
Text Box 2. Visualising Refractory Quantal Computations. By utilising powerful multi-scale morphodynamic neural models[13,15,76], we can predict and analyse the generation and integration of voltage responses during morphodynamic quantal refractory sampling and compare these simulations to actual intracellular recordings for similar stimulation[13,15,76]. This approach, combined with information-theoretical analyses[76,80,88,89], allows us to explain how phasic response waveforms arise from ultrafast movements and estimate the signal-to-noise ratio and information transfer rate of the neural responses. Importantly, these methods are applicable for studying the morphodynamic functions of any neural circuit. To illustrate the analytic power of this approach, we present a simple example: an intracellular recording (whole-cell voltage response) of dark-adapted Drosophila photoreceptors (C) to a bright light pulse. See also Figure 2D, which shows morphodynamic simulations of how a photoreceptor responds to two dots crossing its receptive field from east to west and west to east directions.
[Text Box 2 figure: (A) photon flow over time, (B) quantum bumps from individual microvilli, (C) the summed macroscopic voltage response.]
An insect photoreceptor’s sampling units - e.g., 30,000 microvilli in a fruit fly or 90,000 in a blowfly R1-6 - count photons as variable samples (quantum bumps) and sum these up into a macroscopic voltage response, generating a reliable estimate of the encountered light stimulus. For clarity, visualise the light pulse as a consistent flow of photons, or golden balls, over time (A). The quantum bumps that the photons elicit in individual microvilli can be thought of as silver coins of various sizes (B). The photoreceptor persistently counts these “coins” produced by its microvilli, thus generating a dynamically changing macroscopic response (C, depicted as a blue trace). These basic counting rules[90] shape the photoreceptor response:
  • Each microvillus can produce only one quantum bump at a time [76,91,92,93].
  • After producing a quantum bump, a microvillus becomes refractory for up to 300 ms (in Drosophila R1-6 photoreceptors at 25°C) and cannot respond to other photons[91,94,95].
  • Quantum bumps from all microvilli sum up to form the macroscopic response[76,91,92,93,96].
  • Microvilli availability sets a photoreceptor’s maximum sample rate (quantum bump production rate), adapting its macroscopic response to a light stimulus[76,93].
  • Global Ca2+ accumulation and membrane voltage affect samples of all microvilli. These global feedbacks strengthen with brightening light to reduce the size and duration of quantum bumps, adapting the macroscopic response[78,88,97,98].
Adaptation in the macroscopic response (C) to continuous light (A) is mostly caused by a reduction in the number and size of quantum bumps over time (B). When the stimulus starts, a large portion of the microvilli is simultaneously activated (A i and B i), but they subsequently enter a refractory state (A ii and B ii). This means that a smaller fraction of microvilli can respond to the following photons in the stimulus until more microvilli become available again. As a result, the number of activated microvilli initially peaks and then rapidly declines, eventually settling into a steady state (A iii and B iii) as the balance between photon arrivals and refractory periods is achieved. If all quantum bumps were identical, the macroscopic current would simply track the number of activated microvilli set by the photon rate, producing a steady-state response. In reality, the light-induced current decays further towards a lower plateau because quantum bumps adapt to brighter backgrounds (A iii and B iii), becoming smaller and briefer[78,88]. This adaptation is caused by global negative feedback, Ca2+-dependent inhibition of microvillar phototransduction reactions[97,98,99,100,101]. Additionally, the concurrent increase in membrane voltage compresses responses by reducing the electromotive force for the light-induced current across all microvilli[76]. Together, these adaptive dynamics enhance phasic photoreceptor responses, similar to encoding phase congruency[102].
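These counting rules are simple enough to simulate directly. The following sketch (in Python) is a deliberately minimal Monte Carlo caricature of the published multiscale models[13,76,81]: it assumes Poisson photon arrivals, uniformly random photon-to-microvillus assignment, a fixed refractory period and identical unit-amplitude bumps with no Ca2+ or voltage feedback. Even so, it reproduces the peak-then-plateau adaptation described above.

import numpy as np

def simulate_photoreceptor(photon_rate_hz, n_microvilli=30_000,
                           refractory_ms=300.0, duration_ms=1_000.0,
                           dt_ms=1.0, seed=0):
    """Minimal Monte Carlo sketch of refractory quantal sampling.
    Simplifying assumptions (cf. the full models): Poisson photon arrivals,
    random photon-to-microvillus assignment, fixed refractory period,
    identical unit bumps, no Ca2+/voltage feedback."""
    rng = np.random.default_rng(seed)
    available_at = np.zeros(n_microvilli)   # time each microvillus is next usable
    n_steps = int(duration_ms / dt_ms)
    bumps = np.zeros(n_steps)               # quantum bumps initiated per time step
    for step in range(n_steps):
        t = step * dt_ms
        n_photons = rng.poisson(photon_rate_hz * dt_ms / 1000.0)
        for m in rng.integers(0, n_microvilli, size=n_photons):
            if available_at[m] <= t:        # available microvillus -> one unit bump
                bumps[step] += 1
                available_at[m] = t + refractory_ms
            # otherwise the photon is wasted and quantum efficiency falls
    return bumps

# Bright light: activation peaks, then declines to a refractoriness-limited plateau.
response = simulate_photoreceptor(photon_rate_hz=3e5)
print(f"peak: {response.max():.0f} bumps/ms; "
      f"steady state: {response[-200:].mean():.0f} bumps/ms")

With these illustrative numbers, roughly 300 photons/ms initially find mostly available microvilli, but the sustainable rate is capped near n_microvilli/refractory_ms = 100 bumps/ms, mirroring the peak-and-plateau waveform in (C).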
The signal-to-noise ratio and rate of information transfer increase with the average sampling rate, that is, the average number of quantum bumps per unit time. Thus, the more samples that make up the macroscopic response to a given light pattern, the higher its information transfer rate. However, as a photoreceptor counts more photons under brightening stimulation, the information its responses carry about saccadic light patterns of natural scenes first increases and then approaches a constant rate. This is because:
(a) When more microvilli are in a refractory state, more photons fail to generate quantum bumps. As quantum efficiency drops, the equilibrium between used and available microvilli approaches a constant (maximum) quantum bump production rate (sample rate). This process effectively performs division, scaling logarithmic changes in photon rates into macroscopic voltage responses with consistent size and waveforms, thereby maintaining contrast constancy[76,80,90].
(b) Once global Ca2+ and voltage feedbacks saturate, they cannot make quantum bumps any smaller and briefer with increasing brightness.
(c) After initial acceleration from the dark-adapted state, quantum bump latency distribution remains practically invariable in different light-adaptation states[88].
Therefore, when the sample rate modulation (a) and sample integration dynamics (b and c) of the macroscopic voltage responses settle (at intensities >10⁵ photons/s in Drosophila R1-6 photoreceptors), the allocation of visual information in the photoreceptor’s amplitude and frequency range becomes nearly invariable[76,80,103]. Correspondingly, stochastic simulations closely predict measured responses and rates of information transfer[76,79,80]. Notably, when microvilli usage reaches a midpoint (~50% level), the information rate encoded by the macroscopic responses to natural light intensity time series saturates[76]. This is presumably because the sample rate modulation to light increments and decrements - which in the macroscopic response codes for the number of different stimulus patterns[89] - saturates. Quantum bump size, if invariable, does not affect the information transfer rate as long as the quantum bumps are briefer than the stimulus changes they encode. Thus, like any other filter, a fixed bump waveform affects signal and noise equally (Data Processing theorem[76,89]). But varying quantum bump size adds noise; when this variation is adaptive (memory-based), less noise is added[76,89].
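The division in (a) can also be captured analytically. Treating each microvillus as a counter with a fixed dead time τ, and assuming Poisson photon arrivals at total rate λ spread evenly over N microvilli, the steady-state bump rate is approximately R = λ/(1 + λτ/N), saturating at N/τ. This dead-time approximation is our illustrative simplification, not a formula taken from the cited studies:

def steady_state_bump_rate(photon_rate, n_microvilli=30_000, refractory_s=0.3):
    """Dead-time approximation R = lambda / (1 + lambda*tau/N); saturates at
    N/tau, compressing log-spaced intensities into bounded rate changes."""
    return photon_rate / (1 + photon_rate * refractory_s / n_microvilli)

for rate in [1e3, 1e4, 1e5, 1e6, 1e7]:          # log-spaced light intensities
    bump_rate = steady_state_bump_rate(rate)
    print(f"{rate:9.0e} photons/s -> {bump_rate:8.0f} bumps/s "
          f"(quantum efficiency {bump_rate / rate:.3f})")

With these Drosophila-like parameters, quantum efficiency drops through 50% near 10⁵ photons/s - the same regime in which the measured information rate saturates[76].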
In summary, insect photoreceptors count photons through microvilli, integrate the responses, and adapt their macroscopic response based on the basic counting rules and global feedback mechanisms. The information transfer rate increases with the average sampling rate but eventually reaches a constant rate as the brightness of the stimulus increases. The size of the quantum bumps affects noise levels, with adaptive variation reducing noise.
These discoveries underscore how animals have adapted to utilise movements across different scales - from nanoscale molecular dynamics to microscopic brain morphodynamics - to maximise visual information capture and acuity[13]. The new understanding from the Drosophila studies is that, contrary to popular assumptions, neither saccades[52] nor fixations[84] hinder vision. Instead, they work together to enhance visual perception, highlighting the complementary nature of these active sampling movement patterns[13].

2.4. Left and Right Eyes’ Mirror-Symmetric Microsaccades Phase-Enhance Moving Objects

When Drosophila encounters moving objects in natural environments, its left and right eye photoreceptor pairs generate microsaccadic movements that synchronise their receptive field scanning in opposite directions (Figure 4)[13,15]. To quantitatively analyse these morphodynamics, researchers utilised a custom-designed high-speed microscope system[14], tailored explicitly for recording photoreceptor movements within insect compound eyes; an early prototype of this instrument is shown in Figure 2A, while Video 1 demonstrates these experiments. Using infrared illumination, which flies cannot see[14,15,104,105], the positions and orientations of photoreceptors in both eyes were measured, revealing mirror-symmetric angular orientations between the eyes and symmetry within each eye (Figure 4A). It was discovered that a single point in space within the frontal region, where receptive fields overlap (Figure 4B), is detected by at least 16 photoreceptors, eight in each eye[14,15]. This highly ordered mirror-symmetric rhabdomere organisation, leading to massively over-complete tiling of the eyes’ binocular visual fields[15] (Figure 4C), challenges the historical belief that small insect eyes, such as those of Drosophila, are optically too coarse and closely positioned to support stereovision[52].
By selectively stimulating the rhabdomeres with targeted light flashes, researchers determined the specific photomechanical contraction directions for each eye’s location (Figure 4D). Analysis of the resulting microsaccades enabled the construction of a 3D-vector map encompassing the frontal and peripheral areas of the eyes. These microsaccades exhibited mirror symmetry between the eyes and aligned with the rotation axis of the R1-R2-R3 photoreceptor of each ommatidium (Figure 4D, left), indicating that the photoreceptors’ movement directions were predetermined during development (Figure 4A)[14,15]. Strikingly, the 3D-vector map representing the movements of the corresponding photoreceptor receptive fields (Figure 4D) coincides with the optic flow-field generated by the fly’s forward thrust (Figure 4E)[14,15]. This alignment provides microsaccade-enhanced detection and resolution of moving objects (cf. Figure 4C) across the extensive visual fields of the eyes (approximately 360°), suggesting an evolutionary optimisation of fly vision for this intriguing capability.
The microsaccadic receptive field movements comprise a fast phase (Figure 4D, left) aligned with the flow-field direction during light-on (Figure 4D, middle), followed by a slower phase in the opposite direction during light-off (Figure 4D, right). When a fly is in forward flight with an upright head (Figure 4E, left and middle), the fast and slow phases reach equilibrium (Figure 4E, right). The fast phase represents “ground-flow,” while the slower phase represents “sky-flow.” In the presence of real-world structures, locomotion enhances sampling through a push-pull mechanism. Photoreceptors transition between fast and slow phases, thereby collectively improving neural resolution over time[15] (Figure 2C). Fast microsaccades are expected to aid in resolving intricate visual clutter, whereas slower microsaccades enhance the perception of the surrounding landscape and sky[15]. Moreover, this eye-location-dependent, orientation-tuned, bidirectional visual object enhancement makes any moving object that deviates from the prevailing self-motion-induced optic flow field stand out. Insect brains likely utilise the resulting phasic neural image contrast differences to detect or track predator movements or conspecifics across the eyes’ visual fields. For example, this mechanism could help a honeybee drone spot and track the queen amidst a competing drone swarm[107], enabling efficient approach and social interaction.
Rotation (yaw) (Figure 4F, left and middle) further enhances binocular contrasts (Figure 4F, right), with one eye’s phases synchronised with field rotation while the other eye’s phases exhibit the reverse pattern[15]. Many insects, including bees and wasps, engage in elaborately patterned learning or homing flights, involving fast saccadic turns and bursty repetitive wave-like scanning motion when leaving their nest or food sources[108,109] (Figure 4G). Given the mirror symmetry and ultrafast photoreceptor microsaccades of bee eyes[14], these flight patterns are expected to drive enhanced binocular detection of behaviourally relevant objects, landmarks, and patterns, utilising the phasic differences in microsaccadic visual information sampling between the two eyes[13,15]. Thus, learning flight behaviours might make effective use of optic-flow-tuned and morphodynamically enhanced binocular vision, enabling insects to navigate and return to their desired locations successfully.

2.5. Mirror-Symmetric Microsaccades Enable Hyperacute Stereovision

Crucially, Drosophila uses mirror-symmetric microsaccades to sample the three-dimensional visual world, enabling the extraction of depth information (Figure 5). This process entails comparing the resulting morphodynamically sampled neural images from its left and right eye photoreceptors[15]. The disparities in x- and y-coordinates between corresponding “pixels” provide insights into scene depth. In response to light intensity changes, the left and right eye photoreceptors contract mirror-symmetrically, narrowing and sliding their receptive fields in opposing directions, thus shaping neural responses (Figure 5A; also see Figure 2C)[13,15]. By cross-correlating these photomechanical responses between neighbouring ommatidia, the Drosophila brain is predicted to achieve a reliable stereovision range spanning from less than 1 mm to approximately 150 mm[15]. The crucial aspect lies in utilising the responses’ phase differences as temporal cues for perceiving 3D space (Figure 5A,B). Furthermore, researchers assessed whether a static Drosophila eye model with immobile photoreceptors could discern depth[15]. These calculations indicate that the lack of scanning activity by the immobile photoreceptors and the small distance between the eyes (Figure 5A, k = 440 μm) would only enable a significantly reduced depth perception range, underlining the physical and evolutionary advantages of moving photoreceptors in depth perception.
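The proposed readout can be made concrete with a toy computation. Below, the same object crossing evokes identical phasic responses in a left- and a right-eye photoreceptor, offset by a depth-dependent time lag, and cross-correlation recovers that lag. The Gaussian response waveform, time base and lag values are illustrative assumptions, not the published model[15]:

import numpy as np

def recover_disparity(lag_ms, dt_ms=0.1, duration_ms=40.0, width_ms=2.0):
    """Toy model: mirror-symmetric scanning turns object depth into a
    left-right timing difference, recoverable by cross-correlation."""
    t = np.arange(0.0, duration_ms, dt_ms)
    left = np.exp(-0.5 * ((t - 15.0) / width_ms) ** 2)            # left-eye response
    right = np.exp(-0.5 * ((t - 15.0 - lag_ms) / width_ms) ** 2)  # delayed right-eye copy
    xcorr = np.correlate(right, left, mode="full")
    lags = (np.arange(xcorr.size) - (t.size - 1)) * dt_ms
    return lags[np.argmax(xcorr)]

for true_lag_ms in [0.5, 2.0, 5.0]:   # illustrative depth-dependent disparities
    print(f"true disparity {true_lag_ms:.1f} ms -> "
          f"recovered {recover_disparity(true_lag_ms):.1f} ms")

In the full account, the mapping from recovered time lag to metric depth depends on the eyes’ separation and the microsaccade kinematics[15]; here only the lag-recovery step is sketched.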
Furthermore, optical calculations using Fourier beam propagation[15,110] - which models in reverse how light beams pass through the photoreceptor rhabdomeres and the ommatidium lens into the visual space - have confirmed and expanded upon Pick’s earlier and often overlooked discovery[63]. This analysis reveals that the receptive fields of R1-6 photoreceptors from neighbouring fly ommatidia, which feed information to the first visual interneurons (Large Monopolar Cells, LMCs), do not overlap perfectly. Instead, due to variations in the sizes of R1-6 rhabdomeres, their distances from the ommatidial centre, and the non-spherical shape of the eye, their receptive fields tile a small area in the visual space over-completely in neural superposition[15,63] (Figure 5B; see also Figure 4C). In living flies, this situation becomes more complex and interesting as these receptive fields move and narrow independently, as illustrated through computer simulations in Video 2, following the morphodynamic rules of their photoreceptor microsaccades[14,15] (Figure 5B and Text Box 1A). Consequently, this coordinated morphodynamic sampling motion is reflected in the orientation-sensitive hyperacute LMC responses, as observed in high-speed calcium imaging of L2 monopolar cell axon terminals[15] (Figure 5B).
Behavioural experiments in a flight simulator verified that Drosophila possesses hyperacute stereovision[15] (Figure 5C). Tethered head-fixed flies were presented with 1-4° 3D and 2D objects, smaller than their eyes’ average interommatidial angle (cf. Figure 2E). Notably, the flies exhibited a preference for fixating on the minute 3D objects, providing support for the new morphodynamic sampling theory of hyperacute stereovision.
In subsequent learning experiments, the flies underwent training to avoid specific stimuli, successfully showing the ability to discriminate between small (<< 4°) equal-contrast 3D and 2D objects. Interestingly, because of their immobilised heads, flies could not rely on motion parallax signals during learning, meaning the discrimination relied solely on the eyes’ image disparity signals. Flies with one eye painted over failed to learn the stimuli. Moreover, it was discovered that rescuing R1-6 or R7/R8 photoreceptors in blind norpAP24 mutants made their microsaccades’ lateral (sideways) component more vulnerable to mechanical stress or developmental issues, with ∼10% of these mutants displaying microsaccades only monocularly[15]. However, both eyes showed a characteristic electroretinogram response, indicating intact phototransduction and axial microsaccade movement. Flies with normal lateral microsaccades learned to distinguish hyperacute 3D pins from 2D dots and the standard 2D T vs. Ʇ patterns, though less effectively than wild-type flies, showing that R1-6 input suffices for hyperacute stereovision but that R7/R8s also play a role. Conversely, mutants with monocular sideways microsaccades failed to learn 3D objects or 2D patterns, indicating that misaligned binocular sampling impairs 3D perception and learning. R7/R8 rescued norpAP24 and ninaE8 mutants confirmed that inner photoreceptors contribute to hyperacute stereopsis.
These results firmly establish the significance of binocular mirror-symmetric photoreceptor microsaccades in sampling 3D information and that both R1-6 (associated with motion vision[86]) and R7/R8 (associated with colour vision[112]) photoreceptor classes contribute to hyperacute stereopsis. The findings provide compelling evidence that mirror-symmetric microsaccadic sampling, as a form of ultrafast neural morphodynamics, is necessary for hyperacute stereovision in Drosophila[15].

2.6. Microsaccade Variability Combats Aliasing

The heterogeneous nature of the fly’s retinal sampling matrix eliminates spatiotemporal aliasing[13,15] (Figure 6B), enabling reliable transmission of visual information. This heterogeneity comprises varying rhabdomere sizes[13], random distributions of visual pigments[113], variations in photoreceptor synapse numbers[3] (Figure 6A), the stochastic integration of single photon responses (quantum bumps)[13,76,78,79,81,88], and stochastic variability in microsaccade waveforms[13,14,15]. Such reliable encoding from variable samples aligns with the earlier examples (Figure 2C–D, Figure 3D and Figure 5B) and touches on Francis Galton’s idea of vox populi[114]: “The mean of variable samples, reported independently by honest observers, provides the best estimate of the witnessed event”[90]. Consequently, the morphodynamic information sampling theory[13,15] challenges previous assumptions of static compound eyes[52], which suggested that the ommatidial grid of immobile photoreceptors structurally limits spatial resolution, rendering the eyes susceptible to undersampling the visual world and prone to aliasing[52].
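The anti-aliasing benefit of such heterogeneity is easy to demonstrate numerically. In the sketch below, a grating above the Nyquist limit of a regular sampling grid folds into a crisp, spurious low frequency, whereas randomly jittered sample positions - a crude stand-in for the retina’s heterogeneous sampling matrix - spread that energy into broadband noise. Grid size, jitter range and frequencies are arbitrary illustrative choices, and exact values depend on the random seed:

import numpy as np

rng = np.random.default_rng(1)
n, spacing = 256, 1.0
x_regular = np.arange(n) * spacing
x_jittered = x_regular + rng.uniform(-0.5, 0.5, n)   # heterogeneous positions

f_grating = 0.8                                      # above the Nyquist limit (0.5)
f_alias = abs(f_grating - 1.0 / spacing)             # a regular grid folds it to 0.2

for name, x in [("regular", x_regular), ("jittered", x_jittered)]:
    samples = np.sin(2 * np.pi * f_grating * x)
    spectrum = np.abs(np.fft.rfft(samples)) / n
    freqs = np.fft.rfftfreq(n, d=spacing)
    alias_amp = spectrum[np.argmin(np.abs(freqs - f_alias))]
    print(f"{name:>8} grid: amplitude at the alias frequency = {alias_amp:.3f}")

The regular grid reports a strong, coherent - and false - low-frequency grating; the jittered grid reports mostly incoherent noise, which downstream temporal integration can average away.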
Supporting the new morphodynamic theory[13,15], tethered head-fixed Drosophila exhibit robust optomotor behaviour in a flight simulator system (Figure 6C). The flies generate yaw torque responses (blue traces), indicating their intention to follow the left or right turns of the stripe panorama. These responses are believed to be a manifestation of an innate visuomotor reflex aimed at minimising retinal image slippage[52,118]. Consistent with Drosophila’s hyperacute ability to differentiate small 3D and 2D objects[15], as shown earlier in Figure 5C, the tested flies reliably responded to rotating panoramic black-and-white stripe scenes with hyperacute resolution, tested down to 0.5° resolution[13,15]. This resolution is about ten times finer than the eyes’ average interommatidial angle (cf. Figure 2E), significantly surpassing the explanatory power of the traditional static compound eye theory[52], which predicts 4°-5° maximum resolvability.
However, when exposed to slowly rotating 6.4-10° black-and-white stripe waveforms, a head-fixed tethered Drosophila displays reversals in its optomotor flight behaviour[15] (Figure 6C). Previously, this optomotor reversal was thought to result from the static ommatidial grid spatially aliasing the sampled panoramic stripe pattern due to the stimulus wavelength being approximately twice the eyes’ average interommatidial angle. Upon further analysis, the previous interpretation of these reversals as a sign of aliasing[35,52] is contested. Optomotor reversals primarily occur at 40-60°/s stimulus velocities, matching the speed of the left and right eyes’ mirror-symmetric photoreceptor microsaccades[15] (Figure 6C; cf. Figure 2C). As a result, one eye’s moving photoreceptors are more likely to be locked onto the rotating scene than those in the other eye, which move against the stimulus rotation. This discrepancy creates an imbalance that the fly’s brain may interpret as the stimulus rotating in the opposite direction[15].
Notably, the optomotor behaviour returns to normal when the tested fly has monocular vision (with one eye covered) and during faster rotations[15] or finer stripe pattern waveforms[13,15] (Figure 6C). Therefore, the optomotor reversal, which arises only under specific, unnatural stimulus conditions in head-fixed, position-constrained flies, must reflect high-order processing of binocular information; it cannot be attributed to spatial sampling aliasing, which would occur independently of stimulus velocity and of which eye views the stimulus[15].

2.7. Multiple Layers of Active Sampling vs Simple Motion Detection Models

In addition to photoreceptor microsaccades, insects possess intraocular muscles capable of orchestrating coordinated oscillations of the entire retinal photoreceptor array[13,15,35,119] (Figure 6D). This global motion has been proposed as a means to achieve super-resolution[120,121], but not stereopsis. While the muscle movements alter the light patterns reaching the eyes and thereby trigger photoreceptor microsaccades, it is the combination of local microsaccades and global retina movements, together with any body and head movements[33,55,109,122] (Figure 6D), that collectively governs the eyes’ active sampling of stereoscopic information[13,14,15].
The Drosophila brain effectively integrates depth and motion computations using mirror-symmetrically moving image disparity signals from its binocular vision channels[15]. During goal-oriented visual behaviours, coordinated muscle-induced vergence movements of the left and right retinae[35], a phenomenon also observed in larger flies walking on a trackball[36,37], likely further extend the stereo range by drawing bordering photoreceptors into the eye’s binocular region (cf. Text Box 1B iv). Interestingly, in fully immobilised Drosophila, which rarely shows these retinal movements[13,14,15], the photoreceptor microsaccade amplitudes characteristically fluctuate more during repeated light stimuli than the corresponding intracellular voltage responses[13,15]. This suggests that, in addition to retinal movements, the fly brain might exert top-down control over retinal muscle tone and tension, thereby modulating the lateral range of photomechanical microsaccades through retinal strain adjustments. This interaction between retinal muscles and photoreceptor microsaccades could ultimately facilitate attentive accommodation, allowing the fly to precisely focus its hyperacute gaze on specific visual targets, analogous to how vertebrate lens eyes use ciliary muscles to fine-tune focus[123].
Conversely, by maximally tensing or relaxing the retinal muscles - and thus the retinae - a fly might be able to fully suppress the microsaccades’ lateral movement, as suggested by studies involving optogenetic activation or genetic inactivation of retinal motor neurons[35]. While photomechanical microsaccades are robust and occur without muscle involvement, as observed in dissociated photoreceptors in a Petri dish[13], their lateral movement range can be physically constrained by increasing the stiffness of the surrounding medium. For example, in spam-mutant eyes, where the open rhabdom of R1-8 photoreceptors reverts to an ancestral fused-rhabdom state[124,125], microsaccade kinematics are similar to those of wild-type photoreceptors, but their displacement range is reduced by the increased structural stiffness of the fused rhabdom[14]. If maximal tensing or relaxing of the retinal muscles were linked to top-down synaptic inhibition of photoreceptor signals - potentially mediated by centrifugal GABAergic C2/C3 fibres[126] from the brain that innervate the photoreceptors[3] - this centralised visual information suppression (“closing the eyes”) could serve to minimise environmental interference and the eyes’ energy consumption during sleep.
These findings and new ideas about fast and complex motion-based interactions in visual information sampling and processing challenge the traditional view that insect brains rely on low-resolution input from static eyes for high-order computations. For instance, idealised reduced input-output motion detection models[86,127,128], such as the Hassenstein-Reichardt[129] and Barlow-Levick[130] models, require updates to incorporate ultrafast morphodynamics[13,14,15], retinal muscle movements[35] and state-dependent synaptic processing[64,70,73,74,131]. These updates are crucial as these processes actively shape neural responses, perception, and behaviours[64], providing essential ingredients for hyperacute attentive 3D vision[13,15,48,50] and intrinsic decision-making[13,15,132,133,134] that occur in a closed-loop interaction with the changing environment.
Accumulating evidence, consistent with the idea that brains reduce uncertainty by synchronously integrating multisensory information[135], further suggests that object colour and shape information partially streams through the same channels previously thought to be solely for motion information[105]. Consequently, individual neurons within these channels should engage in multiple parallel processing tasks[105], adapting in a phasic and goal-oriented manner. These emerging concepts challenge oversimplified input-output models of insect vision, highlighting the importance of complex interactions between local ultrafast neural morphodynamics and global active vision strategies in perception and behaviour.

3. Benefits of Neural Morphodynamics

Organisms have adapted to the quantal nature of the dynamic physical world, resulting in ubiquitous active use of quantal morphodynamic processes and signalling within their constituent parts, as we highlighted in Figure 1 in the Introduction. Besides enhancing information sampling and flow in sensory systems for efficient perception[6,7,8,9,13,14,15,16], we propose that ultrafast neural morphodynamics likely evolved universally to facilitate effective communication across nervous systems[10,11,12,24]. By aligning with the principles of thermodynamics and information theory that govern the moving world[89,136,137], the evolution of nervous systems harnesses neural morphodynamics to optimise perception and behavioural capabilities, ensuring efficient adaptation to the ever-changing environment. The benefits of ultrafast morphodynamic neural processing are substantial and encompass the following:

3.1. Efficient Neural Encoding of Reliable Representations across Wide Dynamic Range

Neural communication through synapses relies on rapid pre- and postsynaptic ultrastructural movements[26] that facilitate efficient quantal release and capture of neurotransmitter molecules[20,23,24]. These processes share similarities with how photoreceptor microvilli have evolved to utilise photomechanical movements[13,14,15] with quantal refractory photon sampling[13,76,79] to maximise information in neural responses (e.g. Figure 3D). In both systems, ultrafast morphodynamics are employed with refractoriness to achieve highly accurate sampling of behaviourally or intrinsically relevant information, rapidly adapting their quantum efficiency to vastly varying sample rates (photons vs neurotransmitter molecules)[13,15,76,79].
In synaptic transmission (e.g. Figure 1D,E), presynaptic transmitter vesicles are actively transported to docking sites by molecular motors[19]. Within these sites, vesicle depletion occurs through ultrafast exocytosis, followed by replenishment via endocytosis[24]. These processes generate ultrastructural movements, vesicle queuing and refractory pauses[19]. Such movements and pauses occur as vesicle numbers, and potentially their sizes[20], adapt to sensory or neural information flow changes. Given that each spherical vesicle contains many neurotransmitter molecules that are released at once, the transmitter concentration - the carrier of information - can change over log units from one moment to the next. Conversely, the adaptive morphodynamic processes at the postsynaptic sites involve rapid movements of dendritic spines[11] (cf. Figure 1C) or transmitter-receptor complexes[19] (e.g. Figure 1E). These ultrastructural movements likely facilitate efficient sampling of information from the rapid changes in neurotransmitter concentration, enabling swift and precise integration of macroscopic responses from the sampled postsynaptic quanta[20,23].
Interestingly, specific circuits have evolved to integrate synchronous high signal-to-noise information from multiple adjacent pathways, thereby enhancing the speed and accuracy of phasic signals[20,23,25,45,138]. This mechanism is particularly beneficial for computationally challenging tasks, such as distinguishing object boundaries from the background during variable self-motion. For instance, in the photoreceptor-LMC synapse (Figure 7), the fly eye exhibits neural superposition wiring[111], allowing each LMC to simultaneously sample and integrate quantal events from six neighbouring photoreceptors (R1-6), driven by the morphodynamics detailed in Figure 5B and Video 2. Because the receptive fields of these photoreceptors only partially overlap and move in slightly different directions during microsaccades, each photoreceptor conveys a distinct phasic aspect of the visual stimulus to the LMCs[15] (L1-3; cf. Figure 6A that illustrates their synaptic dispersion). The LMCs actively differentiate these inputs, resulting in rapidly occurring phasic responses with notably high signal-to-noise ratios, particularly at high frequencies[20,23,25,45].
Moreover, in this system, coding efficiency improves dynamically by adaptation (Figure 7A), which swiftly flattens and widens the LMC’s amplitude and frequency distributions over time[20,23] (Figure 7B), improving sensitivity to under-represented signals within seconds. Such performance implies that LMCs strive to utilise their output range equally in different situations since a message in which every symbol is transmitted equally often has the highest information content[139]. Here, an LMC’s sensory information is maximised through pre- and postsynaptic morphodynamic processes, in which quantal refractory sampling jointly adapts to light stimulus changes, dynamically adjusting the synaptic gain (Figure 7A; see the R-LMC joint probability at each second of stimulation, where the slope of the white line indicates the dynamic gain change).
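The information-theoretic intuition is easily verified: an output range used uniformly carries maximal entropy. The sketch below contrasts a static linear encoder with one that maps inputs through their empirical cumulative distribution (histogram equalisation) - a static caricature of the dynamic gain adaptation described above. The lognormal input statistics and the 32 response levels are illustrative assumptions:

import numpy as np

def entropy_bits(responses, n_levels=32):
    """Shannon entropy of a discretised response distribution."""
    counts, _ = np.histogram(responses, bins=n_levels)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
contrasts = rng.lognormal(mean=0.0, sigma=0.6, size=20_000)  # skewed 'natural' inputs

linear_out = contrasts / contrasts.max()      # a static encoder keeps the input skew
ranks = np.argsort(np.argsort(contrasts))     # empirical-CDF mapping instead
equalised_out = ranks / (len(contrasts) - 1)  # uses every output level equally often

print(f"linear encoder:    {entropy_bits(linear_out):.2f} bits")
print(f"equalised encoder: {entropy_bits(equalised_out):.2f} bits (maximum = 5.00)")

The equalised encoder approaches the 5-bit ceiling of 32 levels, whereas the linear encoder wastes much of its range on rarely used symbols.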
Comparable to LMCs, dynamic adaptive scaling for information maximisation has been shown in blowfly H1 neurons’ action potential responses (spikes) to changing light stimulus velocities[140]. These neurons reside in the lobula plate optic lobe, deeper in the brain, at least three synapses away from LMCs. Therefore, it is possible that H1s’ adaptive dynamics partly reflect the earlier morphodynamic information sampling in the photoreceptor-LMC synapse, or that adaptive rescaling is a general property of all neural systems[141,142]. Nevertheless, the continuously adapting weighted average of the six variable photoreceptor responses, reported independently to LMCs, combats noise and may carry the best (most accurate unbiased) running estimate of the ongoing light contrast signals. This dynamic maximisation of sensory information is distinct from Laughlin’s original concept of static contrast equalisation[143]. The latter is based on stationary image statistics of a limited range and necessitates an implausible static synaptic encoder system[45] that imposes a constant synaptic gain. Furthermore, Laughlin’s model does not address the issue of noise[45].
Thus, ultrafast morphodynamics actively shapes neurons’ macroscopic voltage response waveforms, maximising information flow. These adaptive dynamics affect both the presynaptic quantal transmitter release and the postsynaptic integration of the sampled quanta, influencing the underlying quantum bump size, latency, and refractory distributions[13]. Advantageously, intracellular microelectrode recordings in vivo provide a means to estimate these distributions statistically with high accuracy[20,23,78,88]. Knowing these distributions and the number of sampling units obtained from ultrastructural electron microscopy data, one can accurately predict neural responses and their information content for any stimulus pattern[23,78,79,88,89]. These 4-parameter quantal sampling models, which avoid the need for free parameters[13,15,76,80,81], have been experimentally validated[13,15,20,76,78,80,81,88,89] (Figure 7C), providing a biophysically realistic multiscale framework for understanding the involved neural computations[13,81].
From a computational standpoint, a neural sampling or transmission system exerts adaptive quantum efficiency regulation that can be likened to division (see Text Box 2). Proportional quantal sample counting is achieved through motion-enhanced refractory transmission, refractory sampling units, or their combination. This refractory adaptive mechanism permits a broad dynamic range, facilitating response normalisation through adaptive scaling and integration of quantal information[13,15,79]. Consequently, noise is minimised, leading to enhanced reliability of macroscopic responses[13,79].
Therefore, we expect that this efficient information maximisation strategy, which has demonstrated signal-to-noise ratios reaching several thousand in insect photoreceptors during bright saccadic stimulation[13,79] (Figure 7C), will serve as a fundamental principle for neural computations involving the sampling of quantal bursts of information, such as neurotransmitter or odorant molecules. In this context, it is highly plausible that the pre- and postsynaptic morphodynamic quantal processes of neurons have co-adapted to convert logarithmic sample rate changes into precise phasic responses with limited amplitude and frequency distributions[13,20], similar to the performance seen in fly photoreceptors[13,15,76,78,88] and first visual interneurons, LMCs[20,23]. Hence, ultrafast refractory quantal morphodynamics may represent a prerequisite for efficiently allocating information within the biophysically constrained amplitude and frequency ranges of neurons[89,137,144].

3.2. Predictive Coding and Minimal Neural Delays

Hopfield and Brody initially proposed that brain networks might employ transient synchrony as a collective mechanism for spatiotemporal integration in action potential communication[145]. Interestingly, morphodynamic quantal refractory information sampling and processing may offer the means to achieve this general coding objective.
Neural circuits incorporate predictive coding mechanisms that leverage mechanical, electrical, and synaptic feedback to minimise delays[13,15,79]. This processing, which enhances phasic inputs, synchronises the flow of information right from the first sampling stage[13,20,45]. It time-locks activity patterns into transient bursts whose temporal structure scales with stimulus dynamics, as observed in Drosophila photoreceptors’ and LMCs’ voltage responses to accelerating naturalistic light patterns (Figure 7D,E)[13,45,145]. Such phasic synchronisation and scalability are crucial for the brain to efficiently recognise and represent the changing world, irrespective of the animal’s self-motion, and to predict and lock onto its moving patterns. As a result, perception becomes more accurate, and behavioural responses to dynamic stimuli are prompt.
Crucially, this adaptive scalability of phasic, graded potential responses is readily translatable to sequences of action potentials (Figure 7E, cf. the scalable spike patterns predicted from the LMC responses). Thus, ultrafast neural morphodynamics may contribute to our brain’s intrinsic ability to effortlessly capture the same meaning from a sentence, whether spoken very slowly or quickly. This dynamic form of predictive coding, which time-locks phasic neural responses to moving temporal patterns, differs markedly from the classic concept of interneurons using static centre-surround antagonism within their receptive fields to exploit spatial correlations in the natural scenes[146].
Reinforcing the idea of fast morphodynamic synchronisation as a general phenomenon, minimal-delay phasic responses are observed deeper in the brain. In experiments involving tethered flying Drosophila, electrical activity patterns recorded from their lobula/lobula plate optic lobes[64] - located at least three synapses downstream from photoreceptors - exhibit remarkably similar minimal-delay responses to those seen in LMCs (Figure 8A). These first responses emerge well within 20 ms of stimulus onset[64]. Such rapid signal transmission through multiple neurons and synapses challenges traditional models that rely on stationary eye and brain circuits with significant synaptic (chemical) and signal integration and conduction (electrical) delays.
Thus, neural processing in vivo appears more synchronised and holistic, with signals being processed in a more integrated manner across different parts of the brain. This is also reflected by the brain’s broadly distributed dynamic energy usage during activity[147]. Instead of neurons conveying information sequentially like falling dominos, neural morphodynamics and multidirectional tonic synaptic operations connect the “neural dominos” with interlinked “strings” (push and pull mechanisms), causing them to fall together. This synchronised minimal-delay information processing across the brain - from sensing to decision-making - is likely a prerequisite for complex behaviours in real time.
Moreover, in vivo high-speed X-ray imaging[15] has revealed synchronised phasic movements across the Drosophila optic lobes following the rapid microsaccades of light-activated photoreceptors (Figure 8B). Synchronised tissue movements have also been observed during 2-photon imaging of optic lobe neurons[15] (Figure 8C). In the past, such movements have often been dismissed as motion artefacts, with researchers making considerable efforts to eliminate them from calcium imaging data collection.
The absence of phasic amplification and synchronisation of signals through morphodynamics would have detrimental effects on communication speed and accuracy, resulting in slower perception and behavioural responses. It would significantly prolong the time it takes for visual information from the eyes to reach decision-making circuits, increasing uncertainty and leading to a decline in overall fitness. Thus, we expect the inherent scalability of neural morphodynamic responses (as demonstrated in Figure 7D) to be crucial in facilitating efficient communication and synchronisation among different brain regions, enabling the coordination required for complex cognitive processes.
We propose that neurons exhibit morphodynamic jitter (stochastic oscillations) at the ultrastructural level, sensitising the transmission system to achieve these concerted efforts. By enhancing phasic sampling, such jitter could minimise delays across the whole network, enabling interconnected circuits to respond in sync to changes in information flow, actively co-differentiating the relevant (or attended) message stream[64]. Similarly, jitter-enhanced synchronisation could involve linking sensory (bottom-up) information about a moving object with the prediction (efference copy[148,149]) of movement-producing signals generated by the motor system, or with a top-down prediction of the respective self-motion[72]. Their difference signal, or prediction error, could then be used to rectify the animal’s self-motion more swiftly than without the jitter-induced delay minimisation and synchronous phase enhancement, enabling faster behavioural responses.
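This jitter proposal echoes stochastic resonance, in which moderate noise allows a subthreshold signal to drive a thresholding unit. The toy sketch below makes that point only; the sinusoidal input, threshold, trial count and noise levels are illustrative assumptions rather than measured values, and exact correlations vary with the random seed:

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1_000)
signal = 0.8 * np.sin(2 * np.pi * 5 * t)   # subthreshold input (never crosses 1.0)
threshold = 1.0

def signal_transfer(jitter_sd, n_trials=20):
    """Correlation between the input and the trial-averaged thresholded output."""
    jitter = rng.normal(0.0, jitter_sd, size=(n_trials, t.size))
    firing = (signal + jitter > threshold).mean(axis=0)   # firing probability
    if firing.std() == 0.0:
        return 0.0                                        # silent output: nothing transferred
    return np.corrcoef(firing, signal)[0, 1]

for sd in [0.0, 0.1, 0.4, 5.0]:
    print(f"jitter sd = {sd:3.1f} -> input/output correlation = {signal_transfer(sd):.2f}")

Without jitter nothing is transmitted; moderate jitter time-locks the output to the signal’s phase; excessive jitter degrades transmission again - a non-monotonic profile consistent with tuned, rather than maximal, stochasticity.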
Historically, there has been significant interest in understanding how field potentials - transient electrical signals generated in neurons and surrounding cells through collective activity - convey or reflect synchronous brain activity, especially as frequency bands vary with an animal’s activity state[150,151,152]. Specific low-frequency bands characterise different stages of sleep[152] - ranging from Delta (0.5-4 Hz) to Beta (13-30 Hz) - while Gamma-frequency activity (30-150 Hz) is consistently observed during selective attention across species from insects[64,66,74] to humans. Recently, cytoelectric coupling[153,154] has been proposed to explain these phenomena. Regardless of the exact mechanisms, it is plausible that neural morphodynamics closely participates in this network activity or plays a synergistic role.
We also expect ultrafast morphodynamics to contribute to multisensory integration by temporally aligning inputs from diverse sensory modalities with intrinsic goal-oriented processing (Figure 8D). This cross-modal synchronisation enhances behavioural certainty[135,155]. Using synchronised phasic information, a brain network can efficiently integrate yellow colour, shuttle-like shape, rough texture, and sweet scent into a unique neural representation, effectively identifying a lemon amidst the clutter and planning an appropriate action. These ultrafast combinatorial and distributed spatiotemporal responses expand the brain’s capacity to encode information, increasing its representational dimensionality[156] beyond what could be achieved through slower processing in static circuits. Thus, the phasic nature of neural morphodynamics may enable animals to think and behave faster and more flexibly.

3.3. Anti-Aliasing and Robust Communication

Neural morphodynamics incorporates anti-aliasing sampling and signalling mechanisms within the peripheral nervous system[13,15,157] to prevent the distortion of sensory information. Like Drosophila compound eyes, photoreceptors in the primate retina exhibit varying sizes[158], movements[7] and partially overlapping receptive fields[159]. Along with stochastic rhodopsin choices[117] (cf. Figure 5B, inset) and microstructural and synaptic variations[160], these characteristics should create a stochastically heterogeneous sampling matrix free of spatiotemporal aliasing[13,15,76,81]. By enhancing sampling speed and phasic integration of changing information through heterogeneous channels, ultrafast morphodynamics reduces ambiguity in interpreting sensory stimuli and enhances the brain’s “frame-rate” of perception. Such clear evolutionary benefits suggest that analogous morphodynamic processing would also be employed in central circuit processes for thinking and planning actions.
Furthermore, the inherent flexibility of neural morphodynamics using moving sampling units to collect and transmit information should help the brain maintain its functionality even when damaged, thus contributing to its resilience and recovery mechanisms. By using oscillating movements to enhance transmission and parallel information channels streaming overlapping content[105], critical phasic information could potentially bypass or reroute around partially damaged neural tissue. This morphodynamic adaptability equips the brain to offset disturbances and continue information processing. As a result, brain morphodynamics ensures accurate sensory representation and bolsters neural communication’s robustness amidst challenges or impairments.

3.4. Efficiency of Encoding Space in Time

Neural morphodynamics boosts the efficiency of encoding space in time[13,15], allowing smaller mobile sense organs - like compact compound eyes with fewer ommatidia - to achieve spatial resolution equivalent to larger stationary sense organs (cf. Figure 5C and Figure 6C). The resulting ultrafast phasic sampling and transmission expedite sensory processing, while the reduced signal travel distance promotes faster perception and more efficient locomotion and decreases energy consumption. Therefore, we can postulate that between two brains of identical size, if one incorporates ultrafast morphodynamics across its neural networks while the other does not, the brain using morphodynamics has the higher information processing capacity. Its faster and more efficient information processing should enhance cognitive abilities and decision-making capabilities. In this light, for evolution to select neural morphodynamics as a pathway for optimising brains would be a no-brainer.

3.5. Expanding Dimensionality in Encoding Space for Cognitive Proficiency

But how do insects, with their tiny brains usually containing fewer than a million neurons, short adult lifespans - with some living just days or weeks - and limited learning opportunities, develop sophisticated cognitive abilities? How might neural morphodynamics contribute to balancing genetic predispositions and environmental influences to optimise the use of their tiny brains? The world’s object feature space (input space) is vastly larger than the number of neurons (output space) in the insect brain. Therefore, to efficiently map inputs to outputs, their brain circuits must perform space-saving and cost-efficient encoding, where single neurons contribute to multiple network functions [161]. In other words, the circuits must map object information into combinatorial and distributed feature representations to expand the encoding space. Doing this quickly by phasic (morphodynamic, and thus hyperacute) neural responses expands the networks’ dimensionality in encoding space beyond any static system of equal size.
Using this framework, we can consider, for example, that the form/function relationships of the insect central complex[162] (involved in navigation) and mushroom body[163] (involved in visual learning) circuits are physical manifestations of the algorithms they execute. Genetic information may establish the circuits’ x- and y-coordinates within the visual space, while the object’s binocular timing differences provide its z-coordinate (depth) and size, reflected as a neural activity pattern on these circuits. As the object moves, this activity pattern moves phasically in sync with the object, assisted by morphodynamics to maintain high spatiotemporal sampling resolution while ensuring the representations remain associable and generalisable. The circuits map the object feature space so that similar objects generate similar activity patterns, while different objects generate distinct patterns - comparable to Kohonen’s self-organising neural projections[164] and the perceptual colour map in the macaque visual area V4 [165]. The central complex circuits map object position and orientation (“Where”), while the mushroom body circuits map independent chromatic components and context (“What”). Although the varying roles of central brain circuits in navigation and learning are being progressively mapped and modelled [135,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182], the synchronous encoding patterns and morphodynamic activity across the circuits have not been extensively studied.
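For readers unfamiliar with the reference, the mapping principle invoked here - similar inputs coming to activate neighbouring units - is captured by Kohonen’s self-organising map[164]. The sketch below is a minimal one-dimensional version with scalar stimulus features; it is purely illustrative and makes no claim about how central-complex or mushroom-body circuits implement such maps:

import numpy as np

rng = np.random.default_rng(3)
n_units, n_steps = 20, 2_000
weights = rng.uniform(0.0, 1.0, n_units)        # each unit's preferred feature value

for step in range(n_steps):
    x = rng.uniform(0.0, 1.0)                   # an incoming stimulus feature
    winner = np.argmin(np.abs(weights - x))     # best-matching unit
    lr = 0.5 * (1.0 - step / n_steps)           # decaying learning rate
    sigma = 0.5 + 3.0 * (1.0 - step / n_steps)  # shrinking neighbourhood width
    dist = np.abs(np.arange(n_units) - winner)
    pull = np.exp(-dist**2 / (2.0 * sigma**2))  # neighbours move with the winner
    weights += lr * pull * (x - weights)

# After training, similar features activate neighbouring units:
ordered = np.all(np.diff(weights) > 0) or np.all(np.diff(weights) < 0)
print("preferred features:", np.round(weights, 2))
print("topology preserved:", bool(ordered))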
Genetic information plays a crucial role in constructing the brain’s representation of the world during development and maturation, enabling impressive goal-oriented innate behaviours. Neurophysiological studies have even identified an insect’s body coordinates relative to environmental patterns within the seemingly disorganised “neural spaghetti” of optic glomeruli[183]. However, adult insect brains, such as those of bees, retain plasticity and the capacity for complex learning, even allowing for the transfer of cultural information (see examples in Text Box 3). These cognitive feats cannot be adequately explained by conventional input-output models of neural information processing or by neurons’ static structure/function relationships.
Supporting this, studies in Drosophila have shown that the same neural circuits can serve multiple functions depending on the activity state. These range from representing the world to planning future actions and behaviours[64,69,71,72] - efficiently utilising networks to balance bottom-up sensory input and top-down stored executive information in driving behaviour. In other words, Drosophila may simultaneously process interacting thoughts and perceptions as phasic morphodynamic activity in the same space-distributed memory[184] networks. Consequently, it is conceivable that we may soon discover that insect brain networks capable of recognising objects also participate in planning and even dreaming, similar to the bidirectional information flow observed in the human visual cortex[185,186,187].
By harnessing and fine-tuning genetic information, neural morphodynamics efficiently capture sensory input and utilise predetermined retinotopic or body-centric feature maps. This process facilitates perception and enables the emergence of sophisticated insect behaviour, such as learning to differentiate objects using hyperacute stereovision[15]. We propose that the morphodynamic, electrical, and chemical interplay within neurons ‘animates’ the brain’s representation of the world (Text Box 3). This dynamic structure/function interaction gives rise to thoughts and perceptions that continuously shape and transform neural landscapes while maintaining the capacity for impressive cognitive abilities and efficient environmental learning.
Text Box 3. Challenging Static Models of Insect Cognition. Insects, with their short lifespans and miniaturised neural architectures, were once regarded as simple automatons, restricted to executing pre-programmed responses to specific stimuli. This view aligned with static cognitive models, which have since been called into question. Recent research has demonstrated that these models fail to adequately explain the sophisticated cognitive abilities exhibited by insects (each panel shows data adapted from the cited papers).
[Text Box 3 figure: panels (A–C) show data adapted from the cited papers.]
(A) Acquisition of Novel Behaviour. Insects can acquire new behaviours through two primary pathways: individual trial-and-error learning and social learning from knowledgeable conspecifics. In both cases, neural morphodynamics likely play a crucial role in enabling the brain to efficiently adapt and reconfigure its local structures, optimising learning performance in response to changing environmental conditions and experiences.
The use of non-natural paradigms - situations that insects would not typically encounter in their natural environments - further emphasises the non-innate nature of these behaviours. Despite their unfamiliarity with the following tasks, insects demonstrated remarkable adaptability and learning capacities:
i. Bumblebees can learn to pull strings to obtain out-of-reach rewards, both through individual learning and social transmission[188].
ii. Bumblebees can socially learn a complex two-step behaviour that they cannot learn individually - previously thought to be a human-exclusive capability underlying our species’ cumulative culture[189].
iii. In laboratory settings, bumblebees can acquire local variations of novel behaviours as a form of culture, even though such behaviours are not observed in the wild[190].
(B) Flexible Optimisation of Behaviour. In response to changing environmental demands, insects can successfully and flexibly optimise their behaviour to improve their fitness.
i. Ants (Aphaenogaster spp.) select and modify tools based on their soaking properties and the viscosity of food sources. They not only learn to use novel objects like sponges and paper as tools but also modify these objects by tearing them into smaller, more manageable pieces[191,192].
ii. Bumblebees optimise their foraging routes between multiple locations, effectively solving the “travelling salesman problem” by reducing flight distance and duration with experience[193].
iii. Bumblebees can be trained to roll a ball to a marked location for a reward. After observing knowledgeable conspecifics, they not only learn this behaviour but also generalise it to novel balls, preferring the more efficient option even if this differed from the option used by their conspecifics[194].
(C) Integration of Information Across Multiple Sensory Modalities. The ability to recognise objects across different sensory modalities is inherently adaptive[195,196], leading to richer and more accurate environmental representations. This cognitive ability likely plays a role in the processes described in sections A and B.
i. Bumblebees (Bombus terrestris) can recognise three-dimensional objects, such as spheres and cubes, by touch if they have only seen them and by sight if they have only touched them[155].
ii. Honeybees (Apis mellifera) can interpret the waggle dance of successful foragers in darkness by detecting the dancer’s movements with their antennae. They then translate these movements into an accurate flight vector encoding distance and direction relative to the sun[195].
Nevertheless, given an adult insect’s brief life and limited opportunities for direct learning, most insect brain functions and behaviours must be pre-shaped during development. This suggests that genetic information plays an instructive role in guiding the physiology and development of neural networks by simulating environmental conditions, helping these networks organise and adapt effectively. Through this gene-driven “virtual reality training,” bound by real-world regularities[46,137,197], an organism’s self-focused functionality gradually emerges under conscious control. These processes likely involve morphodynamics to accelerate and synchronise neural representations for various behavioural eventualities, essentially “seeding” spatiotemporal acuity in innate intelligence. Environmental experiences during development, particularly in the larval phase and through metamorphosis, may further refine the emerging adult’s brain map of the world, enhancing its ability to navigate and respond to its surroundings. This combination of genetic knowledge (“ancestral memories”), which establishes the brain’s world map with interconnected local feature maps and drives morphodynamic neural activity (“intrinsic simulations”) to self-organise them, together with tuning by the embryonic environment during development, may explain the proficiency of innate behaviours. For example, a worker bee can perform complex tasks with minimal or no external training, and similarly, in mammals, a newborn calf instinctively knows how to stand, find its mother, and begin suckling.

4. Future Avenues of Research

4.1. Investigating the Integration of Ultrafast Morphodynamic Changes

One area of interest is understanding how the brain and behaviour can effectively synchronise with rapid morphodynamic changes, such as adaptive quantal sample-rate modifications within the sensory receptor matrix and synaptic information transfer. A fundamental question is how neural morphodynamics enhances the efficiency and speed of synaptic signal transmission. Is there a morphodynamic adaptation of synaptic vesicle sizes and quantities[20] that maximises information transfer? Plausibly, vesicle sizes and numbers adapt morphodynamically to ensure efficient information transfer, potentially using a running memory of previous activity to scale transmitter molecule quantities to changes in environmental information (cf. Figure 7A,B). This adaptive process might involve rapid exo- or endocytosis-linked movements of transmitter-receptor complexes. It is also worth exploring how brain morphodynamics adaptively regulates the synaptic cleft and the proximity of neurotransmitter receptors to optimise signal transmission.
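To make this idea concrete, the minimal sketch below (Python/NumPy) scales quantal release against an exponential running memory of recent presynaptic activity. The adaptation rule, time constant, and gain are our illustrative assumptions, not measured mechanisms:

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_quantal_release(presynaptic_rate, tau=50.0, gain=0.5):
    """Toy model: quantal output scales against a running memory of
    recent presynaptic activity (hypothetical adaptation rule)."""
    memory = float(presynaptic_rate[0])
    released = np.empty(len(presynaptic_rate))
    for t, rate in enumerate(presynaptic_rate):
        memory += (rate - memory) / tau          # exponential running average
        # strong recent activity lowers the effective gain, keeping vesicle
        # (and transmitter) use within a bounded operating range
        released[t] = rng.poisson(gain * rate / (1.0 + memory))
    return released

# 1-ms steps; a 5-fold step increase in presynaptic drive
drive = np.concatenate([np.full(200, 5.0), np.full(200, 25.0)])
out = adaptive_quantal_release(drive)
print("mean quanta before/after step:", out[:200].mean(), out[200:].mean())
```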

4.2. Genetically Enhancing Signalling Performance and Speed

Another avenue of research is whether signalling performance and speed can be genetically enhanced to control behaviours. By manipulating genes to change neurons’ physical properties, such as increasing the number of photoreceptor phototransduction units or neurotransmitter-receptor complexes or accelerating their biochemical reactions, it may be possible to enhance the performance and speed of signalling[78,80], ultimately influencing behavioural responses. By further investigating these aspects of brain morphodynamics, we can gain deeper insights into the mechanisms underlying efficient information processing, synaptic signal transfer, and behavioural control.
For example, CRISPR-Cas9 gene editing, by adding, removing, or altering specific genes associated with molecular motors or mechanoreceptive ion channels within neurons, provides a means to elucidate these genes’ functions and their roles in neural morphodynamics.
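As a conceptual illustration, the following sketch models signalling capacity with stochastic, refractory sampling units (cf. microvilli[76,80]). Scaling the unit count, one conceivable target of such genetic manipulation, raises the achievable sample rate towards the photon-supply limit; all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def bump_rate(photon_rate, n_units, refractory=10, steps=2000):
    """Stochastic sampling by n_units refractory units: each absorbed
    photon triggers a 'bump' and disables its unit for a fixed
    refractory period. Illustrative toy, not a fitted model."""
    ready_at = np.zeros(n_units)             # time each unit is free again
    bumps = 0
    for t in range(steps):
        photons = rng.poisson(photon_rate)   # photon arrivals this step
        free = np.flatnonzero(ready_at <= t)
        caught = min(photons, free.size)     # photons landing on free units
        ready_at[free[:caught]] = t + refractory
        bumps += caught
    return bumps / steps

for n in (300, 3000, 30000):                 # 'genetically' scaling unit count
    print(f"{n:>6} units -> {bump_rate(200.0, n):.1f} bumps/step")
```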

4.3. Neural Activity Synchronisation

It is crucial to understand how neural morphodynamics synchronises brain activity within specific networks in a goal-directed manner, and how changes in brain morphodynamics during maturation and learning affect brain function and behaviour. Modern machine learning techniques now enable us to establish and quantify the contribution of brain morphodynamics to learning-induced structural and functional changes and to behaviour.
For instance, we can employ a deep learning approach to study how Drosophila’s compound eyes use photoreceptor movements to attain hyperacuity[198]. Could an artificial neural network (ANN), equipped with precisely positioned and photomechanically moving photoreceptors that process and transmit visual information to a lifelike-wired lamina connectome (cf. Figure 2 and Figure 4), reproduce the natural response dynamics of real flies, thereby surpassing their optical pixelation limit? By systematically altering sampling dynamics and synaptic connections in such an ANN-based compound eye model, it is now possible to test whether performance falters, and hyperacuity is lost, without realistic orientation-tuned photoreceptor movements and connectome wiring.
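The principle such a model would exploit can be sketched in a few lines: receptors with Gaussian receptive fields view a grating finer than their static spacing, and emulated microsaccadic shifts convert that unresolvable spatial pattern into temporal modulation each receptor can report. The receptive-field shapes, grating frequency, and shift amplitudes here are hypothetical toy choices:

```python
import numpy as np

def receptor_responses(grating_freq, centres, sigma, shifts):
    """Gaussian receptive fields viewing a sinusoidal grating; 'shifts'
    emulate photomechanical receptor movements over time."""
    x = np.linspace(0.0, 1.0, 4000)
    scene = 0.5 + 0.5 * np.sin(2 * np.pi * grating_freq * x)
    frames = []
    for s in shifts:                                   # one frame per step
        rf = np.exp(-0.5 * ((x[None, :] - (centres + s)[:, None]) / sigma) ** 2)
        rf /= rf.sum(axis=1, keepdims=True)            # normalised weighting
        frames.append(rf @ scene)
    return np.array(frames)                            # (time, receptors)

centres = np.linspace(0.1, 0.9, 9)       # 9 receptors: coarse static grid
fine_grating = 40.0                      # frequency beyond the static limit
static = receptor_responses(fine_grating, centres, 0.01, [0.0])
moving = receptor_responses(fine_grating, centres, 0.01,
                            np.linspace(0.0, 0.025, 20))
# static receptors give constant outputs; moving ones turn the fine
# spatial pattern into temporal modulation each receptor can report
print("temporal variance, static:", static.var(axis=0).mean())
print("temporal variance, moving:", moving.var(axis=0).mean())
```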

4.4. Perception Enhancement

Morphodynamic mechanisms could also enhance perception by sending biomechanical feedback signals to photoreceptors via feedback synapses[25], improving object detection against backgrounds. An object’s movement makes it easier to detect from the background[199]. When interested in a particular object at a specific position, could the brain send attentive[64,66] feedback signals to the set of photoreceptors whose receptive fields point at that position, making them contract electromechanically and causing the object to ‘jump’? This would sharpen the object’s boundaries against its background[200]. Such biomechanical feedback would be the most efficient way to self-generate pop-up attention at the level of the sampling matrix.
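A toy sketch of this hypothesis: contracting only the receptors covering an attended region shifts their sampling points, so a single frame difference peaks exactly at the object’s boundaries. The scene, attended window, and shift size are arbitrary illustrative choices:

```python
import numpy as np

def popout_signal(scene, attended, shift=2):
    """Hypothetical attentive feedback: receptors covering the attended
    region contract, shifting their sampling points by 'shift' pixels.
    The resulting frame difference peaks at the object's boundaries."""
    before = scene.copy()
    after = scene.copy()
    after[attended] = np.roll(scene, shift)[attended]  # local 'jump'
    return np.abs(after - before)

scene = np.zeros(100)
scene[40:60] = 1.0                  # object on a uniform background
attended = slice(35, 65)            # receptive fields aimed at the object
diff = popout_signal(scene, attended)
print("boundary response at pixels:", np.flatnonzero(diff > 0))
```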

5. Conclusion and Future Outlook

The theory of neural morphodynamics offers a new perspective on brain function and behaviour, providing a unified framework that shifts from reductionism to holistic constructionism. It treats observed neural signals, both micro- and macroscopic, as information carriers[13,15,89,201,202] rather than as their assumed abstractions. This approach links neural structures to functions in space-time across multiple scales for a deeper understanding of the brain. By addressing the key questions and conducting further research, we can explore the applications of ultrafast morphodynamics for neurotechnologies (see Text Box 4). These applications may enhance perception, improve artificial systems, and lead to the development of biomimetic devices and robots capable of sophisticated sensory processing and decision-making.
Text Box 4. Transforming AI with Morphodynamic Principles: Next-Gen Autonomy and Vision. The concept of morphodynamic information processing in the brain has significant implications for developing bio-inspired machine intelligence, vision engines, and neuromorphic accelerators, particularly in autonomous systems. Currently, AI and autonomous vehicles rely heavily on static sensor arrays, networks, and mostly pre-programmed algorithms to interpret the environment and make driving decisions. By emulating the brain’s ability to dynamically adapt its structure and function to varying stimuli, engineers can design AI systems that are more responsive and efficient[203,204]. For instance, morphodynamic neural computation can greatly enhance sensory and decision-making systems in autonomous vehicles. By integrating morphodynamic principles, these vehicles could develop more adaptive and resilient perception capabilities, allowing them to better detect and respond to sudden changes in their surroundings, such as unexpected pedestrians or obstacles. They could dynamically adjust their information processing to prioritise the most critical inputs, thereby improving the robustness and versatility of autonomous machines in complex, unpredictable environments.
(A) Morphodynamic principles, particularly those that mimic biological photoreceptors, have the potential to revolutionise machine vision systems. By incorporating adaptive, rapid, and active sampling strategies observed in nature[13,15,33,35,67,205], morphodynamic-driven digital cameras could achieve unprecedented levels of spatial and temporal resolution while maintaining low computational power and requiring fewer light sensors [206,207,208]. Combining these principles with multiscale active sampling from eye, head, and body movements would enable machines to perceive and interpret their surroundings with greater efficiency and coherence, even under highly variable lighting and environmental conditions[205,209,210]. For example, a morphodynamic-inspired vision system could adjust its sensitivity and focus in real time, similar to how biological visual systems adapt to different lighting conditions and movement speeds. This capability would be particularly valuable in autonomous vehicles, drones, and robots, where quick and precise environmental interpretation is crucial for safe and effective operation.
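As a cartoon of such real-time sensitivity adjustment, the sketch below implements a single adaptive pixel whose gain tracks a running estimate of ambient light, producing a brief transient and then a re-centred output after a 100-fold illumination step. The normalisation rule and time constant are illustrative assumptions, not a description of the cited camera designs:

```python
import numpy as np

def adaptive_pixel(intensity, tau=20.0):
    """Toy morphodynamic-style pixel: output is intensity normalised by
    a slowly adapting state, so sensitivity tracks ambient light."""
    state = float(intensity[0])
    out = np.empty(len(intensity))
    for t, i in enumerate(intensity):
        state += (i - state) / tau        # slow adaptation to the mean light
        out[t] = i / (state + 1e-9)       # fast, range-normalised response
    return out

# a sudden 100-fold illumination change: transient, then re-centred near 1
lights = np.concatenate([np.full(100, 1.0), np.full(100, 100.0)])
resp = adaptive_pixel(lights)
print(resp[[0, 99, 100, 199]])            # ~[1, 1, big transient, ~1]
```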
(B) Morphodynamic neural computation also offers a novel approach to developing neuromorphic models by introducing dynamic elements into what has traditionally been a largely static framework. In conventional neuromorphic models, neural connections - particularly synaptic connections - are often assumed to be fixed or to change slowly over time based on pre-defined learning rules[211]. However, this static assumption limits the models’ ability to fully capture the rapid and adaptive nature of biological neural networks. By incorporating the concept of synaptic connection movements - where synapses can shift, adapt, or reconfigure quickly in response to stimuli - morphodynamic neural computation adds a complementary layer of information processing[212]. These fast, dynamic adjustments allow the neural network to actively modify its structure in real-time, enhancing its ability to process complex and variable sensory inputs. This additional layer of morphodynamic processing enables the network to encode and transmit information not only through the strength of synaptic connections but also through their morphodynamic rearrangement (B, right). This dynamic behaviour mirrors biological processes, where synaptic plasticity and structural changes contribute to learning and memory. In neuromorphic models, these morphodynamic elements can lead to improved performance in tasks requiring rapid adaptation, such as real-time decision-making and pattern recognition in unpredictable environments. By incorporating these principles, neuromorphic systems can become more flexible and responsive, offering a richer and more nuanced approach to artificial neural computation.
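A minimal sketch of this idea follows: a toy layer that encodes information both in fixed synaptic weights and in the rapid re-routing of a few movable synapses towards the currently most informative inputs. The re-routing rule is our hypothetical stand-in for synaptic movement, not an algorithm from the cited literature:

```python
import numpy as np

rng = np.random.default_rng(3)

class MorphodynamicLayer:
    """Toy neuromorphic layer: besides fixed weights, each output unit
    rapidly re-routes a few movable synapses towards the currently most
    active inputs (a hypothetical stand-in for synaptic movement)."""
    def __init__(self, n_in, n_out, k=4):
        self.w = rng.normal(0.0, 1.0, (n_out, n_in))     # fixed weights
        self.links = rng.integers(0, n_in, (n_out, k))   # movable synapses

    def step(self, x):
        hot = np.argsort(np.abs(x))[-self.links.shape[1]:]
        self.links[:] = hot                # synapses 'move' to hot inputs
        mask = np.zeros_like(self.w)
        rows = np.arange(self.w.shape[0])[:, None]
        mask[rows, self.links] = 1.0
        return (self.w * mask) @ x         # output via re-routed synapses

layer = MorphodynamicLayer(n_in=32, n_out=8)
x = np.zeros(32)
x[[3, 7, 20, 31]] = rng.normal(0.0, 5.0, 4)   # sparse, strong inputs
print(layer.step(x))                           # driven by re-routed links
```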
(C) Furthermore, integrating fast adaptation, efficient processing, and predictive coding mechanisms observed in neural morphodynamics into AI and robotics could significantly enhance the development of anticipatory and context-aware systems[213,214]. By leveraging dynamic synchronisation and phasic information sampling, these bio-inspired AI models would not merely respond to stimuli but also anticipate future states, facilitating proactive decision-making and more seamless interactions with their environment. For instance, the predictive coding aspects of morphodynamic computation could enable autonomous vehicles to foresee rapid environmental changes and accurately predict the movements of other vehicles and pedestrians, resulting in smoother navigation and enhanced safety - an improvement over current AI models that typically rely on static processing frameworks and struggle with real-time adaptation and prediction. Similarly, in robotics, morphodynamic neural computation could revolutionise motor control and sensory integration [214,215]. By emulating the brain’s ability to synchronise and adjust neural responses to varying stimuli, robots could achieve more fluid and natural movements. For example, a drone equipped with morphodynamic-inspired control systems could dynamically modulate its grip strength and precision in response to the texture, shape, and spatial relationship of objects, similar to how humans instinctively adjust their motor output based on sensory feedback [216]. Furthermore, incorporating predictive coding mechanisms would enable such robots to anticipate the outcomes of their actions, promoting more coordinated and efficient interactions within three-dimensional environments and enhancing navigational capabilities.
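As a bare-bones illustration of the predictive side, the sketch below maintains an internal motion estimate, corrects it with each prediction error, and acts on the anticipated next state rather than the raw observation; this is a constant-velocity toy, not any of the models proposed above:

```python
import numpy as np

def predictive_tracker(observations, lr=0.5):
    """Minimal predictive loop: keep an internal velocity estimate,
    update it from each prediction error, and output the anticipated
    next position rather than the latest observation (toy sketch)."""
    pred, vel = float(observations[0]), 0.0
    anticipated = []
    for obs in observations[1:]:
        err = obs - pred          # prediction error drives the update
        vel += lr * err           # adjust the internal motion model
        pred = obs + vel          # anticipate the next position
        anticipated.append(pred)
    return np.array(anticipated)

# a pedestrian advancing 1 m per step: predictions converge to run
# about one step ahead of the latest observation
path = np.arange(20.0)
print(predictive_tracker(path)[-3:])   # -> approx. [18, 19, 20]
```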

Author Contributions

MJ (Writing – original draft; Writing – review and editing; Visualisation), JT (Writing – review and editing), JK (Writing – review and editing; Visualisation), KRH (Writing – review and editing), BS (Writing – review and editing), JM (Writing – review and editing), AB (Writing – review and editing; Visualisation), HM (Writing – review and editing; Visualisation) and LC (Writing – review and editing).

Acknowledgments

We thank G. de Polavieja, G. Belušič, B. Webb, J. Howard, M. Göpfert, A. Lazar, P. Verkade, R. Mokso, S. Goodwin, M. Mangan, S.-C. Liu, P. Kuusela, R.C. Hardie and J. Bennett for fruitful discussions. We thank G. de Polavieja, G. Belušič, S. Goodwin, R.C. Hardie and anonymous reviewers for comments on the manuscript. This work was supported by BBSRC (BB/F012071/1 and BB/X006247/1), EPSRC (EP/P006094/1 and EP/X019705/1) and Leverhulme (RPG-2024-016) grants to MJ, a Horizon Europe Framework Programme NimbleAI grant to LC, and a BBSRC White-rose studentship (1945521) to BS.

Declaration of interests

The authors declare no competing interests.

References

  1. Eichler, K., Li, F., Litwin-Kumar, A., Park, Y., Andrade, I., Schneider-Mizell, C.M., Saumweber, T., Huser, A., Eschbach, C., Gerber, B., et al. (2017). The complete connectome of a learning and memory centre in an insect brain. Nature 548, 175-182. [CrossRef]
  2. Oh, S.W., Harris, J.A., Ng, L., Winslow, B., Cain, N., Mihalas, S., Wang, Q.X., Lau, C., Kuan, L., Henry, A.M., et al. (2014). A mesoscale connectome of the mouse brain. Nature 508, 207-214. [CrossRef]
  3. Rivera-Alba, M., Vitaladevuni, S.N., Mischenko, Y., Lu, Z.Y., Takemura, S.Y., Scheffer, L., Meinertzhagen, I.A., Chklovskii, D.B., and de Polavieja, G.G. (2011). Wiring economy and volume exclusion determine neuronal placement in the Drosophila brain. Curr Biol 21, 2000-2005. [CrossRef]
  4. Winding, M., Pedigo, B.D., Barnes, C.L., Patsolic, H.G., Park, Y., Kazimiers, T., Fushiki, A., Andrade, I.V., Khandelwal, A., Valdes-Aleman, J., et al. (2023). The connectome of an insect brain. Science 379, 995-1013. ARTN eadd9330. [CrossRef]
  5. Meinertzhagen, I.A., and O’Neil, S.D. (1991). Synaptic organization of columnar elements in the lamina of the wild type in Drosophila melanogaster. J Comp Neurol 305, 232-263. [CrossRef]
  6. Hudspeth, A.J. (2008). Making an effort to listen: mechanical amplification in the ear. Neuron 59, 530-545. [CrossRef]
  7. Pandiyan, V.P., Maloney-Bertelli, A., Kuchenbecker, J.A., Boyle, K.C., Ling, T., Chen, Z.C., Park, B.H., Roorda, A., Palanker, D., and Sabesan, R. (2020). The optoretinogram reveals the primary steps of phototransduction in the living human eye. Sci Adv 6. [CrossRef]
  8. Hardie, R.C., and Franze, K. (2012). Photomechanical responses in Drosophila photoreceptors. Science 338, 260-263. [CrossRef]
  9. Bocchero, U., Falleroni, F., Mortal, S., Li, Y., Cojoc, D., Lamb, T., and Torre, V. (2020). Mechanosensitivity is an essential component of phototransduction in vertebrate rods. PLoS Biol 18, e3000750. [CrossRef]
  10. Korkotian, E., and Segal, M. (2001). Spike-associated fast contraction of dendritic spines in cultured hippocampal neurons. Neuron 30, 751-758. [CrossRef]
  11. Majewska, A., and Sur, M. (2003). Motility of dendritic spines in visual cortex in vivo: changes during the critical period and effects of visual deprivation. Proc Natl Acad Sci U S A 100, 16024-16029. [CrossRef]
  12. Crick, F. (1982). Do dendritic spines twitch? Trends in Neurosciences 5, 44-46.
  13. Juusola, M., Dau, A., Song, Z., Solanki, N., Rien, D., Jaciuch, D., Dongre, S.A., Blanchard, F., de Polavieja, G.G., Hardie, R.C., and Takalo, J. (2017). Microsaccadic sampling of moving image information provides Drosophila hyperacute vision. Elife 6. [CrossRef]
  14. Kemppainen, J., Mansour, N., Takalo, J., and Juusola, M. (2022). High-speed imaging of light-induced photoreceptor microsaccades in compound eyes. Commun Biol 5, 203. [CrossRef]
  15. Kemppainen, J., Scales, B., Razban Haghighi, K., Takalo, J., Mansour, N., McManus, J., Leko, G., Saari, P., Hurcomb, J., Antohi, A., et al. (2022). Binocular mirror-symmetric microsaccadic sampling enables Drosophila hyperacute 3D vision. Proc Natl Acad Sci U S A 119, e2109717119. [CrossRef]
  16. Senthilan, P.R., Piepenbrock, D., Ovezmyradov, G., Nadrowski, B., Bechstedt, S., Pauls, S., Winkler, M., Mobius, W., Howard, J., and Gopfert, M.C. (2012). Drosophila auditory organ genes and genetic hearing defects. Cell 150, 1042-1054. [CrossRef]
  17. Reshetniak, S., and Rizzoli, S.O. (2021). The vesicle cluster as a major organizer of synaptic composition in the short-term and long-term. Curr Opin Cell Biol 71, 63-68. [CrossRef]
  18. Reshetniak, S., Ussling, J.E., Perego, E., Rammner, B., Schikorski, T., Fornasiero, E.F., Truckenbrodt, S., Koster, S., and Rizzoli, S.O. (2020). A comparative analysis of the mobility of 45 proteins in the synaptic bouton. Embo J 39. ARTN e104596. [CrossRef]
  19. Rusakov, D.A., Savtchenko, L.P., Zheng, K.Y., and Henley, J.M. (2011). Shaping the synaptic signal: molecular mobility inside and outside the cleft. Trends Neurosci 34, 359-369. [CrossRef]
  20. Juusola, M., Uusitalo, R.O., and Weckstrom, M. (1995). Transfer of graded potentials at the photoreceptor-interneuron synapse. J Gen Physiol 105, 117-148. [CrossRef]
  21. Fettiplace, R., Crawford, A.C., and Kennedy, H.J. (2006). Signal transformation by mechanotransducer channels of mammalian outer hair cells. Auditory Mechanisms: Processes and Models, 245-253. [CrossRef]
  22. Kennedy, H.J., Crawford, A.C., and Fettiplace, R. (2005). Force generation by mammalian hair bundles supports a role in cochlear amplification. Nature 433, 880-883. [CrossRef]
  23. Juusola, M., French, A.S., Uusitalo, R.O., and Weckstrom, M. (1996). Information processing by graded-potential transmission through tonically active synapses. Trends Neurosci 19, 292-297. [CrossRef]
  24. Watanabe, S., Rost, B.R., Camacho-Perez, M., Davis, M.W., Sohl-Kielczynski, B., Rosenmund, C., and Jorgensen, E.M. (2013). Ultrafast endocytosis at mouse hippocampal synapses. Nature 504, 242-247. [CrossRef]
  25. Zheng, L., de Polavieja, G.G., Wolfram, V., Asyali, M.H., Hardie, R.C., and Juusola, M. (2006). Feedback network controls photoreceptor output at the layer of first visual synapses in Drosophila. J Gen Physiol 127, 495-510. [CrossRef]
  26. Joy, M.S.H., Nall, D.L., Emon, B., Lee, K.Y., Barishman, A., Ahmed, M., Rahman, S., Selvin, P.R., and Saif, M.T.A. (2023). Synapses without tension fail to fire in an in vitro network of hippocampal neurons. Proc Natl Acad Sci U S A 120, e2311995120. [CrossRef]
  27. Shusterman, R., Smear, M.C., Koulakov, A.A., and Rinberg, D. (2011). Precise olfactory responses tile the sniff cycle. Nat Neurosci 14, 1039–1044. [CrossRef]
  28. Smear, M., Shusterman, R., O’Connor, R., Bozza, T., and Rinberg, D. (2011). Perception of sniff phase in mouse olfaction. Nature 479, 397–400. [CrossRef]
  29. Bush, N.E., Solla, S.A., and Hartmann, M.J.Z. (2016). Whisking mechanics and active sensing. Curr Opin Neurobiol 40, 178-186. [CrossRef]
  30. Daghfous, G., Smargiassi, M., Libourel, P.A., Wattiez, R., and Bels, V. (2012). The function of oscillatory tongue-flicks in snakes: insights from kinematics of tongue-flicking in the banded water snake (Nerodia fasciata). Chem Senses 37, 883-896. [CrossRef]
  31. Davies, A., Louis, M., and Webb, B. (2015). A model of Drosophila larva chemotaxis. Plos Comp Biol 11. ARTN e1004606. [CrossRef]
  32. Gomez-Marin, A., Stephens, G.J., and Louis, M. (2011). Active sampling and decision making in Drosophila chemotaxis. Nature Comm 2. ARTN 441. [CrossRef]
  33. van Hateren, J.H., and Schilstra, C. (1999). Blowfly flight and optic flow II. Head movements during flight. J Exp Biol 202, 1491-1500.
  34. Land, M. (2019). Eye movements in man and other animals. Vis Res 162, 1-7. [CrossRef]
  35. Fenk, L.M., Avritzer, S.C., Weisman, J.L., Nair, A., Randt, L.D., Mohren, T.L., Siwanowicz, I., and Maimon, G. (2022). Muscles that move the retina augment compound eye vision in Drosophila. Nature 612, 116-122. [CrossRef]
  36. Franceschini, N., Chagneux, R., and Kirschfeld, K. (1995). Gaze control in flies by co-ordinated action of eye muscles.
  37. Franceschini, N. (1998). Combined optical neuroanatomical, electrophysiological and behavioural studies on signal processing in the fly compound eye. In Biocybernetics of Vision: Integrative Mechanisms and Cognitive Processes, C. Taddei-Ferretti, ed. (World Scientific), pp. 341-361.
  38. Schutz, A.C., Braun, D.I., and Gegenfurtner, K.R. (2011). Eye movements and perception: a selective review. J Vis 11. [CrossRef]
  39. Rucci, M., Iovin, R., Poletti, M., and Santini, F. (2007). Miniature eye movements enhance fine spatial detail. Nature 447, 851-854. [CrossRef]
  40. Casile, A., Victor, J.D., and Rucci, M. (2019). Contrast sensitivity reveals an oculomotor strategy for temporally encoding space. Elife 8. ARTN e40924. [CrossRef]
  41. Intoy, J., Li, Y.H., Bowers, N.R., Victor, J.D., Poletti, M., and Rucci, M. (2024). Consequences of eye movements for spatial selectivity. Current Biology 34. [CrossRef]
  42. Rucci, M., and Victor, J.D. (2015). The unsteady eye: an information-processing stage, not a bug. Trends in Neurosciences 38, 195-206. [CrossRef]
  43. Qi, L.J., Iskols, M., Greenberg, R.S., Xiao, J.Y., Handler, A., Liberles, S.D., and Ginty, D.D. (2024). Krause corpuscles are genital vibrotactile sensors for sexual behaviours. Nature 630. [CrossRef]
  44. Schoneich, S., and Hedwig, B. (2010). Hyperacute directional hearing and phonotactic steering in the cricket (Gryllus bimaculatus deGeer). Plos One 5. ARTN e15141. [CrossRef]
  45. Zheng, L., Nikolaev, A., Wardill, T.J., O’Kane, C.J., de Polavieja, G.G., and Juusola, M. (2009). Network adaptation improves temporal representation of naturalistic stimuli in Drosophila eye: I dynamics. PLoS One 4, e4307. [CrossRef]
  46. Barlow, H.B. (1961). Possible principles underlying the transformations of sensory messages. In Sensory Communication, W. Rosenblith, ed. (M.I.T. Press), pp. 217-234.
  47. Darwin, C. (1859). On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life (John Murray).
  48. Sheehan, M.J., and Tibbetts, E.A. (2011). Specialized face learning Is associated with individual recognition in paper wasps. Science 334, 1272-1275. [CrossRef]
  49. Miller, S.E., Legan, A.W., Henshaw, M.T., Ostevik, K.L., Samuk, K., Uy, F.M.K., and Sheehan, M.J. (2020). Evolutionary dynamics of recent selection on cognitive abilities. Proc Natl Acad Sci U S A 117, 3045-3052. [CrossRef]
  50. Kacsoh, B.Z., Lynch, Z.R., Mortimer, N.T., and Schlenke, T.A. (2013). Fruit flies medicate offspring after seeing parasites. Science 339, 947-950. [CrossRef]
  51. Nöbel, S., Monier, M., Villa, D., Danchin, E., and Isabel, G. (2022). 2-D sex images elicit mate copying in fruit flies. Sci Rep-Uk 22. [CrossRef]
  52. Land, M.F. (1997). Visual acuity in insects. Ann Rev Entomol 42, 147-177. [CrossRef]
  53. Laughlin, S.B. (1989). The role of sensory adaptation in the retina. J Exp Biol 146, 39-62.
  54. Laughlin, S.B., van Steveninck, R.R.D., and Anderson, J.C. (1998). The metabolic cost of neural information. Nat Neurosci 1, 36-41. [CrossRef]
  55. Schilstra, C., and Van Hateren, J.H. (1999). Blowfly flight and optic flow I. Thorax kinematics and flight dynamics. J Exp Biol 202, 1481-1490.
  56. Guiraud, M., Roper, M., and Chittka, L. (2018). High-speed videography reveals how honeybees can turn a spatial concept learning task Into a simple discrimination task by stereotyped flight movements and sequential inspection of pattern elements. Front Psychol 9. [CrossRef]
  57. Nityananda, V., Skorupski, P., and Chittka, L. (2014). Can bees see at a glance? J Exp Biol 217, 1933-1939. [CrossRef]
  58. Vasas, V., and Chittka, L. (2019). Insect-inspired sequential inspection strategy enables an artificial network of four neurons to estimate numerosity. Iscience 11, 85-92. [CrossRef]
  59. Chittka, L., and Skorupski, P. (2017). Active vision: A broader comparative perspective is needed. Constr Found 13, 128-129.
  60. Sorribes, A., Armendariz, B.G., Lopez-Pigozzi, D., Murga, C., and de Polavieja, G.G. (2011). The origin of behavioral bursts in decision-making circuitry. Plos Comput Biol 7, e1002075. [CrossRef]
  61. Hardie, R.C., and Juusola, M. (2015). Phototransduction in Drosophila. Curr Opin Neurobiol 34, 37-45. [CrossRef]
  62. Tepass, U., and Harris, K.P. (2007). Adherens junctions in Drosophila retinal morphogenesis. Trends Cell Biol 17, 26-35. [CrossRef]
  63. Pick, B. (1977). Specific Misalignments of Rhabdomere Visual Axes in Neural Superposition Eye of Dipteran Flies. Biol Cybern 26, 215-224. [CrossRef]
  64. Tang, S., and Juusola, M. (2010). Intrinsic activity in the fly brain gates visual information during behavioral choices. PLoS One 5, e14455. [CrossRef]
  65. Tang, S., Wolf, R., Xu, S., and Heisenberg, M. (2004). Visual pattern recognition in Drosophila is invariant for retinal position. Science 305, 1020-1022. [CrossRef]
  66. van Swinderen, B. (2011). Attention in Drosophila. Int Rev Neurobiol 99, 51-85. [CrossRef]
  67. Geurten, B.R.H., Jahde, P., Corthals, K., and Gopfert, M.C. (2014). Saccadic body turns in walking Drosophila. Front Behav Neurosci 8. ARTN 365. [CrossRef]
  68. Blaj, G., and van Hateren, J.H. (2004). Saccadic head and thorax movements in freely walking blowflies. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 190, 861-868. [CrossRef]
  69. Chiappe, M.E. (2023). Circuits for self-motion estimation and walking control in Drosophila. Curr Opin Neurobiol 81, 102748. [CrossRef]
  70. Chiappe, M.E., Seelig, J.D., Reiser, M.B., and Jayaraman, V. (2010). Walking modulates speed sensitivity in Drosophila motion vision. Curr Biol 20, 1470-1475. [CrossRef]
  71. Fujiwara, T., Brotas, M., and Chiappe, M.E. (2022). Walking strides direct rapid and flexible recruitment of visual circuits for course control in Drosophila. Neuron 110, 2124-2138 e2128. [CrossRef]
  72. Fujiwara, T., Cruz, T.L., Bohnslav, J.P., and Chiappe, M.E. (2017). A faithful internal representation of walking movements in the Drosophila visual system. Nat Neurosci 20, 72-81. [CrossRef]
  73. Maimon, G., Straw, A.D., and Dickinson, M.H. (2010). Active flight increases the gain of visual motion processing in Drosophila. Nat Neurosci 13, 393-399. [CrossRef]
  74. Grabowska, M.J., Jeans, R., Steeves, J., and van Swinderen, B. (2020). Oscillations in the central brain of Drosophila are phase locked to attended visual features. Proc Natl Acad Sci U S A 117, 29925-29936. [CrossRef]
  75. Exner, S. (1891). Die Physiologie der facettierten Augen von Krebsen und Insecten.
  76. Song, Z., Postma, M., Billings, S.A., Coca, D., Hardie, R.C., and Juusola, M. (2012). Stochastic, adaptive sampling of information by microvilli in fly photoreceptors. Curr Biol 22, 1371-1380. [CrossRef]
  77. Stavenga, D.G. (2003). Angular and spectral sensitivity of fly photoreceptors. II. Dependence on facet lens F-number and rhabdomere type in Drosophila. J Comp Physiol A 189, 189-202. [CrossRef]
  78. Juusola, M., and Hardie, R.C. (2001). Light Adaptation in Drosophila Photoreceptors: I. Response Dynamics and Signaling Efficiency at 25°C. J Gen Physiol 117, 3-25. [CrossRef]
  79. Juusola, M., and Song, Z.Y. (2017). How a fly photoreceptor samples light information in time. J Physiol Lond 595, 5427-5437. [CrossRef]
  80. Song, Z., and Juusola, M. (2014). Refractory sampling links efficiency and costs of sensory encoding to stimulus statistics. J Neurosci 34, 7216-7237. [CrossRef]
  81. Song, Z., Zhou, Y., Feng, J., and Juusola, M. (2021). Multiscale ‘whole-cell’ models to study neural information processing - New insights from fly photoreceptor studies. J Neurosci Methods 357, 109156. [CrossRef]
  82. Gonzalez-Bellido, P.T., Wardill, T.J., and Juusola, M. (2011). Compound eyes and retinal information processing in miniature dipteran species match their specific ecological demands. Proc Natl Acad Sci U S A 108, 4224-4229. [CrossRef]
  83. Juusola, M. (1993). Linear and nonlinear contrast coding in light-adapted blowfly photoreceptors. J Comp Physiol A 172, 511-521. [CrossRef]
  84. Ditchburn, R.W., and Ginsborg, B.L. (1952). Vision with a stabilized retinal image. Nature 170, 36-37. [CrossRef]
  85. Riggs, L.A., and Ratliff, F. (1952). The effects of counteracting the normal movements of the eye. J Opt Soc Am 42, 872-873.
  86. Borst, A. (2009). Drosophila’s view on insect vision. Curr Biol 19, 36-47. [CrossRef]
  87. Juusola, M., Dau, A., Zheng, L., and Rien, D.N. (2016). Electrophysiological method for recording intracellular voltage responses of photoreceptors and interneurons to light stimuli. Jove-J Vis Exp. ARTN e54142. [CrossRef]
  88. Juusola, M., and Hardie, R.C. (2001). Light adaptation in Drosophila photoreceptors: II. Rising temperature increases the bandwidth of reliable signaling. J Gen Physiol 117, 27–42.
  89. Juusola, M., and de Polavieja, G.G. (2003). The rate of information transfer of naturalistic stimulation by graded potentials. J Gen Physiol 122, 191-206. [CrossRef]
  90. Juusola, M., Song, Z., and Hardie, R.C. (2022). Phototransduction Biophysics. In Encyclopedia of Computational Neuroscience, D. Jaeger, and R. Jung, eds. (Springer), pp. 2758-2776. [CrossRef]
  91. Hochstrate, P., and Hamdorf, K. (1990). Microvillar components of light adaptation in blowflies. J Gen Physiol 95, 891-910. [CrossRef]
  92. Pumir, A., Graves, J., Ranganathan, R., and Shraiman, B.I. (2008). Systems analysis of the single photon response in invertebrate photoreceptors. Proc Natl Acad Sci U S A 105, 10354-10359. [CrossRef]
  93. Howard, J., Blakeslee, B., and Laughlin, S.B. (1987). The intracellular pupil mechanism and photoreceptor signal: noise ratios in the fly Lucilia cuprina. Proc R Soc Lond B Biol Sci 231, 415-435. [CrossRef]
  94. Mishra, P., Socolich, M., Wall, M.A., Graves, J., Wang, Z., and Ranganathan, R. (2007). Dynamic scaffolding in a G protein-coupled signaling system. Cell 131, 80-92. [CrossRef]
  95. Scott, K., Sun, Y., Beckingham, K., and Zuker, C.S. (1997). Calmodulin regulation of Drosophila light-activated channels and receptor function mediates termination of the light response in vivo. Cell 91, 375-383. [CrossRef]
  96. Liu, C.H., Satoh, A.K., Postma, M., Huang, J., Ready, D.F., and Hardie, R.C. (2008). Ca2+-dependent metarhodopsin inactivation mediated by calmodulin and NINAC myosin III. Neuron 59, 778-789. [CrossRef]
  97. Wong, F., and Knight, B.W. (1980). Adapting-bump model for eccentric cells of Limulus. J Gen Physiol 76, 539-557. [CrossRef]
  98. Wong, F., Knight, B.W., and Dodge, F.A. (1980). Dispersion of latencies in photoreceptors of Limulus and the adapting-bump model. J Gen Physiol 76, 517-537. [CrossRef]
  99. Hardie, R.C., and Postma, M. (2008). Phototransduction in microvillar photoreceptors of Drosophila and other invertebrates. In The senses: a comprehensive reference. Vision, A.I. Basbaum, A. Kaneko, G.M. Shepherd, and G. Westheimer, eds. (Academic), pp. 77-130.
  100. Postma, M., Oberwinkler, J., and Stavenga, D.G. (1999). Does Ca2+ reach millimolar concentrations after single photon absorption in Drosophila photoreceptor microvilli? Biophys J 77, 1811-1823. [CrossRef]
  101. Hardie, R.C. (1996). INDO-1 measurements of absolute resting and light-induced Ca2+ concentration in Drosophila photoreceptors. J Neurosci 16, 2924-2933. [CrossRef]
  102. Friederich, U., Billings, S.A., Hardie, R.C., Juusola, M., and Coca, D. (2016). Fly Photoreceptors Encode Phase Congruency. PLoS One 11, e0157993. [CrossRef]
  103. Faivre, O., and Juusola, M. (2008). Visual coding in locust photoreceptors. Plos One 3. ARTN e2173. [CrossRef]
  104. Sharkey, C.R., Blanco, J., Leibowitz, M.M., Pinto-Benito, D., and Wardill, T.J. (2020). The spectral sensitivity of Drosophila photoreceptors. Sci Rep-Uk 10. ARTN 18242. [CrossRef]
  105. Wardill, T.J., List, O., Li, X., Dongre, S., McCulloch, M., Ting, C.Y., O’Kane, C.J., Tang, S., Lee, C.H., Hardie, R.C., and Juusola, M. (2012). Multiple spectral inputs improve motion discrimination in the Drosophila visual system. Science 336, 925-931. [CrossRef]
  106. Franceschini, N., and Kirschfeld, K. (1971). Phenomena of pseudopupil in compound eye of Drosophila. Kybernetik 9, 159-182. [CrossRef]
  107. Woodgate, J.L., Makinson, J.C., Rossi, N., Lim, K.S., Reynolds, A.M., Rawlings, C.J., and Chittka, L. (2021). Harmonic radar tracking reveals that honeybee drones navigate between multiple aerial leks. Iscience 24. ARTN 102499. [CrossRef]
  108. Schulte, P., Zeil, J., and Sturzl, W. (2019). An insect-inspired model for acquiring views for homing. Biol Cybern 113, 439-451. [CrossRef]
  109. Boeddeker, N., Dittmar, L., Sturzl, W., and Egelhaaf, M. (2010). The fine structure of honeybee head and body yaw movements in a homing task. Proc R Soc Lond B Biol Sci 277, 1899-1906. [CrossRef]
  110. Hoekstra, H.J.W.M. (1997). On beam propagation methods for modelling in integrated optics. Opt. Quantum Electron 29, 157-171.
  111. Kirschfeld, K. (1973). [Neural superposition eye]. Fortschr Zool 21, 229-257.
  112. Song, B.M., and Lee, C.H. (2018). Toward a mechanistic understanding of color vision in insects. Front Neur Circ 12. ARTN 16. [CrossRef]
  113. Johnston, R.J., and Desplan, C. (2010). Stochastic mechanisms of cell fate specification that yield random or robust outcomes. Ann Rev Cell Dev Biol 26, 689-719. [CrossRef]
  114. Galton, F. (1907). Vox populi. Nature 75, 450-451.
  115. Vasiliauskas, D., Mazzoni, E.O., Sprecher, S.G., Brodetskiy, K., Johnston, R.J., Lidder, P., Vogt, N., Celik, A., and Desplan, C. (2011). Feedback from rhodopsin controls rhodopsin exclusion in Drosophila photoreceptors. Nature 479, 108-112. [CrossRef]
  116. Dippé, M.A.Z., and Wold, E.H. (1985). Antialiasing through stochastic sampling. ACM SIGGRAPH Computer Graphics 19, 69-78. [CrossRef]
  117. Field, G.D., Gauthier, J.L., Sher, A., Greschner, M., Machado, T.A., Jepson, L.H., Shlens, J., Gunning, D.E., Mathieson, K., Dabrowski, W., et al. (2010). Functional connectivity in the retina at the resolution of photoreceptors. Nature 467, 673-677. [CrossRef]
  118. Götz, K.G. (1968). Flight control in Drosophila by visual perception of motion. Kybernetik 6, 199-208.
  119. Hengstenberg, R. (1971). Eye muscle system of housefly Musca Domestica .1. Analysis of clock spikes and their source. Kybernetik 9, 56-77. [CrossRef]
  120. Colonnier, F., Manecy, A., Juston, R., Mallot, H., Leitel, R., Floreano, D., and Viollet, S. (2015). A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry. Bioinspir Biomim 10. Artn 026002. [CrossRef]
  121. Viollet, S., Godiot, S., Leitel, R., Buss, W., Breugnon, P., Menouni, M., Juston, R., Expert, F., Colonnier, F., L’Eplattenier, G., et al. (2014). Hardware architecture and cutting-edge assembly process of a tiny curved compound eye. Sensors-Basel 14, 21702-21721. [CrossRef]
  122. Talley, J., Pusdekar, J., Feltenberger, A., Ketner, N., Evers, J., Liu, M., Gosh, A., Palmer, S.E., Wardill, T.J., and Gonzalez-Bellido, P.T. (2023). Predictive saccades and decision making in the beetle-predating saffron robber fly. Curr Biol 33, 1-13. [CrossRef]
  123. Glasser, A. (2010). Accommodation. In Encyclopedia of the Eye, D.A. Dartt, ed. (Academic Press), pp. 8-17. [CrossRef]
  124. Osorio, D. (2007). Spam and the evolution of the fly’s eye. Bioessays 29, 111-115. [CrossRef]
  125. Zelhof, A.C., Hardy, R.W., Becker, A., and Zuker, C.S. (2006). Transforming the architecture of compound eyes. Nature 443, 696-699. [CrossRef]
  126. Kolodziejczyk, A., Sun, X., Meinertzhagen, I.A., and Nassel, D.R. (2008). Glutamate, GABA and acetylcholine signaling components in the lamina of the Drosophila visual system. PLoS One 3, e2110. [CrossRef]
  127. de Polavieja, G.G. (2006). Neuronal algorithms that detect the temporal order of events. Neural Comp 18, 2102-2121.
  128. Yang, H.H., and Clandinin, T.R. (2018). Elementary Motion Detection in Drosophila: Algorithms and Mechanisms. Annu Rev Vis Sci 4, 143-163. [CrossRef]
  129. Hassenstein, B., and Reichardt, W. (1956). Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Z Naturforsch 11b, 513-524.
  130. Barlow, H.B., and Levick, W.R. (1965). The mechanism of directionally selective units in rabbit’s retina. J Physiol 178, 477-504. [CrossRef]
  131. Leung, A., Cohen, D., van Swinderen, B., and Tsuchiya, N. (2021). Integrated information structure collapses with anesthetic loss of conscious arousal in Drosophila melanogaster. Plos Comput Biol 17, e1008722. [CrossRef]
  132. Maye, A., Hsieh, C.H., Sugihara, G., and Brembs, B. (2007). Order in spontaneous behavior. PLoS One 2, e443. [CrossRef]
  133. van Hateren, J.H. (2017). A unifying theory of biological function. Biol Theory 12, 112-126. [CrossRef]
  134. van Hateren, J.H. (2019). A theory of consciousness: computation, algorithm, and neurobiological realization. Biol Cybern 113, 357-372. [CrossRef]
  135. Okray, Z., Jacob, P.F., Stern, C., Desmond, K., Otto, N., Talbot, C.B., Vargas-Gutierrez, P., and Waddell, S. (2023). Multisensory learning binds neurons into a cross-modal memory engram. Nature 617, 777-784. [CrossRef]
  136. de Polavieja, G.G. (2002). Errors drive the evolution of biological signalling to costly codes. J Theor Biol 214, 657-664.
  137. de Polavieja, G.G. (2004). Reliable biological communication with realistic constraints. Phys Rev E 70, 061910.
  138. Li, X., Abou Tayoun, A., Song, Z., Dau, A., Rien, D., Jaciuch, D., Dongre, S., Blanchard, F., Nikolaev, A., Zheng, L., et al. (2019). Ca2+-activated K+ channels reduce network excitability, improving adaptability and energetics for transmitting and perceiving sensory information. J Neurosci 39, 7132-7154. [CrossRef]
  139. Shannon, C.E. (1948). A mathematical theory of communication. Bell Syst Technic J 27, 379–423, 623–656.
  140. Brenner, N., Bialek, W., and de Ruyter van Steveninck, R. (2000). Adaptive rescaling maximizes information transmission. Neuron 26, 695-702. [CrossRef]
  141. Maravall, M., Petersen, R.S., Fairhall, A.L., Arabzadeh, E., and Diamond, M.E. (2007). Shifts in coding properties and maintenance of information transmission during adaptation in barrel cortex. PLoS Biol 5, e19. [CrossRef]
  142. Arganda, S., Guantes, R., and de Polavieja, G.G. (2007). Sodium pumps adapt spike bursting to stimulus statistics. Nat Neurosci 10, 1467-1473. [CrossRef]
  143. Laughlin, S.B. (1981). A simple coding procedure enhances a neuron’s information capacity. Zeitschrift für Naturforschung C 36, 910-912.
  144. van Hateren, J.H. (1992). A theory of maximizing sensory information. Biol Cybern 68, 23-29. [CrossRef]
  145. Hopfield, J.J., and Brody, C.D. (2001). What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proc Natl Acad Sci U S A 98, 1282-1287. [CrossRef]
  146. Srinivasan, M.V., Laughlin, S.B., and Dubs, A. (1982). Predictive coding: a fresh view of inhibition in the retina. Proc R Soc Lond B Biol Sci 216, 427-459. [CrossRef]
  147. Mann, K., Deny, S., Ganguli, S., and Clandinin, T.R. (2021). Coupling of activity, metabolism and behaviour across the Drosophila brain. Nature 593, 244-248. [CrossRef]
  148. Poulet, J.F., and Hedwig, B. (2006). The cellular basis of a corollary discharge. Science 311, 518-522. [CrossRef]
  149. Poulet, J.F., and Hedwig, B. (2007). New insights into corollary discharges mediated by identified neural pathways. Trends Neurosci 30, 14-21. [CrossRef]
  150. Peyrache, A., Dehghani, N., Eskandar, E.N., Madsen, J.R., Anderson, W.S., Donoghue, J.A., Hochberg, L.R., Halgren, E., Cash, S.S., and Destexhe, A. (2012). Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep. Proc Natl Acad Sci U S A 109, 1731-1736. [CrossRef]
  151. Gallego-Carracedo, C., Perich, M.G., Chowdhury, R.H., Miller, L.E., and Gallego, J.A. (2022). Local field potentials reflect cortical population dynamics in a region-specific and frequency-dependent manner. Elife 11. [CrossRef]
  152. Yap, M.H.W., Grabowska, M.J., Rohrscheib, C., Jeans, R., Troup, M., Paulk, A.C., van Alphen, B., Shaw, P.J., and van Swinderen, B. (2017). Oscillatory brain activity in spontaneous and induced sleep stages in flies. Nat Commun 8, 1815. [CrossRef]
  153. Miller, E.K., Brincat, S.L., and Roy, J.E. (2024). Cognition is an emergent property. Current Opinion in Behavioral Sciences, 101388. [CrossRef]
  154. Pinotsis, D.A., Fridman, G., and Miller, E.K. (2023). Cytoelectric coupling: Electric fields sculpt neural activity and “tune” the brain’s infrastructure. Prog Neurobiol 226, 102465. [CrossRef]
  155. Solvi, C., Al-Khudhairy, S.G., and Chittka, L. (2020). Bumble bees display cross-modal object recognition between visual and tactile senses. Science 367, 910-912. [CrossRef]
  156. Badre, D., Bhandari, A., Keglovits, H., and Kikumoto, A. (2021). The dimensionality of neural representations for control. Curr Opin Behav Sci 38, 20-28. [CrossRef]
  157. Yellott, J.I. (1982). Spectral-analysis of spatial sampling by photoreceptors - topological disorder prevents aliasing. Vis Res 22, 1205-1210. [CrossRef]
  158. Wikler, K.C., and Rakic, P. (1990). Distribution of photoreceptor subtypes in the retina of diurnal and nocturnal primates. J Neurosci 10, 3390-3401. [CrossRef]
  159. Kim, Y.J., Peterson, B.B., Crook, J.D., Joo, H.R., Wu, J., Puller, C., Robinson, F.R., Gamlin, P.D., Yau, K.W., Viana, F., et al. (2022). Origins of direction selectivity in the primate retina. Nat Commun 13, 2862. [CrossRef]
  160. Yu, W.Q., Swanstrom, R., Sigulinsky, C.L., Ahlquist, R.M., Knecht, S., Jones, B.W., Berson, D.M., and Wong, R.O. (2023). Distinctive synaptic structural motifs link excitatory retinal interneurons to diverse postsynaptic partner types. Cell Rep 42, 112006. [CrossRef]
  161. Niven, J.E., and Chittka, L. (2010). Reuse of identified neurons in multiple neural circuits. Behavioral and Brain Sciences, 4.
  162. Pfeiffer, K., and Homberg, U. (2014). Organization and Functional Roles of the Central Complex in the Insect Brain. Annual Review of Entomology, Vol 59, 2014 59, 165-U787. [CrossRef]
  163. Li, F., Lindsey, J., Marin, E.C., Otto, N., Dreher, M., Dempsey, G., Stark, I., Bates, A.S., Pleijzier, M.W., Schlegel, P., et al. (2020). The connectome of the adult mushroom body provides insights into function. Elife 9. ARTN e62576. [CrossRef]
  164. Kohonen, T. (2006). Self-organizing neural projections. Neural Netw 19, 723-733. [CrossRef]
  165. Li, M., Liu, F., Juusola, M., and Tang, S. (2014). Perceptual color map in macaque visual area V4. J Neurosci 34, 202-217. [CrossRef]
  166. Dan, C., Hulse, B.K., Kappagantula, R., Jayaraman, V., and Hermundstad, A.M. (2024). A neural circuit architecture for rapid learning in goal-directed navigation. Neuron 112, 2581-2599 e2523. [CrossRef]
  167. Hulse, B.K., Haberkern, H., Franconville, R., Turner-Evans, D., Takemura, S.Y., Wolff, T., Noorman, M., Dreher, M., Dan, C., Parekh, R., et al. (2021). A connectome of the Drosophila central complex reveals network motifs suitable for flexible navigation and context-dependent action selection. Elife 10. [CrossRef]
  168. Scheffer, L.K., Xu, C.S., Januszewski, M., Lu, Z., Takemura, S.Y., Hayworth, K.J., Huang, G.B., Shinomiya, K., Maitlin-Shepard, J., Berg, S., et al. (2020). A connectome and analysis of the adult Drosophila central brain. Elife 9. [CrossRef]
  169. Kim, S.S., Hermundstad, A.M., Romani, S., Abbott, L.F., and Jayaraman, V. (2019). Generation of stable heading representations in diverse visual scenes. Nature 576, 126-131. [CrossRef]
  170. Kim, S.S., Rouault, H., Druckmann, S., and Jayaraman, V. (2017). Ring attractor dynamics in the Drosophila central brain. Science 356, 849-853. [CrossRef]
  171. Seelig, J.D., and Jayaraman, V. (2015). Neural dynamics for landmark orientation and angular path integration. Nature 521, 186-191. [CrossRef]
  172. Seelig, J.D., and Jayaraman, V. (2013). Feature detection and orientation tuning in the Drosophila central complex. Nature 503, 262-266. [CrossRef]
  173. Mussells Pires, P., Zhang, L., Parache, V., Abbott, L.F., and Maimon, G. (2024). Converting an allocentric goal into an egocentric steering signal. Nature 626, 808-818. [CrossRef]
  174. Lu, J., Behbahani, A.H., Hamburg, L., Westeinde, E.A., Dawson, P.M., Lyu, C., Maimon, G., Dickinson, M.H., Druckmann, S., and Wilson, R.I. (2022). Transforming representations of movement from body- to world-centric space. Nature 601, 98-104. [CrossRef]
  175. Lyu, C., Abbott, L.F., and Maimon, G. (2022). Building an allocentric travelling direction signal via vector computation. Nature 601, 92-97. [CrossRef]
  176. Green, J., Adachi, A., Shah, K.K., Hirokawa, J.D., Magani, P.S., and Maimon, G. (2017). A neural circuit architecture for angular integration in Drosophila. Nature 546, 101-106. [CrossRef]
  177. Honkanen, A., Hensgen, R., Kannan, K., Adden, A., Warrant, E., Wcislo, W., and Heinze, S. (2023). Parallel motion vision pathways in the brain of a tropical bee. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 209, 563-591. [CrossRef]
  178. Heinze, S. (2021). Mapping the fly’s ‘brain in the brain’. Elife 10. [CrossRef]
  179. Pisokas, I., Heinze, S., and Webb, B. (2020). The head direction circuit of two insect species. Elife 9. [CrossRef]
  180. Goulard, R., Buehlmann, C., Niven, J.E., Graham, P., and Webb, B. (2021). A unified mechanism for innate and learned visual landmark guidance in the insect central complex. Plos Comput Biol 17, e1009383. [CrossRef]
  181. Cope, A.J., Sabo, C., Vasilaki, E., Barron, A.B., and Marshall, J.A.R. (2017). A computational model of the integration of landmarks and motion in the insect central complex. Plos One 12. ARTN e0172325. [CrossRef]
  182. Heinze, S., and Homberg, U. (2009). Linking the input to the output: new sets of neurons complement the polarization vision network in the locust central complex. J Neurosci 29, 4911-4921. [CrossRef]
  183. Wu, M., Nern, A., Williamson, W.R., Morimoto, M.M., Reiser, M.B., Card, G.M., and Rubin, G.M. (2016). Visual projection neurons in the Drosophila lobula link feature detection to distinct behavioral programs. Elife 5. [CrossRef]
  184. Kanerva, P. (1990). Sparse distributed memory (The MIT Press).
  185. Nishimoto, S., Vu, A.T., Naselaris, T., Benjamini, Y., Yu, B., and Gallant, J.L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Curr Biol 21, 1641-1646. [CrossRef]
  186. Willmore, B.D., Prenger, R.J., and Gallant, J.L. (2010). Neural representation of natural images in visual area V2. J Neurosci 30, 2102-2114. [CrossRef]
  187. Naselaris, T., Prenger, R.J., Kay, K.N., Oliver, M., and Gallant, J.L. (2009). Bayesian reconstruction of natural images from human brain activity. Neuron 63, 902-915. [CrossRef]
  188. Alem, S., Perry, C.J., Zhu, X.F., Loukola, O.J., Ingraham, T., Sovik, E., and Chittka, L. (2016). Associative Mechanisms Allow for Social Learning and Cultural Transmission of String Pulling in an Insect. Plos Biology 14. ARTN e1002564. [CrossRef]
  189. Bridges, A.D., Royka, A., Wilson, T., Lockwood, C., Richter, J., Juusola, M., and Chittka, L. (2024). Bumblebees socially learn behaviour too complex to innovate alone. Nature 627, 572-578. [CrossRef]
  190. Bridges, A.D., MaBouDi, H., Procenko, O., Lockwood, C., Mohammed, Y., Kowalewska, A., González, J.E.R., Woodgate, J.L., and Chittka, L. (2023). Bumblebees acquire alternative puzzle-box solutions via social learning. Plos Biology 21. ARTN e3002019. [CrossRef]
  191. Maák, I., Lorinczi, G., Le Quinquis, P., Módra, G., Bovet, D., Call, J., and d’Ettorre, P. (2017). Tool selection during foraging in two species of funnel ants. Anim Behav 123, 207-216. [CrossRef]
  192. Lorinczi, G., Módra, G., Juhász, O., and Maák, I. (2018). Which tools to use? Choice optimization in the tool-using ant, Aphaenogaster subterranea. Behav Ecol 29, 1444-1452. [CrossRef]
  193. Woodgate, J.L., Makinson, J.C., Lim, K.S., Reynolds, A.M., and Chittka, L. (2017). Continuous Radar Tracking Illustrates the Development of Multi-destination Routes of Bumblebees. Sci Rep-Uk 7. ARTN 17323. [CrossRef]
  194. Loukola, O.J., Solvi, C., Coscos, L., and Chittka, L. (2017). Bumblebees show cognitive flexibility by improving on an observed complex behavior. Science 355, 833-836. [CrossRef]
  195. Hadjitofi, A., and Webb, B. (2024). Dynamic antennal positioning allows honeybee followers to decode the dance. Curr Biol 34, 1772-1779 e1774. [CrossRef]
  196. Suver, M.P., Medina, A.M., and Nagel, K.I. (2023). Active antennal movements in Drosophila can tune wind encoding. Curr Biol 33, 780-789 e784. [CrossRef]
  197. Perez-Escudero, A., Rivera-Alba, M., and de Polavieja, G.G. (2009). Structure of deviations from optimality in biological systems. Proc Natl Acad Sci U S A 106, 20544-20549. [CrossRef]
  198. Razban Haghighi, K. (2023). The Drosophila visual system: a super-efficient encoder. PhD (University of Sheffield).
  199. Kapustjansky, A., Chittka, L., and Spaethe, J. (2010). Bees use three-dimensional information to improve target detection. Naturwissenschaften 97, 229-233. [CrossRef]
  200. Chittka, L., and Spaethe, J. (2007). Visual search and the importance of time in complex decision making by bees. Arthropod-Plant Inte 1, 37-44. [CrossRef]
  201. de Polavieja, G.G., Harsch, A., Kleppe, I., Robinson, H.P., and Juusola, M. (2005). Stimulus history reliably shapes action potential waveforms of cortical neurons. J Neurosci 25, 5657-5665. [CrossRef]
  202. Juusola, M., Robinson, H.P., and de Polavieja, G.G. (2007). Coding with spike shapes and graded potentials in cortical networks. Bioessays 29, 178-187. [CrossRef]
  203. de Croon, G.C.H.E., Dupeyroux, J.J.G., Fuller, S.B., and Marshall, J.A.R. (2022). Insect-inspired AI for autonomous robots. Sci Robot 7. ARTN eabl6334. [CrossRef]
  204. Webb, B. (2020). Robots with insect brains. Science 368, 244-245. [CrossRef]
  205. Land, M.F. (2009). Vision, eye movements, and natural behavior. Visual Neurosci 26, 51-62. [CrossRef]
  206. Medathati, N.V.K., Neumann, H., Masson, G.S., and Kornprobst, P. (2016). Bio-inspired computer vision: Towards a synergistic approach of artificial and biological vision. Comput Vis Image Und 150, 1-30. [CrossRef]
  207. Serres, J.R., and Viollet, S. (2018). Insect-inspired vision for autonomous vehicles. Curr Opin Insect Sci 30, 46-51. [CrossRef]
  208. Song, Y.M., Xie, Y.Z., Malyarchuk, V., Xiao, J.L., Jung, I., Choi, K.J., Liu, Z.J., Park, H., Lu, C.F., Kim, R.H., et al. (2013). Digital cameras with designs inspired by the arthropod eye. Nature 497, 95-99. [CrossRef]
  209. MaBouDi, H., Roper, M., Guiraud, M., Marshall, J.A., and Chittka, L. (2021). Automated video tracking and flight analysis show how bumblebees solve a pattern discrimination task using active vision. bioRxiv. [CrossRef]
  210. MaBouDi, H., Roper, M., Guiraud, M.-G., Chittka, L., and Marshall, J.A. (2023). A neuromorphic model of active vision shows spatio-temporal encoding in lobula neurons can aid pattern recognition in bees. bioRxiv. [CrossRef]
  211. Schuman, C.D., Kulkarni, S.R., Parsa, M., Mitchell, J.P., Date, P., and Kay, B. (2022). Opportunities for neuromorphic computing algorithms and applications (vol 2, pg 10, 2022). Nat Comput Sci 2, 205-205. [CrossRef]
  212. Wang, H., Sun, B., Ge, S.S., Su, J., and Jin, M.L. (2024). On non-von Neumann flexible neuromorphic vision sensors. Npj Flex Electron 8. ARTN 28. [CrossRef]
  213. Millidge, B., Seth, A., and Buckley, C.L. (2022). Predictive Coding: A Theoretical and Experimental Review. arXiv 2107.12979. http://arxiv.org/abs/2107.12979.
  214. Rao, R.P.N. (2024). A sensory-motor theory of the neocortex. Nat Neurosci 27, 1221-1235. [CrossRef]
  215. Pfeifer, R., Lungarella, M., and Iida, F. (2007). Self-organization, embodiment, and biologically inspired robotics. Science 318, 1088-1093. [CrossRef]
  216. Greenwald, A.G. (1970). Sensory Feedback Mechanisms in Performance Control - with Special Reference to Ideo-Motor Mechanism. Psychol Rev 77, 73-99. [CrossRef]
Figure 1. Sensory cells and central neurons, along with their morphodynamic components, dynamically respond to changes in information flow through phasic mechanical movements (A-D), exerting influence on sensory perception and behaviour (E-M). (A) Both vertebrate[7,9] (human cones; rods in clawed frogs) and invertebrate (open rhabdoms in fruit flies; fused apposition-type rhabdoms in honeybees) photoreceptors[8,9,13,15] exhibit ultrafast photomechanical movements in response to changes in light intensity. (B) Outer hair cells[21,22] contract and elongate, amplifying variations in soundwave signals[6]. (C) Dendritic spines undergo twitching[10,11] motions while sampling synaptic information. (D) Synapses undergo ultrafast structural changes and tissue movements, actively participating in and optimising information transmission[11,17,18,19,20,23,24,25]. (E) Synaptic transmission depends on tissue contractility[26]. Treatment with blebbistatin inhibits this contractility, effectively silencing synapses[26]. (F) Rats[27,28] and humans employ quick sniffs to enhance odour detection. (G) Rodents’ fast whisking motion enhances the perception of environmental structure[29]. (H) Snakes flick their tongues to better localise the source of odours[30]. (I) Larvae perform rapid head casting to determine the direction towards higher food concentration[31,32]. (J) Flies[33] utilise fast saccadic eye and body movements to observe the world[34]. (K) During goal-oriented behaviours, flies use intraocular muscle-induced whole-retina movements (mini-arrows) that are larger than photoreceptor microsaccades (A), as observed in binocular vergence (left retina: red; right retina: blue), to enhance perception[35,36,37]. (L) Humans[38] perceive the world through saccadic eye movements[34], where microsaccadic jitter (eyeball tremor) enhances the temporal coding of visual space[39,40] and improves the discrimination of high-frequency visual patterns[41,42]. (M) Rhythmic sexual movements, such as those of mice, activate frequency-specific Krause corpuscles (tuned to dynamic, light touch and mechanical vibration) in the genitalia[43], enhancing tactile sensing and pleasure. Note that the human face in (A) and (L) is AI-generated and not real. Data are modified from the cited papers.