
Continuous Detection of Subtle Differences in Chromatic Visually Evoked Potentials Applied to Closed Eyes in Healthy Volunteers


Submitted: 26 April 2024
Posted: 26 April 2024

Abstract
Background/Objectives: We defined the value of a machine learning algorithm to distinguish between the EEG response to no light or any light stimulation, and between red or green light stimulation, in awake volunteers with closed eyelids. This new method of EEG analysis offers insight into visual signal processing and may facilitate research into anaesthetic depth. Methods: X-gradient boosting models were used to classify the cortical response to VEP stimulation (no light vs. any light stimulation; red vs. green light stimulation). For each of the two classifications, three scenarios were tested: training and prediction in all participants (all), training and prediction in one participant (individual), and training across all but one participant with prediction in the participant left out (one out). Results: Ninety-four Caucasian adults were included. The machine learning algorithm had a very high predictive value and accuracy in differentiating between no light and any light stimulation (AUCROCall: 0.96; accuracyall: 0.94; AUCROCindividual: 0.96±0.05; accuracyindividual: 0.94±0.05; AUCROConeout: 0.98±0.04; accuracyoneout: 0.96±0.04). The machine learning algorithm was highly predictive and accurate in distinguishing between green and red colour stimulation (AUCROCall: 0.97; accuracyall: 0.91; AUCROCindividual: 0.98±0.04; accuracyindividual: 0.96±0.04; AUCROConeout: 0.96±0.05; accuracyoneout: 0.93±0.06). The predictive value and accuracy of both classification tasks were comparable between males and females. Conclusions: Machine learning algorithms could almost continuously and reliably differentiate between the cortical EEG responses to no light or any light stimulation, and between green or red colour stimulation, using VEPs in awake female and male volunteers with eyes closed. Our findings may open new possibilities for using VEPs in the intraoperative setting.
Subject: Biology and Life Sciences - Life Sciences

1. Introduction

Visual evoked potentials (VEPs) record the EEG signals generated in the occipital cortex in response to light stimulation of the retina. Thus, VEPs can assess the integrity of the retina, optic nerve, visual pathways, and occipital cortex. Although mostly used in conscious patients, VEPs have also been recorded under general anaesthesia, with the shortcoming that lid closure lowers the quality of the measurements. For example, flash VEPs are used in the intraoperative setting as a monitoring tool during neurosurgical procedures and prolonged surgeries in the prone position, or as an indirect marker of intracranial hypertension [1].
However, the use of current VEP technology in the intraoperative setting is challenged by relevant shortcomings [2]. When analysing VEPs, there is no universally agreed standard to define significant changes in the EEG signal. Typically, changes in latency of selected wave peaks >1 millisecond or amplitude reductions >50% are used to define deviations from normality or from previous measurements [1]. Detecting such subtle changes in a signal as noisy as the EEG is difficult and requires averaging hundreds of sweeps to identify changes in VEP morphology between different clinical situations (e.g., during neurosurgery adjacent to the optic pathways). Since each measuring process takes several minutes and interpretation usually requires a neurophysiologist, continuous monitoring and timely detection of such changes is nearly impossible.
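To make the conventional criterion concrete, the sketch below flags a deviation when a peak latency shifts by more than 1 millisecond or an amplitude drops by more than 50% relative to a baseline recording; the function and parameter names are illustrative, not part of any standard.

```python
# Toy check of the conventional VEP deviation criteria described above.
# All names are hypothetical; thresholds follow the text (1 ms, 50%).
def vep_deviates(base_latency_ms: float, latency_ms: float,
                 base_amplitude_uv: float, amplitude_uv: float) -> bool:
    """Return True if a sweep deviates from baseline by the usual criteria."""
    latency_shifted = abs(latency_ms - base_latency_ms) > 1.0   # >1 ms shift
    amplitude_halved = amplitude_uv < 0.5 * base_amplitude_uv   # >50% reduction
    return latency_shifted or amplitude_halved
```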
In a proof-of-concept study, our working group found that an artificial neural network with one hidden layer could distinguish cortical colour perception by analysing a low number of chromatic VEPs in healthy volunteers visually stimulated by flickering red/black or green/black checkerboards [3]. For this classification task, only 9 stimulations were needed to find subtle differences between red and green VEPs. The classification accuracy was highest if the model was trained on the individual subject.
The aim of the present study was to extend this approach by verifying its feasibility in healthy volunteers with closed eyelids.

2. Materials and Methods

This study was designed as an experimental healthy-volunteer study conducted at the Johannes Kepler University in Linz, Austria. Experiments were performed between November 2021 and May 2022. The study protocol was reviewed and approved by the ethics committee of the Johannes Kepler University (protocol code 1201/2020, date of approval: 11/04/2020). Written informed consent was obtained from all participants before study enrolment.

Study Participants

Subjects aged between 18 and 75 years were eligible for inclusion. Severe systemic disease (defined as American Society of Anesthesiologists physical status III or higher), a history of seizures, ophthalmological or neurological disease, red-green colour vision deficiency, and claustrophobia were exclusion criteria. Before enrolment, all subjects underwent testing for red-green colour vision deficiency using standard Ishihara plates [4,5]. A cut-off of ≥10 correct results out of 12 test plates was used to exclude red-green colour vision deficiency.

Experimental Setup

All experiments were conducted in a quiet, darkened room. During the experiments, only the study participant and one researcher were present in the room. Study participants were comfortably placed in the semi-recumbent position on an examination bed and were awake but had their eyelids closed during the entire period of the experiment.
Special LED goggles were developed for this experiment. They completely enclosed the orbits so that no ambient light could reach the eyes and interfere with VEP stimulation. During VEP stimulation, the LEDs emitted green or red light through the closed eyelids. The 3D-printed goggle frame was made of an FDA-approved biopolymer and rimmed with skin-compatible, two-component silicone to ensure complete darkening and the elimination of ambient light. Eight light-emitting diodes with a maximum illuminance of 20,000 lux and a variable stimulation frequency of 1-15 Hz were installed inside the goggles over each eye. A Raspberry Pi single-board computer was programmed to control the goggles' stimulation patterns and colours. Gold cup electrodes were mounted over the mid-occipital (Oz) and frontal (Fpz) scalp after roughening the skin with a prepping gel to achieve a skin impedance <5 kΩ. Further electrodes were placed behind the ear lobes on the left (A1) and right (A2) sides, with the latter serving as the ground electrode. Cortical responses to visual stimulation were recorded over the occipital lobe (Oz referenced against Fpz) using a high-performance, high-accuracy biosignal amplifier (g.USBamp Research; g.tec medical engineering GmbH, Schiedlberg, Austria).

Study Experiment

The experiment consisted of 10 cycles. Each cycle lasted 3 minutes and 45 seconds, during which red or green flashes were randomly applied at a frequency of 2 Hz (up to 1,500 flashes per colour in total) to the closed eyes of the study participants. After every cycle, the well-being of the study participants was checked, and subjects were allowed to move if necessary.
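As an illustration only, the following sketch shows how a Raspberry Pi could drive such a stimulation loop using the gpiozero library. The GPIO pin numbers, the 50% duty cycle, and the sync-marker line are our assumptions; the authors' actual control software is not published.

```python
# Minimal sketch of the goggle stimulation loop, assuming gpiozero on a
# Raspberry Pi. Pin assignments and timing details are illustrative only.
import random
import time

from gpiozero import LED

RED, GREEN = LED(17), LED(27)   # hypothetical GPIO pins driving the LED banks
SYNC = LED(22)                  # marker line recorded alongside the EEG

FLASH_HZ = 2                    # stimulation frequency used in the study
CYCLE_SECONDS = 225             # 3 min 45 s per cycle
N_CYCLES = 10

def flash(led, period):
    """Emit one flash and a sync pulse marking its onset."""
    SYNC.on()
    led.on()
    time.sleep(period / 2)      # assumed 50% duty cycle
    led.off()
    SYNC.off()
    time.sleep(period / 2)

period = 1.0 / FLASH_HZ
for cycle in range(N_CYCLES):
    for _ in range(int(CYCLE_SECONDS * FLASH_HZ)):
        flash(random.choice([RED, GREEN]), period)   # random red/green order
    input(f"Cycle {cycle + 1}/{N_CYCLES} done - press Enter to continue")
```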

Study Endpoints

As the primary study endpoints, we aimed to define the value of a machine learning algorithm to distinguish (1) between the cortical EEG response to no light or light stimulation and (2) between the cortical EEG response to red or green light stimulation in awake volunteers with closed eyelids. The main outcome parameters were the area under the receiver operating characteristic curve (AUCROC) and the accuracy of the algorithm. Evaluation of the predictive value of single signal components and of the influence of sex on the predictive value of the algorithm were secondary study endpoints.

Data Processing and Statistical Analysis

All raw EEG signals collected during the experiment were stored as MATLAB files on a notebook. Standard pre-processing was applied: a 50 Hz notch filter suppressed power-line interference, and a band-pass filter with cut-off frequencies of 0.1 Hz and 100 Hz removed baseline drift and high-frequency noise. Subsequently, the recordings were sliced and aligned based on the recorded switching points (the initiation of every single flash), which were captured using an electrical signal from the Raspberry Pi (Figure 1). We obtained 3,000 single trials per subject: 1,500 for red light stimulation and 1,500 for green light stimulation. Based on the methodology of our proof-of-concept study [3], single sweeps were averaged across ten colour stimulations to increase signal quality. In addition, 900 sweeps during which no stimulation took place were recorded in every subject.
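A minimal SciPy sketch of the described pre-processing chain is given below. The sampling rate, epoch length, and all variable names are assumptions; only the 50 Hz notch, the 0.1-100 Hz band-pass, the trigger-based slicing, and the averaging across ten stimulations are taken from the text.

```python
# Hedged sketch of the described pre-processing, using NumPy and SciPy.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 1200  # Hz, assumed sampling rate of the amplifier

def preprocess(raw):
    """Apply the 50 Hz notch and 0.1-100 Hz band-pass to a 1-D EEG trace."""
    b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)               # power-line suppression
    x = filtfilt(b, a, raw)
    b, a = butter(4, [0.1, 100.0], btype="bandpass", fs=FS)  # drift + HF noise
    return filtfilt(b, a, x)

def epochs(signal, triggers, length):
    """Slice the trace into single sweeps aligned to the flash onsets."""
    return np.stack([signal[t:t + length] for t in triggers])

def block_average(sweeps, block=10):
    """Average consecutive sweeps of one colour to increase signal quality."""
    n = (len(sweeps) // block) * block
    return sweeps[:n].reshape(-1, block, sweeps.shape[1]).mean(axis=1)
```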
X-gradient boosting models were then used (1) to classify the EEG response to no light stimulation vs. (red or green) light stimulation, and (2) to classify the EEG response to red vs. green light stimulation. For each of the two classifications, we tested three different scenarios: training and prediction across all participants (all), training and prediction in an individual participant (individual), and training across all but one participant with prediction performed in the participant left out of the training set (one out) [6]. For the first two scenarios (all, individual), we randomly split the cohort of measurements into training, validation, and test datasets. The training and validation datasets were used for model training (80% and 10% of the data, respectively), whereas the test dataset (10% of the data) was used to test the algorithm. For the third scenario (one out), the training and test datasets were not randomly selected but split by subject. Accordingly, we carried out a 94-fold cross-validation using 93 subjects as the training set and one subject as the independent test set. The AUCROC and accuracy were determined for all measurements in the test datasets.
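The one-out scenario corresponds to leave-one-subject-out cross-validation. The sketch below illustrates it with the xgboost and scikit-learn libraries, assuming the averaged sweeps as feature rows in X, binary labels in y, and one subject identifier per row in groups; the model hyper-parameters were not reported and are left at library defaults.

```python
# Illustrative sketch of the "one out" scenario with XGBoost.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut
from xgboost import XGBClassifier

def leave_one_subject_out(X, y, groups):
    """94-fold CV: train on 93 subjects, test on the one left out."""
    aucs, accs = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        model = XGBClassifier(eval_metric="logloss")      # default settings
        model.fit(X[train_idx], y[train_idx])
        proba = model.predict_proba(X[test_idx])[:, 1]    # P(class 1)
        aucs.append(roc_auc_score(y[test_idx], proba))
        accs.append(accuracy_score(y[test_idx], proba > 0.5))
    return np.mean(aucs), np.mean(accs)
```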

3. Results

Ninety-four Caucasian subjects [male sex: 41 (43.6%); age, 28±12 years; body mass index, 23.4±3.5 kg/m²] were included in the experiment. All participants completed the entire experimental protocol. No technical errors or adverse events occurred. There were no missing values in the dataset. All participants tolerated the stimulation well, and no measurements had to be interrupted due to physical problems.
The machine learning algorithm had a very high predictive value and accuracy in differentiating between no light stimulation and any light stimulation (Figure 2): AUCROCall: 0.96; accuracyall: 0.94; AUCROCindividual: 0.96±0.05; accuracyindividual: 0.94±0.05; AUCROConeout: 0.98±0.04; accuracyoneout: 0.96±0.04. Similarly, the machine learning algorithm was highly predictive and accurate in distinguishing between the cortical responses to green and red colour stimulation (AUCROCall: 0.97; accuracyall: 0.91; AUCROCindividual: 0.98±0.04; accuracyindividual: 0.96±0.04; AUCROConeout: 0.96±0.05; accuracyoneout: 0.93±0.06) (Figure 3).
The predictive value and accuracy of both classification tasks were comparable between male and female participants (Figure 4). All components of the EEG signal were significant features informing the machine learning algorithm.

4. Discussion

In this experimental study, machine learning algorithms achieved an excellent predictive value and a very high accuracy in detecting VEPs in the EEG signal and in distinguishing between the VEP responses to red and green colour stimulation in awake volunteers with closed eyelids. Although the AUCROC was highest when an algorithm was trained on the same individual in whom the prediction was made, the algorithms' predictive value and accuracy remained very high when they were trained on 93 study participants and applied to one individual not included in the training and validation set.
Similar to our proof-of-concept study, we chose a machine learning method to build the prediction algorithms. However, unlike in the previous study, in which a single-layer neural network was applied [3], we used the tree-based X-gradient boosting model to analyse the raw EEG data. Gradient boosting models are effective in analysing biosignals and allow simpler algorithms to be built from complex raw signals than neural networks do. When comparing the predictive value and accuracy of the machine learning algorithms in this study with those reported in our previous study, the gradient boosting model exhibited a higher AUCROC and accuracy than the neural network when classifying the VEP response to green or red colour stimulation in one individual when the algorithm was trained on the remaining study population (AUCROC: 0.96±0.05; accuracy: 0.93±0.06 vs. AUCROC: 0.71±0.12; accuracy: 0.71±0.12) [3].
All components of the cortical EEG signal were important in informing the machine learning algorithms. Future work needs to elucidate whether feature reduction of the raw dataset, for example by principal component or wavelet analysis, could focus the analysis on fewer components of the raw signal and thereby permit building more complex machine learning algorithms with even better classification abilities; a sketch of this idea follows below.
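As a sketch of this idea, the pipeline below reduces each averaged sweep to the principal components explaining 95% of the variance (an illustrative threshold, not from the paper) before classification; a wavelet decomposition (e.g., via PyWavelets) would be a drop-in alternative.

```python
# Hypothetical feature-reduction pipeline: PCA followed by the same
# gradient boosting classifier used in the study.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

# Keep the components explaining 95% of the variance, then classify
# on the reduced representation of each averaged sweep.
model = make_pipeline(PCA(n_components=0.95),
                      XGBClassifier(eval_metric="logloss"))
# Usage (X_train, y_train, X_test are assumed data arrays):
# model.fit(X_train, y_train); model.predict_proba(X_test)
```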
Three aspects of our study are novel and deserve to be highlighted when comparing this experiment to previous research on chromatic VEPs [7,8,9,10]. First, similar to our proof-of-concept study, the algorithm applied in the present experimental set-up could reliably classify the EEG response to visual stimulation using only very few VEPs (<10 VEPs at 2 Hz). This allowed for VEP interpretation at intervals as short as <5 seconds. In contrast to standard, intermittent VEP interpretation by either neurophysiologists or other machine learning models [11], our algorithm thus allows for nearly continuous VEP analysis. Despite the extremely short intervals required for signal analysis, the algorithms could reliably classify the VEP response even in subjects on which they had not previously been trained.
Second, the machine learning algorithms in this study could not only distinguish between no light and any light stimulation but also reliably classify the cortical VEP response to green and red light stimulation. The ability of the algorithms to correctly differentiate between VEP responses in a binary and qualitative fashion is likely to increase their sensitivity to detect even subtle functional changes in the optic pathway. Accordingly, chromatic VEPs have been reported to be more sensitive than standard VEPs in detecting colour vision deficiencies in infants [5], short-term variations in blood glucose levels in diabetic patients [12], various forms of optic neuropathy [7,8], acquired deficiencies in colour vision [13], and early visual disturbances in patients with Parkinson's disease [9].
Third, visual stimulation in our experiment was performed while study subjects had their eyelids closed. This is an essential difference from standard VEP practice, in which patients are examined with eyelids open, and it makes the technique particularly interesting for intraoperative use in the anaesthetised patient. During surgery, almost continuous VEP analysis could, for example, be advantageous in accurately monitoring the integrity of the optic pathways during neurosurgery [14] or prolonged prone positioning [15]. Considering the algorithm's ability to differentiate between red and green light stimulation, it appears worthwhile to evaluate its usefulness as a depth-of-anaesthesia monitor [16]. One could hypothesise that the cortical response to green and red light stimulation is lost at more superficial anaesthetic depths than the cortical response to no or any light stimulation.
The sex of the study participants influenced neither the predictive value nor the accuracy of the machine learning algorithms. As our analysis included only young, healthy volunteers, the results cannot be extrapolated to elderly patients [17,18] or those with relevant comorbidities. As surgical procedures are commonly performed in older adults and patients suffering from relevant comorbidities [17], future experiments need to test the performance of our algorithm in these populations, too. In addition, our study cohort did not include subjects with red-green colour vision deficiency, a common condition among Caucasian men with a prevalence of up to 8% [19]. Future studies need to determine whether red-green colour vision deficiency influences the predictive value of these machine learning-based algorithms.
Further limitations need to be considered when interpreting the findings of our study. Although we used standard anatomical landmarks for electrode placement [20] and ensured a low skin impedance before measurements were taken, we did not investigate whether further quality-assurance steps for electrode mounting are necessary to achieve a better raw EEG signal for analysis. In addition, we chose green and red light for chromatic VEP stimulation based on previous neurophysiological findings [21] and on simple pilot tests analysing which light stimulations produced the most distinct cortical VEP responses. Therefore, we cannot exclude that colours other than red and green might have resulted in an even better classification ability of the machine learning algorithms.

5. Conclusions

In conclusion, machine learning algorithms could almost continuously and reliably differentiate between the cortical EEG responses to no light or any light stimulation as well as between green or red colour stimulation using VEPs in awake male and female volunteers when their eyes were closed. Our findings may open new possibilities for the use of VEPs in the intraoperative setting.

Author Contributions

Conceptualization, S.K., J.M., C.B., M.B. and M.D.; methodology, J.M., M.B. and C.B.; software, J.M. and C.B.; validation, C.S., L.K. and M.D.; formal analysis, J.M.; investigation, S.K., C.S. and L.K.; resources, J.M.; data curation, C.B.; writing—original draft preparation, S.K., C.B., M.B., C.S., L.K., M.D. and J.M.; writing—review and editing, M.D.; visualization, C.D.; supervision, J.M., M.D. and M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Johannes Kepler University (protocol code 1201/2020, date of approval: 11/04/2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Anonymized data not published within this article will be made available upon request from any qualified investigator.

Acknowledgments

The authors wish to express their sincere thanks to Mr. Mathias Fleischer, who designed and built the goggles, as well as to all participants who volunteered to take part in this experiment.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kumar, A.; Bhattacharya, A.; Makhija, N. Evoked Potential Monitoring in Anaesthesia and Analgesia: Evoked Potential Monitoring. Anaesthesia 2000, 55, 225–241. [CrossRef]
  2. Chung, S.-B.; Park, C.-W.; Seo, D.-W.; Kong, D.-S.; Park, S.-K. Intraoperative Visual Evoked Potential Has No Association with Postoperative Visual Outcomes in Transsphenoidal Surgery. Acta Neurochir. (Wien) 2012, 154, 1505–1510. [CrossRef]
  3. Böck, C.; Meier, L.; Kalb, S.; Vosko, M.R.; Tschoellitsch, T.; Huemer, M.; Meier, J. Machine Learning Based Color Classification by Means of Visually Evoked Potentials. Appl. Sci. 2021, 11, 11882. [CrossRef]
  4. Nakajima, A.; Ichikawa, H.; Nakagawa, O.; Majima, A.; Watanabe, M. Ishihara Test in Color-Vision Defects*. Am. J. Ophthalmol. 1960, 49, 921–929. [CrossRef]
  5. Tekavčič Pompe, M.; Stirn Kranjc, B.; Brecelj, J. Chromatic VEP in Children with Congenital Colour Vision Deficiency: Chromatic VEP in Colour Deficient Children. Ophthalmic Physiol. Opt. 2010, 30, 693–698. [CrossRef]
  6. Cawley, G.C.; Talbot, N.L.C. On Over-Fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation. J. Mach. Learn. Res. 2010, 11, 2079–2107.
  7. Tekavčič Pompe, M.; Perovšek, D.; Šuštar, M. Chromatic Visual Evoked Potentials Indicate Early Dysfunction of Color Processing in Young Patients with Demyelinating Disease. Doc. Ophthalmol. 2020, 141, 157–168. [CrossRef]
  8. Yu, Y.; Shi, B.; Cheng, S.; Liu, Y.; Zhu, R.; You, Y.; Chen, J.; Pi, X.; Wang, X.; Jiang, F. Chromatic Visual Evoked Potentials Identify Optic Nerve Dysfunction in Patients with Graves’ Orbitopathy. Int. Ophthalmol. 2022, 42, 3713–3724. [CrossRef]
  9. Sartucci, F.; Porciatti, V. Visual-Evoked Potentials to Onset of Chromatic Red-Green and Blue-Yellow Gratings in Parkinson’s Disease Never Treated With L-Dopa. J. Clin. Neurophysiol. 2006, 23, 431–436. [CrossRef]
  10. Sutterer, D.W.; Coia, A.J.; Sun, V.; Shevell, S.K.; Awh, E. Decoding Chromaticity and Luminance from Patterns of EEG Activity. Psychophysiology 2021, 58, e13779. [CrossRef]
  11. Klistorner, S.; Eghtedari, M.; Graham, S.L.; Klistorner, A. Analysis of Multifocal Visual Evoked Potentials Using Artificial Intelligence Algorithms. Transl. Vis. Sci. Technol. 2022, 11, 10. [CrossRef]
  12. Schneck, M.E.; Fortune, B.; Switkes, E.; Crognale, M.; Adams, A.J. Acute Effects of Blood Glucose on Chromatic Visually Evoked Potentials in Persons With Diabetes and in Normal Persons. Invest. Ophthalmol. Vis. Sci. 1997, 38.
  13. Crognale, M.A.; Duncan, C.S.; Shoenhard, H.; Peterson, D.J.; Berryhill, M.E. The Locus of Color Sensation: Cortical Color Loss and the Chromatic Visual Evoked Potential. J. Vis. 2013, 13, 15. [CrossRef]
  14. Gupta, M.; Ireland, A.C.; Bordoni, B. Neuroanatomy, Visual Pathway. In StatPearls; StatPearls Publishing: Treasure Island (FL), 2024.
  15. Soffin, E.M.; Emerson, R.G.; Cheng, J.; Mercado, K.; Smith, K.; Beckman, J.D. A Pilot Study to Record Visual Evoked Potentials during Prone Spine Surgery Using the SightSaverTM Photic Visual Stimulator. J. Clin. Monit. Comput. 2018, 32, 889–895. [CrossRef]
  16. Luo, Y.; Regli, L.; Bozinov, O.; Sarnthein, J. Clinical Utility and Limitations of Intraoperative Monitoring of Visual Evoked Potentials. PLOS ONE 2015, 10, e0120525. [CrossRef]
  17. Crognale, M.A. Development, Maturation, and Aging of Chromatic Visual Pathways: VEP Results. J. Vis. 2002, 2, 2. [CrossRef]
  18. Crognale, M.A.; Page, J.W.; Fuhrel, A.A. Aging of the Chromatic Onset Visual Evoked Potential: Optom. Vis. Sci. 2001, 78, 442–446. [CrossRef]
  19. Birch, J. Worldwide Prevalence of Red-Green Color Deficiency. J. Opt. Soc. Am. A 2012, 29, 313. [CrossRef]
  20. Nunez, V.; Shapley, R.M.; Gordon, J. Nonlinear Dynamics of Cortical Responses to Color in the Human cVEP. J. Vis. 2017, 17, 9. [CrossRef]
  21. Perry, N.W.; Childers, D.G.; Falgout, J.C. Chromatic Specificity of the Visual Evoked Response. Science 1972, 177, 813–815. [CrossRef]
Figure 1. Experimental set-up (A) and stimulation sequence (B).
Figure 2. Violin plots of the area under the receiver operating characteristic curve (AUCROC) and accuracy for the prediction model including the entire study population (all), individual participants (individual), and all participants except one (one out) following no or any colour stimulation.
Figure 3. Violin plots of the area under the receiver operating characteristic curve (AUCROC) and accuracy for the prediction model including the entire study population (all), individual participants (individual), and all participants except one (one out) following green or red light stimulation.
Figure 4. Violin plots of the area under the receiver operating characteristic curve (AUCROC) and accuracy for both prediction models including the entire study population in females and males following no or any light stimulation, as well as following green or red light stimulation.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.