Preprint
Article

Design of a New Digital Cognitive Screening Tool on Tablet: AlzVR Project


Submitted:

29 April 2024

Posted:

01 May 2024

Abstract
Introduction: Alzheimer's disease (AD) is the leading cause of dementia worldwide, with no curative treatment currently available. Facing an increasing prevalence and its associated costs, AD represents a public health challenge. Usual diagnostic methods still rely on extended interviews and paper tests administered by an external examiner. We aim to create a novel, quick cognitive-screening tool on a digital tablet. Methods: This program, built and edited with Unity®, runs on Android® on the Samsung Galaxy Tab S7 FE®. Composed of seven tasks inspired by the Mini-Mental Status Examination and the Montréal Cognitive Assessment, it covers several cognitive functions. The application works fully offline, guarantees the uniqueness of identifiers when data are pooled from several sites, safeguards the confidentiality of patient information, and lets each site manager consult the datasets of their own site. We performed a preliminary usability assessment among healthy subjects. Results: 24 healthy participants were included, with a final F-SUS score rated "excellent". Participants perceived the tool as simple to use and completed the test in a mean time of 142 seconds. No technical errors occurred. Conclusion: These preliminary results suggest that our new digital tablet assessment enables short cognitive screening.
Keywords: 
Subject: Computer Science and Mathematics - Software

1. Introduction

Alzheimer's disease (AD) is the leading cause of neurodegenerative decline, affecting millions of people worldwide at considerable cost to countries [1]. In AD, patients progressively lose their cognitive abilities, and behavioural disturbances can occur. With no effective treatment currently available, the loss of memory and autonomy becomes a substantial burden for caregivers and families. AD management is a global public health challenge for health systems facing a constantly increasing prevalence in ageing populations.
Early screening of cognitive decline leads to better and earlier support for patients and their families. Unfortunately, general practitioners (GPs) do not always have enough time to perform initial cognitive explorations. They refer their patients to specialized centres, where waiting times for appointments can be long, delaying diagnosis as well as symptomatic and social measures. These labelled memory consultations are primarily available in hospitals, and the diagnostic process still relies on paper tests administered by an external examiner.
Numerous tests exist to assess cognition (global or targeted evaluation), but the Mini-Mental Status Examination (MMSE) [2] and the Montréal Cognitive Assessment (MoCA) [3] are widely used in primary screening, and most professionals know them. Both tests share several questions and explore approximately the same cognitive functions, although the MoCA evaluates frontal deficits more precisely. Both cover several cognitive functions quickly and can be repeated during patient follow-up. The MoCA appears to have a higher sensitivity (Se) than the MMSE in differentiating healthy subjects from demented patients, whereas the MMSE retains a higher specificity (Sp) [4]. Nevertheless, the two tests correlate well [5,6].
Besides these classical evaluations, numerous authors have developed new screening tools on digital tablets that correlate well with the usual tests. Although several recent systematic reviews have been published [7,8,9], only one meta-analysis, with good results, focused on digital drawing tasks [10]. In these studies, the digital tests ran either on tablets or on simple computer touch screens and were self-administered or administered by an examiner. Unfortunately, few of these programs were available in French [11,12,13], limiting their use with francophone patients.
Finally, these tools have not moved beyond the experimental stage and are not used in daily practice by health practitioners, even though many such applications are already available on commercial platforms such as Apple iTunes or Google Play Store [14].
However, the usability of digital tablets has been broadly demonstrated in large populations [15], and access to new technologies keeps improving, with most patients owning a tablet or a smartphone.
Physicians would benefit from such innovative tools to perform early cognitive assessments in primary care before referring their patients for specialized consultations. A digital tablet assessment should be short, reliable, and understandable for patients, with cognitive tasks reproducing classical questions from the usual paper tests.
Facing this lack of available digital assessments, we developed the AlzVR project, a multimodal digital program for cognitive screening. We previously created and published an immersive assessment on Oculus Quest® [16,17]. We now aim to explore a new modality: a self-administered questionnaire in a touchscreen-based application based on the MMSE and MoCA.

2. Materials and Methods

We constructed our program using Unity® (v.2021.3.11) for Android® (tablet). AlzVR is composed of three scenes: the welcome menu, the playing scene, and the F-SUS questionnaire.

2.1. Welcome Menu

When launching the application, there are three possibilities (Figure 1):
  • Supervised experience ("Expérience encadrée"): medical questionnaire and cognitive assessment;
  • Quick experience ("Expérience rapide"): cognitive assessment only;
  • Results ("Résultats"): results visualization.
The application consists of 3 main scenes:
  • The "Menu" scene includes the main menu, the medical questionnaire, and results consultation;
  • The "InGame" scene contains the tutorial and all the user's tasks;
  • The "Survey" scene collects user feedback, which only the administrator can consult.

2.2. Medical Questionnaire

The supervised experience begins with a medical questionnaire (Figure 2) collecting socio-demographic items (name, age, type of residence) and medical background (diagnosis, previous cognitive tests, treatments, and sensory loss).

2.3. Anonymization

After the final validation of the medical questionnaire, the program automatically generates an anonymized number from the date and time, down to the second, without including the patient's initials. A typical anonymized number looks like YYYYMMDDHHMMSS. This process safeguards the confidentiality of patient information (personal data) and allows a subsequent blinded analysis. In the "quick experience", an "A" precedes the anonymized number, as in A-YYYYMMDDHHMMSS. In the "supervised experience", a letter identifying the medical centre, if any, can be automatically added before the number.
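The identifier scheme above can be sketched as follows. This is an illustrative Python sketch, not the application's Unity/C# code; the function name, parameters, and the "-" separator after the site letter are assumptions.

```python
from datetime import datetime

def make_anonymous_id(mode: str = "supervised", site_letter: str = "") -> str:
    """Build an anonymized identifier from the current date and time,
    down to the second (YYYYMMDDHHMMSS), with no patient initials.
    `mode` and `site_letter` are hypothetical parameter names."""
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")  # 14 digits
    if mode == "quick":
        return f"A-{stamp}"            # quick experience: "A" prefix
    if site_letter:
        return f"{site_letter}-{stamp}"  # supervised: optional site letter
    return stamp
```

Because the identifier is derived from the completion timestamp alone, two records started in the same second at different sites are disambiguated only by the site letter, which is why the prefix matters for multi-site pooling.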

2.4. Playing Scene

2.4.1. Experiences Architecture

The main module of the "InGame" scene, the GameManager, references the list of nine tasks to be performed. Although each task has a different objective, each provides textual and audio instructions and then proposes zero, one, or several answers in the form of images or text. The "JExperience" parent class therefore groups all the attributes and methods common to all tasks, while the specific features of each task required new classes ("JExpMonoChoice", "JExpChoiceTown", "JExpImages") inheriting from "JExperience" (Figure 3).
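The inheritance structure can be illustrated with a minimal Python sketch. The class names come from the paper; the attributes and method bodies are assumptions, since the actual implementation is in Unity/C# and is not shown.

```python
class JExperience:
    """Parent class: attributes and methods common to every task."""
    def __init__(self, text_instruction: str, audio_clip: str):
        self.text_instruction = text_instruction
        self.audio_clip = audio_clip  # oral instruction delivered by the program

    def play_instruction(self) -> str:
        # In Unity this would play the audio clip and display the text.
        return self.text_instruction

class JExpMonoChoice(JExperience):
    """Task with a single expected answer (e.g. clock recognition)."""
    def __init__(self, text_instruction, audio_clip, correct_answer):
        super().__init__(text_instruction, audio_clip)
        self.correct_answer = correct_answer

    def check(self, answer) -> bool:
        return answer == self.correct_answer

class JExpImages(JExperience):
    """Task where the user selects several images (e.g. three-words recall)."""
    def __init__(self, text_instruction, audio_clip, correct_set):
        super().__init__(text_instruction, audio_clip)
        self.correct_set = set(correct_set)

    def check(self, answers) -> bool:
        # Order-independent comparison of the selected images
        return set(answers) == self.correct_set
```

The GameManager would then iterate over a list of `JExperience` instances, calling the shared interface without knowing each task's concrete subclass.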

2.4.2. General Aspect

The visual design is kept simple to avoid cognitive overload, and all scenes appear on a uniform blue background.
In every task, the user selects answers by touching one or several buttons. The buttons are large, making them easy to touch, and at most eight appear on screen at once, ensuring good visibility.
All the pictures implemented into the scenes are royalty-free.

2.4.3. Answer Modality

Each time the user selects an answer, a confirmation screen appears with "Yes" ("Oui") and "No" ("Non") buttons. This step avoids inattentive answers and validates the choice (Figure 4). "Yes" leads to the next question; "No" gives a new chance to answer.
Each exercise lasts at most 30 seconds. If the user does not answer in time, the next question appears automatically and the answer is counted as a TimeOut. Choosing "No" at the confirmation step resets the timer, but only three attempts are allowed.
In every case (success or failure), a "Well done!" ("Bravo!") message congratulates the user (Figure 5). This message keeps the atmosphere cheerful and may reduce errors caused by stress or fear on subsequent tasks.
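The answer/confirmation loop described in this subsection can be sketched as follows. This is a hypothetical Python sketch of the control flow only (the `get_answer` callback and return values are assumptions, not the application's API).

```python
MAX_ATTEMPTS = 3      # three "Non" confirmations at most
TIME_LIMIT_S = 30.0   # 30 seconds per attempt

def run_question(get_answer):
    """`get_answer` is a hypothetical callback that runs one attempt and
    returns (answer, elapsed_seconds, confirmed). Choosing "Non" at the
    confirmation screen ends the attempt unconfirmed and resets the timer."""
    for _ in range(MAX_ATTEMPTS):
        answer, elapsed, confirmed = get_answer()
        if elapsed > TIME_LIMIT_S:
            return ("timeout", None)   # no answer within the time limit
        if confirmed:                  # user pressed "Oui"
            return ("answered", answer)
        # user pressed "Non": timer resets, next attempt starts
    return ("timeout", None)           # attempts exhausted
```

Note that in both exits, the user still sees the congratulation message; only the recorded outcome differs.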

2.4.4. Training Task

Although many older adults have presumably already used a smartphone or tablet, we decided to evaluate digital abilities with two training tasks before the cognitive questionnaire. The user must touch shapes on the screen. These two short exercises ensure that the user understands how the tablet works (Figure 6). Failing the training tasks stops the assessment, and the test cannot continue.

2.4.5. Cognitive Questionnaire

If the training tasks are successful, the cognitive assessment begins. It comprises seven tasks derived from the MMSE and the MoCA, selected from multiple cognitive fields to provide a varied assessment (Table 1).
The first task is the "three words" test. In the MMSE or MoCA, the examiner orally delivers three words, and the patient must repeat them (immediately and after a delay). To keep the questionnaire self-administered, the program still delivers the words orally (sound only), but the oral repetition is replaced by choosing 3 images among eight (Figure 7). There is still an immediate and a delayed recall. The three words belong to different semantic fields (animal, vehicle, and vegetable).
The clock recognition task (Figure 8) is inspired by the clock drawing test [18], in which the patient draws a circle, the numbers, and the hands indicating a precise hour (11h10, for example). We created a novel task proposing three different clocks: the correct one (10h30), the symmetric clock (05h50), and a false clock (Figure 8). The oral instruction gives the hour to find ("select the clock indicating …"), and the patient selects it on the screen. Two series of clocks are followed by the delayed recall of the three words.
To explore spatial and temporal orientation, we chose a simple format: an oral question ("What is the current season?", "Select the flag of the country we are in") with several pictures as answers. For spatial orientation, the country is represented by its flag, limiting written instructions (Figure 9). Town names are presented as classical French road signs (Figure 10). The correct town can be changed depending on the assessment site.
The temporal orientation tasks are similar: the user selects the current season (Figure 11), each option showing the season's name and a typical image. Considering the varying dates of season changes, we allowed a 48-hour margin around the change.
In the year test, we introduced several confusing dates (minus one year, minus one century). All dates end with the same digit as the current year (Figure 12).
In the MoCA, abstraction is tested through the similarity between two words (for example, an orange and a banana are both fruits). In our abstraction test, the user completes a series of fruits with a third picture. A confusing element (a picture from the three-word test) is among the four choices (Figure 13).

2.5. Results Menu

The results menu allows simple visualization of the patient's score after the cognitive questionnaire (Figure 14). This section is password-protected and identifies records only by their anonymized number (patient ID). Three outcomes are possible: "X" (failure), "V" (success), and "?" (timeout).

2.6. F-SUS Questionnaire

After the cognitive tasks, the application presents the F-SUS questionnaire [20], the French translation of the System Usability Scale [19] (Figure 15). It evaluates global satisfaction through ten questions with a five-point response scale from 1 (strongly disagree) to 5 (strongly agree). F-SUS results do not appear in the results menu and are stored directly.

2.7. Data Storage

All data are exported and stored in CSV format, allowing easy processing. Personal information is stored separately from the other results (tasks and F-SUS). A blinded analysis is therefore possible using only the results files, which contain anonymized data (Figure 16).
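The two-file scheme can be sketched as follows: personal data and anonymized results live in separate CSV files, linked only by the anonymized ID. This is an illustrative Python sketch; the file names and columns are assumptions, not the application's actual output format.

```python
import csv
import os

def save_record(directory: str, anon_id: str, name: str, age: int, scores: list):
    """Append one participant: identifying data to personal.csv,
    anonymized task results to results.csv (hypothetical file names)."""
    with open(os.path.join(directory, "personal.csv"), "a", newline="") as f:
        csv.writer(f).writerow([anon_id, name, age])
    with open(os.path.join(directory, "results.csv"), "a", newline="") as f:
        csv.writer(f).writerow([anon_id, *scores])  # e.g. "V", "X", "?"
```

A blinded analyst receives only `results.csv`; re-identification requires the separately held personal file plus the shared anonymized ID.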

2.8. Preliminary Usability Assessment

2.8.1. Study Population

We carried out an experimental, qualitative study at the IBISC Laboratory (University of Évry-Paris Saclay, Department of Sciences and Technologies) among volunteers (staff and students) to assess preliminary usability according to the ISO 9241-11 norm [21] and the Nielsen method [22]. The tablet was a Samsung Galaxy Tab S7 FE® (315.0 mm screen, 2560x1600) running Android 11 (One UI 3.1 user interface).
The inclusion criteria were age over 18 years and understanding of French; the exclusion criteria were age under 18 years, no understanding of French, and uncorrected visual or hearing loss.
Participants were recruited through university mailing lists and advertisements posted on the premises.

2.8.2. Stages

Participants successively and anonymously completed several stages:
  • Pre-questionnaire: an online questionnaire collecting socio-demographic data (age, profession, sex) and digital habits (smartphone and tablet use);
  • Quick experience;
  • F-SUS questionnaire;
  • Post-questionnaire: an online questionnaire collecting free comments about the program.

2.8.3. Data Collection and Exploitation

During the tests, we collected the following parameters: answer (success, fail), number of trials, and response time (ms).
As the primary endpoint for usability, we chose the total F-SUS score, calculated according to the authors' recommendations [19,20], with a target of 85.5%, a score considered "excellent".
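For reference, the standard SUS scoring rule (Brooke, 1996), which the F-SUS follows, can be written as a short function. The code below implements that published formula; it is a sketch, not the application's own scoring code.

```python
def sus_score(responses: list) -> float:
    """Standard SUS scoring: ten items rated 1-5; odd-numbered items
    contribute (rating - 1), even-numbered items contribute (5 - rating),
    and the sum is multiplied by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

With this scale, the study's target of 85.5 and the observed 89.24 both sit in the top band of the 0-100 range.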
All data were blinded, collected, and analysed using the anonymized numbers of participants.

3. Results

3.1. Population

We included 24 participants between 2022/09/27 and 2022/10/12. Their socio-demographic characteristics are presented in Table 2 and their digital habits in Table 3.

3.2. Success Rate

All participants completed the cognitive tasks. The overall success rate was 97.4% (187 correct answers out of 192). The only tasks with failures were the clock task (2 failures) and the season task (3 failures).

3.3. Time of Completion

The average test administration time (excluding training tasks) was 141.47 (± 18.77) seconds, and details of task completion times are presented in Table 4.

3.4. F-SUS questionnaire

Ninety-six percent of participants completed the F-SUS questionnaire (one person left the application before completing it), and the results for each question are presented in Table 5.
The overall score on the F-SUS questionnaire was 89.24%, which is considered "excellent."

3.5. General Remarks

In the post-questionnaire, we collected general opinions about the program. Users overwhelmingly found it easy to use. The negative remarks concerned the lack of fluidity of the oral instructions and tests judged too simple. User reviews are shown in Figure 17.

4. Discussion

Numerous paper tests assess cognition, either globally or for a specific function [23]. Meanwhile, several authors have studied the use of digital tablets to evaluate cognitive decline and deliver training tasks in healthy and cognitively impaired patients [24,25]. Despite these numerous and efficient digital tests [7], cognitive evaluations still rely on paper tests and an external examiner. Facing an increasing prevalence of patients in the coming decades [1] and increasingly precise diagnostic criteria (biological, functional) [26], simple, quick, and high-performing tools are needed to help practitioners screen for cognitive decline. We chose to create a new tool in French inspired by two widely used and recommended tests [27,28]: the MMSE and the MoCA, together with the clock drawing test (CDT, integrated into the MoCA). In the usual tests, the patient answers most questions orally to the examiner. We chose not to use speech recognition because of its limitations [29]: incorrect speech interpretation would have produced false results. Excluding oral answers, however, prevents a global language evaluation as in the MMSE or MoCA.
The CDT is also widely used in daily practice and belongs to quick screening tools such as the Codex [30] or the Mini-Cog [31]. Müller et al. proposed a digital clock drawing task using a stylus [32] that correlates well with paper tests. This transposition still requires external human validation or automatic image analysis, as proposed by Park et al. [33]. Because we wanted a simple, short task with no external analysis, we switched from a drawing task to a picture recognition task. Drawing a clock and placing its hands requires visuo-spatial abilities and executive functions; the technical constraints of a self-administered questionnaire (few written instructions, no external validation, simple orders) may therefore bias the cognitive evaluation by under-assessing executive functions.
Finally, our assessment does not evaluate writing abilities, because we wanted neither a stylus nor additional human validation, even though dysgraphia is a known symptom of AD [34]. Despite its varied questions across several cognitive fields, our new assessment thus has limitations that must be kept in mind and may require upgrades in future versions.
Before evaluating our digital tool in an elderly population, we performed a short usability assessment in a healthy population (without cognitive decline) among university users.
Completion time should be short, and the mean time observed in our study (142 seconds) is a good result. Moreover, usability reached a global score of 89.24%, surpassing the initial objective of 85.5% and close to 90.9% ("best imaginable").
The participants globally perceived the test as easy to use, consistent with the F-SUS scores (questions 3, 5, 7, and 8). This is a positive result, since participants did not know the cognitive tests and were discovering them for the first time. These preliminary results are satisfying, but the study population is a considerable limitation. Our participants were young (mean age 41.88 years), healthy, and used to touch screens. This mean age is far below that of AD patients (> 60 years) [35], which may explain the good results observed, and the results may not transpose to an elderly population with cognitive decline and little tablet experience. Note, however, that our population made limited use of tablets, with fewer than 50% using one over the past year (Table 3). Some questions were perceived negatively as "too simple" or "too slow", likely due to the young age of our participants. AlzVR should therefore be tested in an older population, both for usability and for its accuracy in discriminating healthy from demented subjects.
Although the participants were healthy, we noted errors in the clock recognition task, probably due to the shapes of the hands, as mentioned in the free comments (Figure 17). Recent reports suggest that students increasingly struggle to read traditional clocks [36], and our two failing users were 24 years old. These difficulties also appear in the mean completion times (Table 4): the clock task shows the largest spread between minimum and maximum completion times. Season errors may be explained by the recent season change (summer/autumn) shortly before the start of the study (September 27); this task also showed a wide range of completion times.
When extracting the results from the tablet, we reported no errors in the CSV files. Data were easily exploitable and well anonymized.

5. Conclusion

We have developed a new digital cognitive screening tool with good preliminary feedback from a young, healthy population. The application could also be ported to smartphones to widen its diffusion and use. This preliminary study belongs to the global COGNUM-AlzVR study, which aims to evaluate the efficiency and relevance of two digital tablet programs for cognitive assessment in AD patients. The Committee for the Protection of People of Île-de-France approved the multicentric project in 2022, and the study began in April 2023 (NCT06032611).

Contributorship

Conceptualization, F.M., G.L., J.D., F.D., and S.O.; methodology, F.M., G.L., J.D., and F.D.; software, G.L, J.D., and F.D.; validation, F.M., and G.L.; formal analysis, F.M.; investigation, F.M., G.L., and F.D.; resources, G.L. and F.D.; data curation, G.L.; writing—original draft preparation, F.M., and F.D.; writing—review and editing, F.M., G.L., F.D., and S.O.; visualization, F.M., and G.L.; supervision, G.L. and S.O.; project administration, G.L. and S.O.; funding acquisition, none. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank all participants and the Génopole (Evry-Courcouronnes, France) for their partnership with IBISC Laboratory.

Data Availability Statement

The anonymized data supporting this study's findings are available on simple request from the corresponding author, S.O. The original data are not publicly available because they contain information that could compromise the privacy of research participants.

Conflicting Interests

The authors declare no conflict of interest and have no known competing financial or personal relationships that could be viewed as influencing the work reported in this paper. This work did not receive any grant from funding agencies in the public, commercial, or not-for-profit sectors.

Ethical approval

This work has been carried out in accordance with the Declaration of Helsinki of the World Medical Association, revised in 2013 for experiments involving humans. Data exploitation was anonymous, using an automatic participation number generated from the date and time of completion. The local University Paris-Saclay ethics committee approved all documents and protocols on 2022/07/07 (file 433). Informed consent was obtained from all subjects involved in the study. Participation was free, with no remuneration.

References

  1. International, A.D.; Guerchet, M.; Prince, M.; Prina, M. Numbers of people with dementia worldwide: An update to the estimates in the World Alzheimer Report 2015. 2020 Nov 30 [cited 2022 Jul 25]. Available from: https://www.alzint.org/resource/numbers-of-people-with-dementia-worldwide/.
  2. Folstein, M.F.; Folstein, S.E.; McHugh, P.R. ‘Mini-mental state’. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975, 12, 189–98. [Google Scholar] [CrossRef]
  3. Nasreddine, Z.S.; Phillips, N.A.; Bédirian, V.; Charbonneau, S.; Whitehead, V.; Collin, I.; et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005, 53, 695–9. [Google Scholar] [CrossRef]
  4. Ciesielska, N.; Sokołowski, R.; Mazur, E.; Podhorecka, M.; Polak-Szabela, A.; Kędziora-Kornatowska, K. Is the Montreal Cognitive Assessment (MoCA) test better suited than the Mini-Mental State Examination (MMSE) in mild cognitive impairment (MCI) detection among people aged over 60? Meta-analysis. Psychiatr Pol. 2016, 50, 1039–52. [Google Scholar] [CrossRef]
  5. Bergeron, D.; Flynn, K.; Verret, L.; Poulin, S.; Bouchard, R.W.; Bocti, C.; et al. Multicenter Validation of an MMSE-MoCA Conversion Table. J Am Geriatr Soc. 2017, 65, 1067–72. [Google Scholar] [CrossRef] [PubMed]
  6. Chua, S.I.L.; Tan, N.C.; Wong, W.T.; Allen Jr, J.C.; Quah, J.H.M.; Malhotra, R.; et al. Virtual Reality for Screening of Cognitive Function in Older Persons: Comparative Study. J Med Internet Res. 2019, 21, e14821. [Google Scholar] [CrossRef] [PubMed]
  7. Chan, J.Y.C.; Yau, S.T.Y.; Kwok, T.C.Y.; Tsoi, K.K.F. Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: A systematic review. Ageing Res Rev. 2021, 72, 101506. [Google Scholar] [CrossRef] [PubMed]
  8. Amanzadeh, M.; Hamedan, M.; Mahdavi, A.; Mohammadnia, A. Digital Cognitive Tests for Dementia Screening: A Systematic Review. 2022. [Google Scholar] [CrossRef]
  9. Tsoy, E.; Zygouris, S.; Possin, K.L. Current State of Self-Administered Brief Computerized Cognitive Assessments for Detection of Cognitive Disorders in Older Adults: A Systematic Review. J Prev Alzheimers Dis. 2021, 8, 267–76. [Google Scholar] [CrossRef]
  10. Chan, J.Y.C.; Bat, B.K.K.; Wong, A.; Chan, T.K.; Huo, Z.; Yip, B.H.K.; et al. Evaluation of Digital Drawing Tests and Paper-and-Pencil Drawing Tests for the Screening of Mild Cognitive Impairment and Dementia: A Systematic Review and Meta-analysis of Diagnostic Studies. Neuropsychol Rev. 2022, 32, 566–76. [Google Scholar] [CrossRef]
  11. Rai, L.; Boyle, R.; Brosnan, L.; Rice, H.; Farina, F.; Tarnanas, I.; et al. Digital Biomarkers Based Individualized Prognosis for People at Risk of Dementia: the AltoidaML Multi-site External Validation Study. Adv Exp Med Biol. 2020, 1194, 157–71. [Google Scholar]
  12. Liu, X.; Chen, X.; Zhou, X.; Shang, Y.; Xu, F.; Zhang, J.; et al. Validity of the MemTrax Memory Test Compared to the Montreal Cognitive Assessment in the Detection of Mild Cognitive Impairment and Dementia due to Alzheimer’s Disease in a Chinese Cohort. J Alzheimers Dis JAD. 2021, 80, 1257–67. [Google Scholar] [CrossRef] [PubMed]
  13. Wu, Y.H.; Vidal, J.S.; De Rotrou, J.; Sikkes, S.A.M.; Rigaud, A.S.; Plichart, M. Can a tablet-based cancellation test identify cognitive impairment in older adults? PLoS ONE [Internet]. 2017, 12. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L617460519&from=export. [CrossRef] [PubMed]
  14. Thabtah, F.; Peebles, D.; Retzler, J.; Hathurusingha, C. Dementia medical screening using mobile applications: A systematic review with a new mapping model. J Biomed Inform. 2020, 111. [Google Scholar] [CrossRef] [PubMed]
  15. Kortum, P.; Sorber, M. Measuring the Usability of Mobile Applications for Phones and Tablets. Int J Human–Computer Interact. 2015, 31, 518–29. [Google Scholar] [CrossRef]
  16. Maronnat, F.; Seguin, M.; Djemal, K. Cognitive tasks modelization and description in VR environment for Alzheimer’s disease state identification. In: 2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA). 2020. p. 1–7.
  17. Maronnat, F.; Davesne, F.; Otmane, S. Cognitive assessment in virtual environments: How to choose the Natural User Interfaces? Laval Virtual VRIC ConVRgence Proc 2022 [Internet]. 2022;1(1). Available from: https://hal.archives-ouvertes.fr/hal-03622384.
  18. Sunderland, T.; Hill, J.L.; Mellow, A.M.; Lawlor, B.A.; Gundersheimer, J.; Newhouse, P.A.; et al. Clock drawing in Alzheimer’s disease. A novel measure of dementia severity. J Am Geriatr Soc. 1989, 37, 725–9. [Google Scholar] [CrossRef] [PubMed]
  19. Brooke, J. SUS: A ‘Quick and Dirty’ Usability Scale. In Usability Evaluation In Industry; CRC Press, 1996; pp. 189–94. [Google Scholar]
  20. Gronier, G.; Baudet, A. Psychometric Evaluation of the F-SUS: Creation and Validation of the French Version of the System Usability Scale. Int J Human–Computer Interact. 2021, 37, 1571–82. [Google Scholar] [CrossRef]
  21. International Organization for Standardization I. ISO 9241-11:2018 [Internet]. 2018 [cited 2023 Feb 25]. Available from: https://www.iso.org/standard/63500.html.
  22. Valentin, A.; Lemarchand, C. La construction des échantillons dans la conception ergonomique de produits logiciels pour le grand public. Quel quantitatif pour les études qualitatives ? Trav Hum. 2010, 73, 261–90. [Google Scholar] [CrossRef]
  23. De Roeck, E.E.; De Deyn, P.P.; Dierckx, E.; Engelborghs, S. Brief cognitive screening instruments for early detection of Alzheimer’s disease: a systematic review. Alzheimers Res Ther. 2019, 11, 21. [Google Scholar] [CrossRef]
  24. Koo, B.M.; Vizer, L.M. Mobile Technology for Cognitive Assessment of Older Adults: A Scoping Review. Innov Aging. 2019, 3, igy038. [Google Scholar] [CrossRef]
  25. Wilson, S.A.; Byrne, P.; Rodgers, S.E.; Maden, M. A Systematic Review of Smartphone and Tablet Use by Older Adults With and Without Cognitive Impairment. Innov Aging. 2022, 6, igac002. [Google Scholar] [CrossRef]
  26. Dubois, B.; Villain, N.; Frisoni, G.B.; Rabinovici, G.D.; Sabbagh, M.; Cappa, S.; et al. Clinical diagnosis of Alzheimer’s disease: recommendations of the International Working Group. Lancet Neurol. 2021, 20, 484–96. [Google Scholar] [CrossRef] [PubMed]
  27. Janssen, J.; Koekkoek, P.S.; Moll van Charante, E.P.; Jaap Kappelle, L.; Biessels, G.J.; Rutten, G.E.H.M. How to choose the most appropriate cognitive test to evaluate cognitive complaints in primary care. BMC Fam Pract. 2017, 18, 101. [Google Scholar] [CrossRef]
  28. Pinto, T.C.C.; Machado, L.; Bulgacov, T.M.; Rodrigues-Júnior, A.L.; Costa, M.L.G.; Ximenes, R.C.C.; et al. Is the Montreal Cognitive Assessment (MoCA) screening superior to the Mini-Mental State Examination (MMSE) in the detection of mild cognitive impairment (MCI) and Alzheimer’s Disease (AD) in the elderly? Int Psychogeriatr. 2019, 31, 491–504. [Google Scholar] [CrossRef]
  29. Basak, S.; Agrawal, H.; Jena, S.; Gite, S.; Bachute, M.; Pradhan, B.; et al. Challenges and Limitations in Speech Recognition Technology: A Critical Review of Speech Signal Processing Algorithms, Tools and Systems. CMES-Comput Model Eng Sci [Internet]. 2023 [cited 2024 Apr 5];135(2). Available from: https://cdn.techscience.cn/ueditor/files/cmes/135-2/TSP_CMES_21755/TSP_CMES_21755.pdf.
  30. Belmin, J.; Pariel-Madjlessi, S.; Surun, P.; Bentot, C.; Feteanu, D.; Lefebvre des Noettes, V.; et al. The cognitive disorders examination (Codex) is a reliable 3-minute test for detection of dementia in the elderly (validation study on 323 subjects). Presse Medicale Paris Fr 1983. 2007, 36 9 Pt 1, 1183–90. [Google Scholar] [CrossRef] [PubMed]
  31. Borson, S.; Scanlan, J.M.; Chen, P.; Ganguli, M. The Mini-Cog as a screen for dementia: validation in a population-based sample. J Am Geriatr Soc. 2003, 51, 1451–4. [Google Scholar] [CrossRef] [PubMed]
  32. Müller, S.; Herde, L.; Preische, O.; Zeller, A.; Heymann, P.; Robens, S.; et al. Diagnostic value of digital clock drawing test in comparison with CERAD neuropsychological battery total score for discrimination of patients in the early course of Alzheimer’s disease from healthy individuals. Sci Rep. 2019, 9, 3543. [Google Scholar] [CrossRef] [PubMed]
  33. Park, I.; Lee, U. Automatic, Qualitative Scoring of the Clock Drawing Test (CDT) Based on U-Net, CNN and Mobile Sensor Data. Sensors. 2021, 21, 5239. [Google Scholar] [CrossRef]
  34. Onofri, E.; Mercuri, M.; Archer, T.; Rapp-Ricciardi, M.; Ricci, S. Legal medical consideration of Alzheimer’s disease patients’ dysgraphia and cognitive dysfunction: a 6 month follow up. Clin Interv Aging. 2016, 11, 279–84. [Google Scholar] [CrossRef]
  35. National Institute on Aging. What Are the Signs of Alzheimer’s Disease? [Internet]. [cited 2023 Mar 6]. Available from: https://www.nia.nih.gov/health/what-are-signs-alzheimers-disease.
  36. BBC news. Young can ‘only read digital clocks’. BBC News [Internet]. 2018 Apr 24 [cited 2023 Mar 6]. Available from: https://www.bbc.com/news/education-43882847.
Figure 1. Welcome menu view.
Figure 2. Medical Questionnaire.
Figure 3. Main classes' diagram.
Figure 4. Answer modalities.
Figure 5. Congratulations message.
Figure 6. Training task.
Figure 7. Three words test.
Figure 8. Clock recognition test.
Figure 9. Country test.
Figure 10. Town test.
Figure 11. Season test.
Figure 12. Year test.
Figure 13. Abstraction test.
Figure 14. Results menu view.
Figure 15. F-SUS questionnaire view.
Figure 16. Process of data storage and anonymization.
Figure 17. Word cloud of user reviews.
Table 1. Numerical cognitive tasks.

Paper test | Cognitive function explored | Numerical cognitive task
MoCA, MMSE | Auditory memory and attention | Three words task (immediate recall)
MoCA | Memory and attention | Clock recognition
MoCA, MMSE | Auditory memory and attention | Three words task (delayed recall)
MoCA, MMSE | Spatial orientation | Flags; Town
MoCA, MMSE | Temporal orientation | Season; Year
MoCA | Abstraction | Abstraction
Table 2. Socio-demographic characteristics of the population.

Characteristic | Population (n = 24)
Gender (F/M) | 10/14
Age* (years), m(sd) | 41.88 (13.11)
Age (years), [min-max] | [23-66]
Profession | 2 Student; 2 Engineer; 3 PhD student; 3 Technician; 5 Researcher; 9 Administrative
* m = mean; sd = standard deviation; min = minimum; max = maximum.
Table 3. Numerical habits.

Question* | Result
Have you ever used a smartphone? (%) | 100 Yes
If yes, for how many years? m(sd) | 12.37 (4.8)
If yes, during 2022, how often? (%) | 100 Everyday
Have you ever used a numerical tablet? (%) | 96 Yes; 4 No
If yes, for how many years? m(sd) | 8.25 (3.13)
If yes, during 2022, how often? (%) | 21.7 Everyday; 13.1 Once/week; 17.4 Once/month; 47.8 Once/year
* m = mean; sd = standard deviation.
Table 4. Task completion times.

Cognitive task | Completion time* m(sd) | [min-max]
Three words task (immediate recall) | 31.98 (2.62) | [25.16-38.60]
Clock recognition (2 series) | 36.43 (5.49) | [25.86-51.05]
Three words task (delayed recall) | 17.41 (2.83) | [11.04-22.65]
Flags | 12.23 (1.87) | [8.66-16.29]
Town | 10.91 (2.39) | [6.95-16.45]
Year | 10.41 (2.12) | [6.38-14.68]
Season | 11.73 (3.97) | [6.75-23.68]
Abstraction | 10.39 (2.67) | [6.23-15.19]
Total | 141.47 (18.77) | [97.64-183.58]
* Completion time in seconds; m = mean; sd = standard deviation; min = minimum; max = maximum.
Table 5. F-SUS questionnaire results.

Question | Result* m(sd)
I think that I would like to use this system frequently. | 2.70 (1.46)
I found the system unnecessarily complex. | 1.52 (0.71)
I thought the system was easy to use. | 4.91 (0.28)
I think that I would need the support of a technical person to be able to use this system. | 1 (0)
I found the various functions in this system were well integrated. | 4.65 (0.87)
I thought there was too much inconsistency in this system. | 1.43 (0.97)
I would imagine that most people would learn to use this system very quickly. | 4.96 (0.20)
I found the system very cumbersome to use. | 1.65 (1.34)
I felt very confident using the system. | 4.83 (0.38)
I needed to learn a lot of things before I could get going with this system. | 1.74 (1.42)
* m = mean; sd = standard deviation.
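For context, the mean item ratings in Table 5 can be converted into an overall SUS score with the standard SUS scoring rule (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5 to a 0-100 range). Because this formula is linear in the item ratings, applying it to the item means reproduces the mean per-participant score. A minimal sketch (the variable names are illustrative, not taken from the study's code):

```python
# Standard SUS scoring: odd-numbered items contribute (score - 1),
# even-numbered items contribute (5 - score); the sum is scaled by 2.5.
def sus_score(items):
    """Compute the 0-100 SUS score from ten item ratings on a 1-5 scale."""
    total = sum(
        (s - 1) if i % 2 == 0 else (5 - s)  # index 0,2,4,... = items 1,3,5,...
        for i, s in enumerate(items)
    )
    return total * 2.5

# Item means from Table 5, in questionnaire order (questions 1 to 10).
item_means = [2.70, 1.52, 4.91, 1.00, 4.65, 1.43, 4.96, 1.65, 4.83, 1.74]

print(sus_score(item_means))  # roughly 86.8
```

A score around 86.8 falls in the "excellent" band of the usual SUS adjective rating scale, consistent with the F-SUS result reported in the abstract.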
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.