Preprint
Article

Real-Time Home Automation System using BCI Technology

Submitted: 03 July 2024
Posted: 04 July 2024

Abstract
Brain-Computer Interfaces (BCIs) process and convert brain signals into commands for output devices to carry out certain tasks. The main purpose of BCI is to replace or restore missing or damaged functions of disabled people, including those with neuromuscular disorders such as Amyotrophic Lateral Sclerosis (ALS), cerebral palsy, stroke, or spinal cord injury. Hence, BCI does not use neuromuscular output pathways. Scientists have used several techniques, such as Electroencephalography (EEG), intracortical recording, and Electrocorticography (ECoG), to collect brain signals that are used to control robotic arms, prosthetics, wheelchairs, and several other devices. EEG is a non-invasive method for collecting and monitoring brain signals. Implementing EEG-based BCI technology in home automation systems may facilitate a wide range of tasks for people with disabilities. In this context, it is important to assist and empower individuals with paralysis to engage with existing home automation systems and gadgets. This paper proposes a home security system that controls a door and a light using an EEG-based BCI. The system prototype consists of the EMOTIV Insight™ headset, a Raspberry Pi Zero 2 W, a servo motor to open/close the door, and LEDs. The system can be very helpful for disabled people, including arm amputees, who cannot open/close doors or operate switches and remote controls. The system includes an application made in Flutter that receives smartphone notifications about the status of the door and the LEDs. The disabled person can control the door as well as the LED using his/her brain signals detected by the EMOTIV Insight™ headset.
Keywords: 
Subject: Computer Science and Mathematics - Information Systems

1. Introduction

Brain-Computer Interfaces (BCIs) assess brain signals and provide commands to output devices to carry out certain tasks. Brain-computer interfaces do not use neuromuscular output pathways [1].
The primary objective of BCI is to substitute or restore functionality for people afflicted with neuromuscular conditions such as ALS, cerebral palsy, and others. Since the first demonstrations of spelling and of controlling individual neurons, scientists have used electroencephalography, intracortical, electrocorticographic, and other brain signals to manipulate cursors, robotic legs, robotic arms, prosthetics, wheelchairs, TV remote controls, and several other devices. BCIs have the potential to assist in the rehabilitation of individuals affected by stroke and other diseases, and they may also enhance the performance of surgeons and other medical professionals [4]. According to the World Health Organization (WHO), more than one billion people (about 15% of the global population) are disabled, and half of that group lacks the financial means to obtain adequate medical treatment [5].
The rapid growth of a research and development enterprise in BCI technology generates enthusiasm among scientists, engineers, clinicians, and the public. Also, BCIs need signal-acquisition technology that is both portable and dependable, ensuring safety and reliability in any situation. Additionally, it is crucial to develop practical and viable approaches for the widespread implementation of these technologies. BCI performance must provide consistent reliability on a daily and moment-to-moment basis to align with the normal functioning of muscles. The concept of incorporating sensors and intelligence into physical objects was first introduced in the 1980s by students from Carnegie Mellon University who modified a juice vending machine so that they could remotely monitor the contents of the machine [6].
In the last decade, EEG-based BCI has been successfully combined with Convolutional Neural Networks (CNNs) to detect diseases like epilepsy [7]. EEG-based BCI technology has also been used to control a prosthetic lower limb [8], [9] or a prosthetic upper limb [10]. As expected, in the future BCI will become widespread in our lives, improving our way of living, especially for disabled people, who may perform different activities by speech imagery alone [11].
A network of physical objects, vehicles, appliances, and other things fitted with sensors, software, and network connections is referred to as the Internet of Things (IoT) [12]. Because of this, these objects are able to collect and share information. Such electronic devices, sometimes referred to as “smart objects”, cover a wide range of technologies: simple smart home devices such as smart thermostats, wearables such as smartwatches and apparel with Radio Frequency Identification (RFID) technology, as well as complex industrial gear and transportation systems.
IoT technology enables communication between internet-connected gadgets, as well as other devices such as smartphones and gateways. This leads to the formation of a vast interconnected system of devices that can autonomously exchange data and perform a diverse array of tasks, such as monitoring environmental conditions, improving traffic flow through intelligent cars and other sophisticated automotive equipment, and tracking inventory and shipments in storage facilities, among others. For people with severe motor disabilities, a smart home is a necessity nowadays: it allows them to manage not only the devices they use daily at home, but also the security of the home [15].
During the past years, many approaches have been proposed for controlling a smart object or a software application using EEG-based BCI signals. The following paragraphs present several related works that address BCI home automation and security.
In 2018, Qiang Gao et al. [16] proposed a safe and cost-effective online smart home system based on BCI, providing elderly and paralysed people with a new supportive way to control home appliances. They used the Emotiv EPOC EEG headset to detect EEG signals, which were denoised, processed, and converted into commands. The system can identify several instructions for controlling four smart devices: a web camera, a lamp, intelligent blinds, and a guardianship telephone. Additionally, they used Power over Ethernet (PoE) technology to provide both power and connectivity to these devices. The experimental results showed that their system obtained an average classification accuracy of 86.88 ± 5.30%.
In 2020, K. Babu and P. Vardhini [17] implemented a system to control a software application, which can be used further in home automation. They used a NeuroSky headset, an Arduino board (ATmega328P), and a laptop. A Neuro software application was used to create three virtual objects, represented by three icons, which the headset user controls by blinking. Three Arduino ports were dedicated to the three objects from the Neuro application to simulate controlling a fan and a motor and to manage the switching between the fan and the motor. The home appliance status is changed by running MATLAB code.
Other experiments describe prototypes for controlling home appliances such as an LED and a fan, like the system presented by Lanka et al. [18]. They used a dedicated neural headset, a laptop, an ESP32 microcontroller, an LED, and a fan. The headset was connected to the laptop via Bluetooth, and the laptop was connected to the microcontroller, to which the fan and LED were wired. In this way, they developed a system that allows a healthy, disabled, or paralysed user to control a fan and an LED using brainwaves.
Eyhab Al-Masri et al. [19] published an article in 2022 describing the development of a BCI framework that enables people with motor disabilities to control Philips Hue smart lights and a Kasa Smart Plug using a dedicated neural headset. The hardware consisted of an EMOTIV EEG headset, a Raspberry Pi, a Kasa smart plug, and Philips Hue smart lights. Bluetooth is used to connect the headset to the Raspberry Pi, and the commands are configured and forwarded from the Raspberry Pi to the Kasa Smart Plug and Philips Hue smart lights using Node-RED. The experimental results showed the efficacy and practicability of using EEG signals to operate IoT devices, with a precision rate of 95%.
In 2023, Danish Ahmed et al. [20] successfully used BCI technology to control a light and a fan via a dedicated neural headset. The implemented system consists of an EMOTIV EPOC headset, a laptop, an Arduino platform, and a box that contains a light and a fan. The headset is connected to the laptop via Bluetooth, the laptop uses a WebSocket server and the JSON-RPC protocol to connect to the Arduino, and the Arduino is wired to the light and fan. The user trained the headset to control the prototype by his/her thoughts.
A newer challenge in home automation is controlling a TV using brainwaves, and several papers have addressed it. One such system was presented in 2023 by Haider Abdullah et al. [21], who implemented and tested it on 20 participants. The proposed system includes the following components: an EMOTIV Insight headset, a laptop connected via Bluetooth to the headset, a Raspberry Pi 4 connected through SSH to the laptop, and a TV remote control circuit wired to the Raspberry Pi. Three different brands of TVs were used in the system testing: SONY®, SHOWINC® and SAMIX®. Four control commands were included in this EEG-based TV remote control: turning the TV on/off, changing the volume, and changing channels. The tests showed promising results, with a system accuracy of almost 74.9%.
The use of BCI technology for controlling different devices represents a new direction of advancement in both hardware and software development. In this context, this paper presents the design and implementation of a proposed real-time BCI-IoT system used to ensure home security, in which a dedicated neural headset controls door locking and a light using speech imagery. The proposed system enables disabled and paralysed people to lock or unlock a door and to turn an LED on/off, with the ability to receive status notifications. The system was tested on twenty participants. It was both simulated using the Unity engine and implemented in hardware using a Raspberry Pi and other components, as discussed in the following sections.

2. Materials and Methods

The implemented system has been tested in real-time, using the hardware components explained in the following subsections. The system was also simulated with the Unity engine and tested by integrating the EEG headset with the simulated system, as explained in Subsection 2.2.3. Figure 1 shows the system’s components.

2.1. Material Used for Printing

Polylactic acid (PLA) is a common material used in printing 3D items. This material is biodegradable and produced from renewable sources like maize starch or sugar cane. While it is simple to use in 3D printing without the need for a heated platform, it shrinks in volume as it cools, which is considered a drawback [22].
PLA is suitable for many technical applications, particularly in the aviation sector, such as prototyping, idea development, and the manufacturing of non-structural elements and interior components. Using PLA in various applications offers benefits such as decreased weight, user-friendliness, and cost savings. However, PLA is unsuitable for applications that involve severe conditions or exposure to harsh chemicals, such as continuous contact with water or marine environments, due to its temperature and environmental constraints [23].

2.2. The Technology Used

2.2.1. Technology Used to Design and Print the 3D Door and Frame

The door and frame models of this system were created using CATIA software and exported to the IdeaMaker software as STL files. The resulting models were manufactured from PLA material using a Creality Ender 3 S1 PRO 3D printer (Figure 2).
Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the 3D print for the door and frame at different stages of printing: 10%, 60% and 100%.
Before printing the door and frame, the 3D printer must be calibrated using the parameters in Table 1.
The door model was printed on 425 layers (Figure 3, Figure 4 and Figure 5).
The frame model was split into two parts: component 1 was printed in 50 layers (Figure 6, Figure 7 and Figure 8), and component 2 was printed in 120 layers (Figure 9, Figure 10 and Figure 11).
The total cost of manufacturing the components is around $42, and the total printing time is around 17 hours, as shown in Table 2, Table 3, Table 4 and Table 5.

2.2.2. The Neural Headset Used to Control the Implemented and Simulated Systems

This system utilizes the functionality of the Emotiv™ Insight headset, shown in Figure 12 [25]. Five semi-dry polymer sensors and two reference electrodes are part of this five-channel headset.
The headset’s internal sampling rate is 128 Hz per channel. This cost-effective equipment is specifically designed to facilitate the use of BCI for experimental and research purposes. It ensures mobility and high accuracy by filtering brain signals and wirelessly transmitting data to the computer.
The mobile EEG headset has comprehensive brain-sensing capabilities and uses modern electronics to generate high-quality and robust signals. It can connect to computers and mobile devices via Bluetooth or a 2.4 GHz wireless dongle to transmit the collected raw data to the processing unit. Additionally, its 450 mAh LiPo battery provides up to eight hours of operation.
To measure brain activity, it is necessary to position the headset’s electrodes on the scalp as shown in Figure 12.
Figure 13 shows the positioning of the electrodes of the Emotiv headset, including the reference electrodes: DRL (Driven Right Leg) and CMS (Common Mode Sense).
The headset captures the electrical signal generated by the neurons as an analogue signal. This signal is amplified, filtered from noise, and converted into a digital signal. The Sinc filter, located inside the headset, eliminates the noise from the signal [25]. The Sinc filter is derived from the mathematical notion of a low-pass filter with a predetermined cut-off frequency: signal components below the cut-off frequency are not attenuated, while those above it are suppressed [26].
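As an illustration of this principle only (the headset’s internal filter implementation is not disclosed beyond [25], [26]), a windowed-sinc low-pass filter can be applied to a sampled signal as in the sketch below; the 128 Hz sampling rate matches the headset specification given above, while the 30 Hz cut-off and the filter length are arbitrary values chosen for the example.

    # Illustrative windowed-sinc low-pass filter; not the headset's actual firmware.
    # Assumed values: 128 Hz sampling (per the Insight specification), 30 Hz cut-off, 101 taps.
    import numpy as np

    def sinc_lowpass(signal, fs=128.0, cutoff=30.0, num_taps=101):
        fc = cutoff / fs                            # normalised cut-off (cycles per sample)
        n = np.arange(num_taps) - (num_taps - 1) / 2
        h = 2 * fc * np.sinc(2 * fc * n)            # ideal low-pass impulse response
        h *= np.hamming(num_taps)                   # window to reduce truncation ripple
        h /= np.sum(h)                              # unity gain in the passband (DC)
        return np.convolve(signal, h, mode="same")

    # Usage: a 10 Hz component passes, a 50 Hz component is strongly attenuated.
    t = np.arange(0, 2, 1 / 128.0)
    raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
    clean = sinc_lowpass(raw)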

2.2.3. Technology Used in Video Simulation of the System Operation

Unity is a versatile cross-platform game engine that supports the creation of both three-dimensional and two-dimensional games, as well as simulations. The engine was first launched in 2005, only for OS X. Unity has since been made available on a total of 27 platforms, covering a wide range of devices, including mainstream computer operating systems, major consoles, and virtual and mixed reality devices. Unity has become a very prevalent graphics engine for developing virtual simulators of real-world environments.
In the proposed system, the Unity engine is used to simulate a house in which a person locks/unlocks an entry door and turns a light on/off using the Emotiv EEG headset. Actions are triggered by the user’s thoughts through the Emotiv Cortex API (see Figure 14).

2.3. Methods

The neural headset must be trained by a user to record a thought (a command) for each required control action. The participants were asked to carry out two speech imagery tasks: turning a light on and off, and opening and closing a door. These tasks are later used to control the corresponding actions. More training sessions increase the accuracy of the system in controlling the door and light (see Figure 15). To ensure that all sensors detect high-quality signals, the headset must be adjusted before training (Figure 16).
The training used the Emotiv BCI application, where each command was trained over 100 iterations (8 seconds per iteration). The total duration of a training session is around 25 minutes.

2.3.1. System Implementation

The system comprises multiple components connected as follows: the Emotiv Insight neuro-headset establishes a Bluetooth connection with the computer, the computer establishes a Secure Shell Protocol (SSH) connection with the Raspberry Pi Zero 2 W, and the Raspberry Pi is connected to the Digital MG996 servo motor and 10 LEDs via the GPIO pins (see Figure 17).
The Raspberry Pi Zero 2 W [31] is a tiny yet powerful single-board computer developed by the Raspberry Pi Foundation. With a 1 GHz quad-core ARM Cortex-A53 processor, it offers solid performance in a compact package, and its integrated Wi-Fi (802.11 b/g/n) and Bluetooth 4.2 enable wireless communication for networking and peripheral connections. The board offers approximately 80% of the performance of the Raspberry Pi 3B and is five times faster than the original Raspberry Pi Zero. Its small size and versatile features, including GPIO pins for hardware communication (e.g., with the servomotor) and a microSD card slot for storage, make this model ideal for IoT applications and projects that require compact size and low cost.
The Digital MG996 servo motor [32] is a high-torque component with precise 90° movement, ideal for robotics and remote-control applications. Its digital control interface ensures accuracy and reliability, while its robust construction and metal gears provide durability, making it well suited for projects that require precise angular control in a compact package, such as this system.
Many low-power amplifying and switching applications make use of the 2N2222 [33], a popular NPN bipolar junction transistor (BJT). It is suitable for switching our 10 LEDs, which operate at 5 V and draw a total of around 200 mA.
The LED used in this system is the L-7113GD-5V, a 5 mm green LED [34] with a water-clear lens, a wide viewing angle of approximately 60 degrees, a wavelength of 565 nm, a forward voltage of 5 V, and a typical brightness of 15-30 millicandela. It is well suited to our application because it also has an internal resistor, thus minimizing the number of wires and other components used.
One of the 5 V power pins, a ground pin, and the GPIO 17 pin of the Raspberry Pi Zero 2 W are connected to the VCC, ground, and PWM pins of the servomotor, respectively. GPIO 18 is connected through a 1 kΩ resistor to the base of the 2N2222 transistor; the emitter is connected to the ground of the Raspberry Pi, while the collector switches the LEDs, whose anodes are supplied from the other 5 V pin. Ten of the 5 V LEDs with internal resistors are connected in this way for lighting.
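The control scripts themselves are not listed in the paper; the following minimal sketch shows how the wiring above could be driven from Python with the gpiozero library, assuming the servo signal is on GPIO 17 and the transistor base (switching the LED string) is on GPIO 18, as described.

    # Minimal sketch of the door/LED control on the Raspberry Pi Zero 2 W (assumed pins).
    from time import sleep
    from gpiozero import AngularServo, LED

    servo = AngularServo(17, min_angle=0, max_angle=90)  # MG996 signal wire on GPIO 17
    lights = LED(18)                                     # 2N2222 base via 1 kOhm on GPIO 18

    def set_door(opened: bool):
        # In this sketch, 90 degrees is the open position and 0 degrees the closed one.
        servo.angle = 90 if opened else 0
        sleep(1)  # allow the servo to reach the target angle

    def set_lights(on: bool):
        # A single GPIO switches the whole 10-LED string through the transistor.
        if on:
            lights.on()
        else:
            lights.off()

    if __name__ == "__main__":
        set_door(True)
        set_lights(True)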

2.3.2. System Control

The filtered brain signal is sent from the headset to the computer via Bluetooth and converted into specific commands using the Cortex API, to be executed by the Raspberry Pi Zero 2 W. The Cortex API is an interface created by Emotiv to manage the EEG data captured by the headset. The resulting commands are sent to the Raspberry Pi Zero 2 W over SSH (Secure Shell Protocol) to execute two Python scripts, which open/close the door and turn the LED on or off depending on the commands received. For example, to open or close the door, the user thinks of the corresponding speech imagery word used during the training sessions.
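The dispatch code on the computer is not reproduced in the paper; as a rough sketch, a decoded mental command could be mapped to an SSH invocation of the corresponding script on the Raspberry Pi as shown below, where the host name and script paths are assumptions (key-based SSH authentication is also assumed).

    # Hypothetical dispatcher on the computer: runs a script on the Pi over SSH
    # when a mental command is decoded. Host name and script paths are assumed.
    import subprocess

    PI_HOST = "pi@raspberrypi.local"
    COMMAND_SCRIPTS = {
        "door":  "python3 /home/pi/door.py",   # toggles the servo (lock/unlock)
        "light": "python3 /home/pi/light.py",  # toggles the 10-LED string
    }

    def dispatch(mental_command: str):
        script = COMMAND_SCRIPTS.get(mental_command)
        if script is None:
            return  # ignore the neutral state and untrained commands
        # Non-blocking call so the next EEG command is not delayed.
        subprocess.Popen(["ssh", PI_HOST, script])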
A live server is set up on the platform using Python and the FastAPI framework. As shown in Pseudocode 1, updating the Cortex API library is necessary to increase control over the headset. These modifications reduced the latency of the commands sent and improved the headset’s connection. Additionally, these updates make it possible to subscribe to and receive commands sent to a WebSocket address.
After transmitting the signals to the server, a predefined brief delay guarantees successful transmission. The commands are processed and organised using a dictionary structure in which each command corresponds to a certain action (Pseudocode 1). For instance, when a specific command is executed, it produces a JSON response. Once the response is assembled, it is sent over a separate WebSocket created specifically for transporting instructions to applications and receiving their replies back to the server.
Responses are received on the Commands WebSocket route at the application layer. After receiving a command, the application decodes it and assigns it to the appropriate action.
Pseudocode 1 Cortex API updates in Python
Preprints 111107 i001
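Since Pseudocode 1 is reproduced only as a figure, the sketch below illustrates the server-side pattern described above: a FastAPI WebSocket route receives decoded commands, looks them up in a dictionary, and broadcasts the resulting JSON response on a separate commands route. The route names and command strings are assumptions, not the paper’s actual code.

    # Illustrative FastAPI WebSocket server in the spirit of Pseudocode 1.
    # Run with: uvicorn server:app --host 0.0.0.0 --port 8000 (assumed deployment).
    from fastapi import FastAPI, WebSocket, WebSocketDisconnect

    app = FastAPI()
    clients = []  # phone app / Unity simulation connections on the commands route

    ACTIONS = {   # each recognised command maps to a JSON response
        "door":  {"device": "door",  "action": "toggle"},
        "light": {"device": "light", "action": "toggle"},
    }

    @app.websocket("/commands")
    async def commands(ws: WebSocket):
        # Applications subscribe here to receive commands and send their replies.
        await ws.accept()
        clients.append(ws)
        try:
            while True:
                await ws.receive_text()  # keep the connection open, read replies
        except WebSocketDisconnect:
            clients.remove(ws)

    @app.websocket("/cortex")
    async def cortex(ws: WebSocket):
        # The headset side pushes decoded mental commands to this route.
        await ws.accept()
        while True:
            command = await ws.receive_text()
            response = ACTIONS.get(command)
            if response is not None:
                for client in clients:  # broadcast the JSON response
                    await client.send_json(response)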
A Flutter application has been developed for real-life use and to check the door status. The latest version of Flutter (3.19.5) is used to ensure maximum compatibility with both Android and iOS. This application listens to the WebSocket server running at the Raspberry Pi Zero 2 W IP address, which can be chosen in the app’s settings (the default being “localhost”). When the door or the light status changes (e.g., opening the door), the server sends a signal to the Flutter application’s WebSocket client. As can be seen in the Pseudocode 2 functions, the application displays what is currently happening: when the app receives a signal to open the door, a “Door Opened” message appears on the screen and the background of the app changes to green for better demonstration; when the app receives the same signal again, the message “Door Closed” appears and the background colour changes to red (see Figure 18). Similarly, when the server sends a signal to turn the light on, a “Light On” message appears and the background colour changes to yellow; when the same signal is received again, the message “Light Off” is displayed and the background colour changes to grey (see Figure 19). Because the application is developed in Flutter, it can run on all platforms, including Android, iOS, and Windows, on which it has been tested and confirmed to work properly.
Pseudocode 2 Functions developed in Flutter, to be run on the phone
Preprints 111107 i002
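Pseudocode 2 is likewise shown only as a figure; during development, the phone application can be stood in for by a small Python client that subscribes to the same commands route and prints the same status messages. The URL and message format are assumptions consistent with the description above.

    # Test client that mimics the Flutter app's WebSocket listener (assumed URL and format).
    import asyncio
    import json
    import websockets

    STATE = {"door": False, "light": False}  # False = door closed / light off

    async def listen(url="ws://localhost:8000/commands"):
        async with websockets.connect(url) as ws:
            async for message in ws:
                event = json.loads(message)
                device = event.get("device")
                if device in STATE:
                    STATE[device] = not STATE[device]  # same toggle behaviour as the app
                    if device == "door":
                        print("Door Opened" if STATE["door"] else "Door Closed")
                    else:
                        print("Light On" if STATE["light"] else "Light Off")

    if __name__ == "__main__":
        asyncio.run(listen())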

2.3.3. Demonstrative Real-Time Unity Simulation of Controlling Home Automation System Using BCI

To simulate a home automation system controlled by the user's mind, the Unity engine has been used to create a real-time video simulation of the residential environment. This simulation comprised a meticulously designed 3D model of a house inside the engine integrated with a specialised neural headset used to record the participant’s thoughts (see Figure 14). The simulation includes a three-dimensional representation of a door to achieve a high level of realism in the scenario. An essential component of the simulation is the use of the NativeWebSocket library, which facilitates the establishment of a connection between the simulation and the server (Pseudocode 3).
Pseudocode 3 WebSocketController class in C#
Preprints 111107 i003
The WebSocket protocol reduced latency in command execution. When the server sends a command, the system carries out the specific task assigned to that command.
To enhance realism, the physical forces at specific points of the door were simulated, recreating the action of a servo motor in practical applications. This includes realistic opening and closing of the door.
In addition to the door mechanics, the simulation includes a lighting system in the house model. The door light is also simulated and can be switched ON and OFF via commands from the headset. This aspect of the simulation uses the Unity engine’s capability to propagate light from a source, illuminating the 3D space in a manner consistent with the real world (see Figure 20).

3. Results and Discussion

Twenty people participated in the study by having their EEG signals recorded. To control the video simulation and the implemented system, two commands were trained on the EMOTIV Insight neural headset for each participant. Every participant was required to put on the dedicated neural headset and take a seat in front of the computer (for the simulation) and in front of the implemented hardware system (see Figure 18 and Figure 19).
Initially, each participant was instructed to activate the light by mentally focusing on the specific image/phrase that they had been trained to associate with it. Once the light was activated, the participant was instructed to deactivate it. Similarly, the participant was asked to open and close the door. Each participant was instructed to attempt to control the light and the door fifteen times for each command, and the number of successfully executed instructions (the attempts that resulted in successful control of the light and door) was recorded. Whenever the user thinks of one of these commands, the user’s PC sends a command to the Raspberry Pi to execute the required Python file; this script transmits the appropriate signal to the light and door. All these changes are notified in real-time through a localhost server, which sends the notifications to a Flutter application developed for the phone.
Table 6 displays the number of successfully executed commands for each participant during the trial. Examination of the results showed that a lack of concentration throughout the experiment could have contributed to some of the incorrect responses. Another possible explanation for the somewhat better success rate of male participants compared to female participants is that the EEG signal quality was marginally worse for certain female individuals owing to their longer hair.
The overall average is 10.525 successful attempts per command, which corresponds to an overall system precision of 70.16%. The dispersion has been determined by calculating the standard deviation for each participant’s pair of results. The highest and lowest values obtained were 1.41 and 0, respectively. The experiment generally shows a low standard deviation, with an average of 0.8131 across all results. Dispersion is seen for participants 4, 6, 13 and 14 as a result of the larger difference between the numbers of successful attempts for the two trained commands.
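These figures follow directly from Table 6, given the fifteen attempts per command stated above; for the overall precision and, as an example, the standard deviation of participant 1 (scores 11 and 12):

\[
\text{precision} = \frac{\bar{x}}{15}\times 100\% = \frac{10.525}{15}\times 100\% = 70.1\overline{6}\,\% \approx 70.16\%,
\qquad
s_1 = \sqrt{\frac{(11-11.5)^2 + (12-11.5)^2}{2-1}} \approx 0.7071 .
\]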
Mental commands sent from the neural headset have successfully controlled the video simulation and the physical system in parallel, both versions being synchronised to give a response in a similar time as can be seen in Figure 18 and Figure 19.
The same testing procedure was applied to the implemented hardware system.

4. Conclusions

In the realm of biomedicine, EEG-based BCI applications are the intended objective of present research. The objective of this study was to develop and implement a real-time home automation system controlled by brain signals. Patients with quadriplegia, locked-in syndrome, and other conditions that prevent them from walking or moving may use the device, as can healthy persons who want to live a more comfortable life. A further benefit of this work is that it paves the way for controlling a greater number of household appliances via BCI. When compared to the performance of other studies presented in the literature review, the proposed BCI system displays similar results, and the following observations can be made about this implementation: it contains a video simulation, it includes a connection to a real-time server that sends live notifications to smartphones, and the physical system was successfully tested on twenty participants, with the results reported in Table 6.
As future work, the presented system can be upgraded to fulfil tasks other than the current ones, but for every new task, a new neural command must be trained by the user wearing the headset.

Author Contributions

Conceptualization, M.-V.D.; methodology, I.N., A.F. and A.-M.T.; software, I.N., A.F. and A.-M.T.; validation, M.-V.D, I.N., A.F. and A.-M.T.; formal analysis, M.-V.D. and A.H.; investigation, M.-V.D, I.N. and A.F.; resources, M.-V.D., I.N. and A.F.; data curation, M.-V.D, I.N. and A.F.; writing—original draft preparation, M.-V.D.; writing—review and editing, M.-V.D., A.H., and C.-P.S.; visualization, M.-V.D., A.H., T.-G. D., C.-P.S. and A.-R. M.; supervision, M.-V.D., A.H., T.-G. D. and A.-R. M.; project administration, M.-V.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable for studies not involving humans or animals.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. J. J. Vidal, “Toward direct brain-computer communication”, Annu. Rev. Biophys. Bioeng., vol. 2, no. 1, pp. 157–180, 1973. [CrossRef]
  2. Y. Kaplan, “Neurophysiological foundations and practical realizations of the brain–machine interfaces in the technology in neurological rehabilitation”, Hum. Physiol., vol. 42, pp. 103–110, 2016. [CrossRef]
  3. E. V Bobrova, V. V Reshetnikova, A. A. Frolov, and Y. P. Gerasimenko, “Use of imaginary lower limb movements to control brain–computer interface systems”, Neurosci. Behav. Physiol., vol. 50, pp. 585–592, 2020. [CrossRef]
  4. S. M. Engdahl, B. P. Christie, B. Kelly, A. Davis, C. A. Chestek, and D. H. Gates, “Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques”, J. Neuroeng. Rehabil., vol. 12, no. 1, pp. 1–11, 2015. [CrossRef]
  5. T. Beyrouthy, S. K. Al Kork, J. A. Korbane, and A. Abdulmonem, “EEG mind controlled smart prosthetic arm”, in 2016 IEEE international conference on emerging technologies and innovative business practices for the transformation of societies (EmergiTech), 2016, pp. 404–409. [CrossRef]
  6. S. Madakam, V. Lake, V. Lake, and V. Lake, “Internet of Things (IoT): A literature review”, J. Comput. Commun., vol. 3, no. 05, p. 164, 2015. [CrossRef]
  7. Ö. Türk and M. S. Özerdem, “Epilepsy detection by using scalogram based convolutional neural network from EEG signals”, Brain Sci., vol. 9, no. 5, p. 115, 2019. [CrossRef]
  8. M.-V. Drăgoi, A. Hadăr, N. Goga, L. Grigore, A. Ștefan, and H. A. Ali, “Design and implementation of an EEG-based BCI prosthetic lower limb using Raspberry Pi 4”, UPB Bul. - Ser. C - Ing. Electr. şi Ştiinţa Calc. - Sci., vol. 3, p. 14, 2023, [Online]. Available: https://www.scientificbulletin.upb.ro/rev_docs_arhiva/full704_649223.pdf.
  9. M.-V. Drăgoi et al., “Contributions to the Dynamic Regime Behavior of a Bionic Leg Prosthesis”, Biomimetics, vol. 8, no. 5, p. 414, 2023. [CrossRef]
  10. H. A. Ali et al., “EEG-based Brain Computer Interface Prosthetic Hand using Raspberry Pi 4”, Int. J. Adv. Comput. Sci. Appl., vol. 12, no. 9, 2021.
  11. P. Boord, A. Craig, Y. Tran, and H. Nguyen, “Discrimination of left and right leg motor imagery for brain–computer interfaces”, Med. Biol. Eng. Comput., vol. 48, pp. 343–350, 2010. [CrossRef]
  12. L. D. Xu, “Internet of Things (IoT): An Introduction”, Wiley Encycl. Electr. Electron. Eng., pp. 1–10, 1999.
  13. X. Jia, Q. Feng, T. Fan, and Q. Lei, “RFID technology and its applications in Internet of Things (IoT)”, in 2012 2nd international conference on consumer electronics, communications and networks (CECNet), 2012, pp. 1282–1285.
  14. H. Landaluce, L. Arjona, A. Perallos, F. Falcone, I. Angulo, and F. Muralter, “A review of IoT sensing applications and challenges using RFID and wireless sensor networks”, Sensors, vol. 20, no. 9, p. 2495, 2020. [CrossRef]
  15. M. C. Domingo, “An overview of the Internet of Things for people with disabilities”, J. Netw. Comput. Appl., vol. 35, no. 2, pp. 584–596, 2012. [CrossRef]
  16. Q. Gao, X. Zhao, X. Yu, Y. Song, and Z. Wang, “Controlling of smart home system based on brain-computer interface”, Technol. Heal. Care, vol. 26, no. 5, pp. 769–783, 2018. [CrossRef]
  17. K. M. C. Babu and P. A. H. Vardhini, “Brain Computer Interface based Arduino Home Automation System for Physically Challenged”, in 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), 2020, pp. 125–130. [CrossRef]
  18. L. Lanka, D. K. Dhulipalla, S. B. Karumuru, N. S. Yarramaneni, and V. R. Dhulipalla, “Electroencephalography (EEG) based Home Automation for Physically Challenged People using Brain Computer Interface (BCI)”, in 2022 International Conference on Inventive Computation Technologies (ICICT), 2022, pp. 683–687. [CrossRef]
  19. E. Al-Masri, A. Singh, and A. Souri, “IoBCT: A Brain Computer Interface using EEG Signals for Controlling IoT Devices”, in 2022 IEEE 5th International Conference on Knowledge Innovation and Invention (ICKII), 2022, pp. 18–23. [CrossRef]
  20. D. Ahmed, V. Dillshad, A. S. Danish, F. Jahangir, H. Kashif, and T. Shahbaz, “Enhancing Home Automation through Brain-Computer Interface Technology”, J. Xi’an Shiyou Univ. Nat. Sci. Ed., vol. 19, no. 12, pp. 1–7, 2023, [Online]. Available: https://www.researchgate.net/profile/Abdul-Samad-Danish/publication/376272436_Enhancing_Home_Automation_through_Brain-Computer_Interface_Technology/links/65719ac56610947889a4a0ac/Enhancing-Home-Automation-through-Brain-Computer-Interface-Technology.pdf.
  21. H. A. Ali, L. A. Ali, A. Vasilateanu, N. Goga, and R. C. Popa, “Towards Design and Implementation of an EEG-Based BCI TV Remote Control”, Int. J. Online Biomed. Eng., vol. 19, no. 10, 2023. [CrossRef]
  22. S. Farah, D. G. Anderson, and R. Langer, “Physical and mechanical properties of PLA, and their functions in widespread applications—A comprehensive review”, Adv. Drug Deliv. Rev., vol. 107, pp. 367–392, 2016. [CrossRef]
  23. N. Lokesh, B. A. Praveena, J. S. Reddy, V. K. Vasu, and S. Vijaykumar, “Evaluation on effect of printing process parameter through Taguchi approach on mechanical properties of 3D printed PLA specimens using FDM at constant printing temperature”, Mater. today Proc., vol. 52, pp. 1288–1293, 2022. [CrossRef]
  24. “Creality Ender 3 S1 PRO 3D printer”. https://www.creality.com/products/creality-ender-3-s1-pro-fdm-3d-printer (accessed Mar. 14, 2024).
  25. EMOTIV, “EMOTIV Insight Technical Specifications”, 2019. https://emotiv.gitbook.io/insight-manual/introduction/technical-specifications.
  26. J. L. Bohorquez, M. Yip, A. P. Chandrakasan, and J. L. Dawson, “A biomedical sensor interface with a sinc filter and interference cancellation”, IEEE J. Solid-State Circuits, vol. 46, no. 4, pp. 746–756, 2011. [CrossRef]
  27. “Unity User Manual 2022.3”. https://docs.unity3d.com/Manual/ (accessed Mar. 18, 2024).
  28. F. Bosmos, A. T. Tzallas, M. G. Tsipouras, E. Glavas, and N. Giannakeas, “Virtual and Augmented Experience in Virtual Learning Tours,” Information, vol. 14, no. 5, p. 294, 2023. [CrossRef]
  29. M. Vukić, B. Grgić, D. Dinčir, L. Kostelac, and I. Marković, “Unity based urban environment simulation for autonomous vehicle stereo vision evaluation”, in 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2019, pp. 949–954. [CrossRef]
  30. F. Li et al., “Lightweight soft robotic glove with whole-hand finger motion tracking for hand rehabilitation in virtual reality”, Biomimetics, vol. 8, no. 5, p. 425, 2023. [CrossRef]
  31. R. P. (Trading) Ltd, “Raspberry Pi Zero 2 W Datasheet”, 2021. https://datasheets.raspberrypi.com/rpizero2/raspberry-pi-zero-2-w-product-brief.pdf.
  32. “MG996R Datasheet (PDF) - List of Unclassifed Manufacturers”. https://pdf1.alldatasheet.com/datasheet-pdf/view/1131873/ETC2/MG996R.html.
  33. “NPN Silicon Epitaxial Planar Transistor”, 2016. https://datasheetspdf.com/pdf-down/2/N/2/2N2222-SEMTECH.pdf.
  34. “L-7113GD-5V”, 2015. https://www.kingbright.com/attachments/file/psearch/000/00/20160808bak/L-7113GD-5V(Ver.9B).pdf.
  35. “Fritzing”. https://fritzing.org/.
Figure 1. Real-time home automation system and video simulation: 1. EMOTIV Insight neural headset; 2. Cortex API and FastAPI server are running; 3. The simulated system (Unity Engine); 4. Raspberry Pi Zero 2 W; 5. BreadBoard with 5V LEDs; 6. PLA Door and frame, equipped with Digital MG996 servo motor.
Preprints 111107 g001
Figure 2. Creality Ender 3 S1 PRO 3D printer [24].
Preprints 111107 g002
Figure 3. Door model 10% - layer 42: (a) software; (b) 3D printer.
Preprints 111107 g003
Figure 4. Door model 60% - layer 225: (a) software; (b) 3D printer.
Preprints 111107 g004
Figure 5. Door model 100% - layer 425: (a) software; (b) 3D printer.
Preprints 111107 g005
Figure 6. Frame model - first component 10% - layer 5: (a) software; (b) 3D printer.
Preprints 111107 g006
Figure 7. Frame model - first component 60% - layer 30: (a) software; (b) 3D printer.
Preprints 111107 g007
Figure 8. Frame model - first component 100% - layer 50: (a) software; (b) 3D printer.
Preprints 111107 g008
Figure 9. Frame model - second component 10% - layer 12: (a) software; (b) 3D printer.
Preprints 111107 g009
Figure 10. Frame model - second component 60% - layer 72: (a) software; (b) 3D printer.
Preprints 111107 g010
Figure 11. Frame model - second component 100% - layer 120: (a) software; (b) 3D printer.
Preprints 111107 g011
Figure 12. Emotiv™ Insight neuro-headset: (a) semi-dry polymer sensors of the headset; (b) the headset on a user’s head.
Preprints 111107 g012
Figure 13. Locations of the sensors of the Emotiv Insight headset [25].
Preprints 111107 g013
Figure 14. Real-time simulation in Unity using Emotiv Insight: (a) simulation of the home view; (b) the use of the neuro-headset to control the video simulation.
Preprints 111107 g014
Figure 15. Training session.
Preprints 111107 g015
Figure 16. EMOTIV Insight sensors quality: (a) bad quality; (b) good quality.
Preprints 111107 g016
Figure 17. Circuit Diagram made with Fritzing [35].
Preprints 111107 g017
Figure 18. Lock and unlock the door using brain signals, with real-time notifications on video simulation and the phone: a. Door unlocked; b. Door locked.
Preprints 111107 g018
Figure 19. Changing LED ON/OFF using brain signals, with real-time notifications on video simulation and the phone: a. LED ON; b. LED OFF.
Preprints 111107 g019
Figure 20. Home automation system simulated with Unity Engine.
Preprints 111107 g020
Table 1. Printing parameters.
Field Value
Layer height 0.2 mm
Layers on contour 2.5
Filling density 5%
The type of filling Straight
Printing plate temperature 60 °C
Printing head temperature 210 °C
Table 2. Time, quantity, and estimated price for the door 3D printing.
Description Value
3D printing time 8 hours, 15 minutes, and 50 seconds
Amount of material used [g] 81.9
Estimated price [$] 19.49
Table 3. Time, quantity, and estimated price for the frame 3D printing - the first component.
Description Value
3D printing time 2 hours, 52 minutes, and 42 seconds
Amount of material used [g] 25.9
Estimated price [$] 6.16
Table 4. Time, quantity, and estimated price for the frame 3D printing – the second component.
Description Value
3D printing time 5 hours, and 29 minutes
Amount of material used [g] 52.6
Estimated price [$] 12.53
Table 5. Total time, quantity, and estimated price for the door and frame 3D printing.
Description Value
3D printing time 16 hours, 37 minutes, and 32 seconds
Amount of material used [g] 175.8
Estimated price [$] 41.84
Table 6. Testing results.
Participants Light On/Off Door Locked/Unlocked AVG STDEV
1 11 12 11.5 0.7071
2 9 10 9.5 0.7071
3 8 7 7.5 0.7071
4 12 10 11 1.4142
5 9 10 9.5 0.7071
6 11 9 10 1.4142
7 10 11 10.5 0.7071
8 12 13 12.5 0.7071
9 13 14 13.5 0.7071
10 11 10 10.5 0.7071
11 12 12 12 0
12 7 8 7.5 0.7071
13 9 7 8 1.4142
14 11 9 10 1.4142
15 9 10 9.5 0.7071
16 13 12 12.5 0.7071
17 11 10 10.5 0.7071
18 10 11 10.5 0.7071
19 12 11 11.5 0.7071
20 12 13 12.5 0.7071
Total AVG 10.525
STDEV AVG 0.8131
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.