A Novel Inexpensive Camera-based Photoelectric Barrier System for Accurate Flying Sprint Time Measurement

Submitted: 06 July 2023; Posted: 13 July 2023
Abstract
Electronic photoelectric barriers are established devices to time subjects in experiments or athletes in sports. The systems are expensive but also dependable and precise. We propose a novel, affordable photoelectric barrier system based on consumer-grade camera hardware and show how to build such a system with common electronic components and a smartphone. We demonstrate in two experiments with track and field athletes that our novel system has accuracy comparable to a professional photoelectric barrier system, at a fraction of the cost.
Keywords: 
Subject: Computer Science and Mathematics  -   Other

1. Introduction

With new technology being released every year, digitalization in sports is becoming increasingly significant. Performance evaluation and optimization in professional sports have a long history, but they are limited by the cost of the equipment and the need for trained professionals to operate it. A labor-intensive manual evaluation is frequently needed as well. As a result, such tools are essentially available only to professional athletes. In recent decades, electronics have become much more affordable and powerful. Most people now own a smartphone or tablet, which is essentially a portable computer with a display, a general-purpose processor, and a number of sensors. These can be used together with devices like GPS sports watches to monitor heart rate, speed, and route during a workout. Even for hobbyists, they are simple to use and reasonably priced. The accelerometer in such watches or smartphones can track the number of steps taken each day and give the user feedback on their current and previous levels of activity. These are just a few examples that transfer the experience and knowledge of sports professionals to the general public. The options are still very limited for small sports clubs and physical education in schools. Stopwatches are still frequently used but lack accuracy in many situations, such as sprints over short distances. Consequently, these groups would benefit from access to fully automatic time measurement in order to precisely track the performance and development of athletes or students.
By offering a low-cost, easy-to-use camera-based photoelectric barrier system that trainers and teachers can use for the performance evaluation of sprints and runs, we aim to add such an option. Our system works with almost all current Android smartphones and tablets and only makes use of readily available electronic components. We assess the accuracy of our system in comparison experiments against a professional photoelectric barrier system. Additionally, we make our software and building plans open source and freely available.

2. Previous Work

2.1. Commercial Photoelectric Barriers

Professional photoelectric barrier systems use infrared or laser beams with a reflector placed on the side opposite the sensor. The detection area between sender and reflector is called the gate and ranges from a few centimeters, primarily for industrial applications, to several meters. The barrier is triggered if an object obstructs the light beam. The majority of commercial systems wire their photoelectric barriers to the system’s controlling device, such as a laptop or a specialized device that comes with it, with the option of adding radio transmitting hardware as an extension. The systems can be constructed as dual photoelectric barriers with sensors spaced a few centimeters apart. These are only activated when both sensors identify an obstruction, which ensures that an arm or leg does not trigger the barrier before the athlete’s upper body has passed it. Since we do not need reflectors and all of our devices use radio communication by default, which completely eliminates the need for wires, our system is much simpler to set up. We emulate dual photoelectric barriers in software to ensure reliable triggering similar to the professional devices.

2.2. Image-based Change Detection

Image change detection is a well-studied topic with applications in a variety of fields, including video surveillance, medical diagnosis, monitoring of civil infrastructure, driver assistance systems, and remote sensing, which focuses primarily on the analysis of satellite images. The task is to identify areas of change within two images of the same scene that were taken at different times. Over the past few decades, a wide range of methodologies have been proposed for various contexts in which automatic image evaluation is advantageous or necessary. Radke et al. [1] compiled a thorough review of these methods. We use the simplest and most computationally efficient type, the simple differencing approach, and demonstrate how to address its limitations, because the more advanced approaches are computationally demanding and cannot be used on our targeted hardware.

2.3. Existing Camera-based Systems

For their studies, locomotion researchers employ tools like force plates, infrared sensor systems, and body-attached accelerometers in addition to specialized evaluation software. Such equipment, however, is only accessible to laboratories with ample funding. Contemporary cameras are inexpensive but still capable of producing high-quality photos and videos. Since every smartphone has at least one camera, they are widely accessible. Because of this, the research community has worked hard to replace expensive equipment with equally reliable video-based analysis techniques. For instance, frame-by-frame playback, video comparison, as well as manual and automatic annotations are all features of the open source program Kinovea [2]. A high-FPS video camera that can capture more than 100 images per second is usually required for the majority of analysis techniques. Lower capture rates frequently come with slower shutter speeds and lower temporal resolution, which causes strong motion blur when capturing fast movements. If the necessary features cannot be observed clearly, analysis and assessment become challenging or impossible. Additionally, difficult lighting situations, like low-light settings, can significantly lower the quality of videos. Despite these drawbacks, camera-based methods have already demonstrated their ability to replace expensive IR systems or force plates. Balsalobre-Fernández et al. [3] used a consumer-grade high-speed camera to capture and assess the performance of countermovement jumps. They recorded jumps at 240 FPS and used Kinovea to evaluate the measurement data. They achieved results on par with a professional IR-based system, although with the drawback of manual annotation of the videos, which took about 30 seconds for each clip.
Trainers and researchers studying locomotion now use smartphones as an important resource. The camera enables on-site video recording and real-time analysis. There are numerous apps available for automated or semi-automated performance measurement, motion tracking, and analysis. For a summary of these apps, we would like to direct the reader to Busca et al. [4]. It should be noted that the majority of them are only available for Apple devices. The fact that these smartphones have much more capable camera hardware than the majority of Android devices could be the cause. While the majority of low-cost and mid-range mobile devices can only record videos at a frame rate of 30 FPS, many iPhone models offer capture rates of 120 or even 240 FPS. Only expensive, high-end Android devices come with comparable features. Because of their blurry images or poor time resolution, the aforementioned apps are therefore not very useful to the majority of Android users.
In order to precisely measure an athlete’s finishing time in competitive sports like track and field, horse racing, canoeing, or cycling, photo finish systems have been used for a long time. At the finish line, a strip photo is taken to capture a two-dimensional image with the finish line in one dimension and time in the other. This makes it possible to identify the precise moment the athlete crosses the finish line. The specific attribute of the athlete that determines whether the finish line has been crossed depends heavily on the kind of sport, which makes automatic evaluation challenging. For instance, in track and field, the athlete’s shoulder needs to pass the finish line. Image change detection techniques were used by Zhao et al. [5] to remove the background and segment the athlete into various regions. They were then able to estimate the finishing times of track and field athletes with a precision of 4 ms and an accuracy of 86.3 percent. The method was improved by Li et al. [6], who reported a precision of 2 ms, but with a slight decrease in correctness. To provide a time resolution of 1 ms and sharp images, strip photographs require high-FPS line scan cameras that take at least 1000 images per second. Such devices cannot be regarded as inexpensive because they already cost several thousand euros. Strip photography is now possible on mobile devices thanks to apps such as SprintTimer [7], but it requires an Apple smartphone with a high-speed camera feature to create strip photos of sufficient quality. A device lacking this feature produces blurry images for swift movements, making it impossible to distinguish the required features in the resulting photograph. The advantage of this method over our system and photoelectric barriers in general is that multiple athletes can be measured simultaneously.
The Android app Photo Finish [8] is the one that comes closest to our method. It uses an algorithm to recognize an athlete passing by. To precisely determine when the capture line was crossed, the app detects the athlete’s chest line. Several smartphones, WiFi or mobile internet, and a paid pro version with a monthly subscription fee are necessary to measure time using multiple devices, as is required for a flying sprint. Because it is a closed-source commercial application, we know very little about how it works in detail. According to the product page, the software captures 30 full-screen images per second and recognizes the chest in each of them. The precise time is then extrapolated from the two images taken just before and just after the finish line. The developers claim an accuracy of 1 ms, but this is neither verified nor likely, at least not in all circumstances. In contrast, our system is self-contained and has no dependency on external wireless infrastructure, since it uses Bluetooth for communication only. We also conduct extensive experiments to assess the system’s accuracy. Furthermore, our method’s accuracy is independent of the smartphone being used.

3. The Novel Photoelectric Virtual Barrier System

3.1. Virtual Image-based Photoelectric Barriers

This section first describes our image-based photoelectric barrier technique for motion detection in front of a camera, followed by a description of its practical implementation. Since microprocessors are the hardware we are targeting, we must employ a method that is computationally efficient in order to ensure a constant, high FPS and maximum precision. As a result, we base our approach on simple differencing and use the mean squared error (MSE) metric to compute the color channel differences between two subsequent images. Due to the memory bandwidth and processing power restrictions of the targeted hardware, more sophisticated techniques are not practical, as they would conflict with the need to complete the calculations in a split second. Since we only need to detect any motion in front of the device and do not need to detect a specific object, this approach is viable. We refer to these as virtual photoelectric barriers because they behave in a manner that resembles conventional photoelectric barriers. Although a camera’s image covers a large area, we only want to detect motion within a narrow band of the image; therefore, we define just a few columns in the middle of the image as the detection area. The region we take into consideration is indicated by the black bar in the leftmost image in Figure 1. Using this construction, our gate consists of the whole vertical extent and just a small horizontal area in front of the camera. Our method can be used on any device with image processing capabilities, despite being primarily developed for microprocessors. Hence, it can also be implemented on a smartphone or computer.

3.1.1. Mathematical Description

We consider a tuple of images $\{I_t, I_{t+1}\}$, where $t \in \mathbb{Z}^+$ describes the point in time the image was taken. The image maps pixel coordinates $x \in \mathbb{R}^2$ to pixel values $I(x) \in \mathbb{R}^k$, with $k = 1$ for grayscale images and $k = 3$ for RGB colored ones. We denote the region of the image $I$ in which we want to detect motion as $\Omega \subset I$, where $\Omega$ contains $n$ pixels. Then we compute the image-to-image deviation of two consecutive images using the MSE metric according to Equation (1), where the superscripts $r, g, b$ denote the red, green, and blue channels, respectively. Similarly, the image-to-image deviation for grayscale images can be computed by considering only one channel. We use the MSE metric because it is invariant to the size of $\Omega$ and sensitive to outliers, which increases the overall sensitivity of the virtual photoelectric barrier.
$$\mathrm{MSE}(I_t, I_{t+1}) = \frac{1}{n} \sum_{y \in \Omega} \Big[ \big(I_t^{r}(y) - I_{t+1}^{r}(y)\big)^2 + \big(I_t^{g}(y) - I_{t+1}^{g}(y)\big)^2 + \big(I_t^{b}(y) - I_{t+1}^{b}(y)\big)^2 \Big] \tag{1}$$
We define the decision function (2) to decide, for two consecutive images, whether the difference between the images is sufficiently large to indicate motion. The threshold $\theta \in \mathbb{R}$ is a user-defined parameter that determines the sensitivity of the light barrier, and $\tau$ is the noise level. Both values can be adjusted to increase the reliability of the system in difficult situations. How to estimate reliable parameters is covered in the next subsection.
$$B(I_t, I_{t+1}) = \begin{cases} 1, & \mathrm{MSE}(I_t, I_{t+1}) > \theta + \tau \\ 0, & \text{otherwise} \end{cases} \tag{2}$$
Equation (1) can be evaluated by a microprocessor several times per second for a reasonable size of $\Omega$. The metric is quite simple but feasible for our setting, since we only have to make a binary decision (movement: yes or no).
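For illustration, the following minimal C++ sketch evaluates Equation (1) over a detection region and applies decision function (2). It assumes the camera frames are available as interleaved RGB888 buffers; the names `Region`, `regionMSE`, and `barrierTriggered` are illustrative and not taken from our released firmware.

```cpp
#include <cstddef>
#include <cstdint>

// Detection region Omega: a narrow band of columns (and optionally rows).
struct Region {
    int x0, x1;   // column range [x0, x1)
    int y0, y1;   // row range [y0, y1)
};

// Equation (1): mean squared error between two frames, restricted to Omega.
// `prev` and `curr` point to interleaved RGB888 data of image width `width`.
double regionMSE(const uint8_t* prev, const uint8_t* curr,
                 int width, const Region& omega) {
    double sum = 0.0;
    std::size_t n = 0;
    for (int y = omega.y0; y < omega.y1; ++y) {
        for (int x = omega.x0; x < omega.x1; ++x) {
            const uint8_t* p = prev + 3 * (y * width + x);
            const uint8_t* c = curr + 3 * (y * width + x);
            for (int ch = 0; ch < 3; ++ch) {              // r, g, b channels
                const double d = double(p[ch]) - double(c[ch]);
                sum += d * d;
            }
            ++n;
        }
    }
    return n > 0 ? sum / double(n) : 0.0;
}

// Decision function (2): true (1) if motion is observed, false (0) otherwise.
bool barrierTriggered(double mse, double theta, double tau) {
    return mse > theta + tau;
}
```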

3.1.2. Reliability Consideration

Because a camera is a passive sensor, we must contend with a number of external factors that affect the accuracy of detection. Because of the simple error metric we use, the technique is susceptible to false positives that cannot be identified algorithmically. Such false positives can be caused by unintentional camera movement due to wind or ground shaking, internal camera noise, difficult lighting conditions, or an unstable background. To ensure accurate detection under the aforementioned conditions, an adaptation of $\theta$ and a computation of $\tau$ are required.
Using a heavy, sturdy tripod and wind-resistant casings can reduce unintentional camera movement. Camera noise depends heavily on the sensor used as well as on the lighting conditions. Smaller sensors usually exhibit higher noise levels than larger ones. Additionally, noise is amplified in low-light conditions because the signal needs to be boosted. We estimate the noise value $\tau$ in a still scene by measuring the MSE for a certain amount of time and taking the maximum value. This method enables us to include both internal sensor noise and an unsteady background, such as the moving leaves of a bush, tree, or hedge, in the decision function. Another issue can be suddenly changing lighting conditions, such as when a room’s lights are turned on or off, or when the sun abruptly appears or vanishes behind swiftly moving clouds. These circumstances will cause the barrier to be erroneously triggered. Pre-processing steps like intensity normalization, homomorphic filtering, or illumination modeling, as described by Radke et al. [1], can be used to solve this issue. However, this requires analysis of the entire image to provide reliable results. We did not look into these possibilities because they would be too demanding for a microcontroller to handle. The color of the object is the final issue affecting the accuracy of the detection. Only when the background and the moving object to be detected are sufficiently distinct can a high MSE be achieved. Because grayscale images are much more prone to errors, we advise using color images whenever possible. For instance, the detection may not work if an athlete wearing a dark shirt moves in front of a dark background; the color of the athlete’s clothing is much less likely to match the background than its brightness.
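As a concrete example, the noise level $\tau$ can be estimated with a short warm-up loop as sketched below. This is only a sketch: `captureFrame` stands in for the actual camera read-out, and `Region`/`regionMSE` refer to the illustrative helpers shown in Section 3.1.1.

```cpp
// Estimate tau in a still scene: measure the image-to-image MSE for a fixed
// number of frames and keep the maximum value observed.
// captureFrame() is a hypothetical placeholder that is assumed to return a
// pointer to a freshly captured RGB888 frame; it is not a real camera API.
const uint8_t* captureFrame();

double estimateNoiseLevel(int numFrames, int width, const Region& omega) {
    double tau = 0.0;
    const uint8_t* prev = captureFrame();
    for (int i = 0; i < numFrames; ++i) {
        const uint8_t* curr = captureFrame();
        const double mse = regionMSE(prev, curr, width, omega);
        if (mse > tau) tau = mse;   // keep the worst-case (maximum) noise
        prev = curr;
    }
    return tau;
}
```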

3.1.3. Multi-Barriers

To emulate the multi-barrier design of professional systems, we define several regions $\Omega_1, \ldots, \Omega_m$ for which the MSE is calculated. The barrier is then triggered only if the decision function (2) evaluates to 1 for all regions. Since the MSE is invariant to the number of pixels in the region, we can use the same $\theta$ for all regions. The noise value $\tau$ should be computed independently for each region. The rightmost image in Figure 1 shows an example of a multi-barrier scenario created with our system.
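A minimal sketch of the multi-barrier logic, reusing the illustrative `Region` and `regionMSE` helpers from Section 3.1.1 (the struct and function names are again hypothetical):

```cpp
#include <vector>

// One sub-barrier: its region Omega_i and its independently estimated noise level.
struct SubBarrier {
    Region omega;
    double tau;
};

// The multi-barrier fires only if decision function (2) evaluates to 1 for
// every region Omega_1, ..., Omega_m.
bool multiBarrierTriggered(const uint8_t* prev, const uint8_t* curr, int width,
                           const std::vector<SubBarrier>& barriers, double theta) {
    for (const auto& b : barriers) {
        if (regionMSE(prev, curr, width, b.omega) <= theta + b.tau) {
            return false;            // one region sees no motion -> no trigger
        }
    }
    return !barriers.empty();        // all regions detected motion
}
```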

3.2. Mobile Image-based Photoelectric Virtual Barrier System

3.2.1. Hardware

We implement the image-based photoelectric barriers in a portable, practical, and economical system using the ESP32-Cam module [9]. A camera, microprocessor, and communication device are all included in this module. We developed an Android app to configure and control the system and to display the results. The ESP32-Cam, the necessary additional electronics, and the power source are placed in a simple casing and mounted on a tripod. Since Bluetooth is already natively supported by all devices, we do not need any additional modules for wireless communication. The components needed for a single photoelectric barrier are listed in Table 1, along with their retail costs, including VAT and shipping (Germany, 2023). We can construct a fully functional photoelectric barrier for about 38 euros in materials, which is already less than the cost of a single infrared photoelectric barrier sensor.
The heart of our system is the ESP32-Cam module, a breakout board with a 240 MHz ESP32 dual-core System-on-a-Chip (SoC), a camera connector, and wireless connectivity in the form of Bluetooth and WiFi. As a result, it offers all the crucial features we need in a single, extremely affordable module. The majority of retailers bundle it with an OV2640 camera module, which we also use. The SoC includes 512 kB RAM and 4 MB flash memory, which is quite small for image processing and was the main bottleneck we had to consider when designing the system. We added a power supply and a status LED. Figure 2 depicts the wiring schematics.

3.2.2. Communication and Clock Synchronization

Reliable device communication is a crucial component of our system. The best option for our needs is Bluetooth because it works with all devices and has a sufficient range. We had no problems with line-of-sight distances of up to 50 meters in an outdoor environment and with the default configuration of all devices. If the smartphone is positioned exactly midway between the two sensors, this allows us to cover measurement distances of at least 100 meters. A major issue is the variable latency of the Bluetooth connection, which ranges from a few to several hundred milliseconds and is extremely volatile. In order to obtain precise timings, the clocks of the devices must be synchronized. We decided to implement an online synchronization at setup time instead of a wired one.
From the smartphone, we periodically transmit "ping" messages to the photoelectric barrier device. The message initially contains the internal clock value of the smartphone. The photoelectric barrier adds its own internal clock value and sends the message back. Such a packet gives us both devices' internal clock values as well as the packet's round-trip latency. The only remaining uncertainty is how the latency is split between the forward and the backward direction. Repeating this procedure allows us to average out the uncertainty and obtain a precise clock synchronization. Then, in order to have time information in the same domain, we compute the clock offset between the smartphone and the device clock. We perform this step every time the smartphone connects to the device. This makes the system easy to use, as there is no need for the user to perform a special synchronization operation.
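The offset estimation can be sketched as follows, assuming the smartphone records its clock when a ping is sent and when the reply arrives, and the device stamps the reply with its own clock. Struct and function names are illustrative, not part of the app's actual code.

```cpp
#include <cstdint>

// One ping/pong exchange, all timestamps in milliseconds.
struct PingSample {
    int64_t tSendPhone;   // smartphone clock when the ping was sent
    int64_t tDevice;      // device clock when it answered the ping
    int64_t tRecvPhone;   // smartphone clock when the reply arrived
};

// Estimate the offset such that deviceTime + offset ~ smartphoneTime.
// Assuming the reply is generated roughly halfway through the round trip,
// averaging over many samples smooths out the asymmetry between the
// forward and backward latency.
double estimateClockOffset(const PingSample* samples, int count) {
    if (count <= 0) return 0.0;
    double sum = 0.0;
    for (int i = 0; i < count; ++i) {
        const PingSample& s = samples[i];
        const int64_t roundTrip = s.tRecvPhone - s.tSendPhone;
        const double phoneAtDeviceStamp = s.tSendPhone + roundTrip / 2.0;
        sum += phoneAtDeviceStamp - s.tDevice;
    }
    return sum / count;
}
```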

3.2.3. Android Control App

Since Android is the most popular mobile operating system and almost all smartphones and tablets meet the requirements for our system, we created an Android app to control and configure the photoelectric barrier devices. Young athletes and novice users alike can operate the system thanks to its simple and practical design. One button is all that is needed to start and stop the measurement once the barriers are connected and configured. Figure 2 shows screenshots of the App. The current camera image can be viewed in the configuration tab to align the virtual measurement line correctly with the real one. Multi-barriers can also be configured here.

3.2.4. Power Supply and Operation Time

The ESP32 microcontroller is a power-hungry device, especially with the radio activated. With an active Bluetooth connection and video capture, we measured about 300 mA at 3.3 V, so we can assume an average power consumption of roughly one watt. To power our mobile device, we use a rechargeable 9 V lithium-ion battery pack with a total capacity of 600 mAh according to the manufacturer's datasheet. Assuming two battery cells in series with a nominal cell voltage of 3.6 V each, the available energy is 2 · 3.6 V · 0.6 Ah = 4.32 Wh. This gives at least 4 hours of operation before the battery needs to be changed or recharged. In order to power the ESP32-Cam, we need a stable supply of 3.3 V. We use a HW-411 (LM2596) DC-DC buck converter to convert the varying battery voltage to the required 3.3 V.
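As a back-of-the-envelope check (ignoring converter losses and assuming the measured draw of 300 mA at 3.3 V), the expected operating time is

$$ t_{\mathrm{op}} \approx \frac{E_{\mathrm{battery}}}{P_{\mathrm{avg}}} = \frac{2 \cdot 3.6\,\mathrm{V} \cdot 0.6\,\mathrm{Ah}}{3.3\,\mathrm{V} \cdot 0.3\,\mathrm{A}} \approx \frac{4.32\,\mathrm{Wh}}{0.99\,\mathrm{W}} \approx 4.4\,\mathrm{h}, $$

which is consistent with the stated minimum of 4 hours.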
Although the ESP32-Cam module has an internal voltage regulator and could be powered directly from the battery, it is much more efficient to use a separate converter module. When powering the module directly, we measured 300 mA regardless of the input voltage; we assume the device simply uses a Zener diode to cap the voltage at 3.3 V. Assuming the nominal Li-ion cell voltage, the power consumption would then rise to 2 · 3.6 V · 0.3 A = 2.16 W, and the theoretical operating time would be more than halved. The actual runtime would be even lower because the battery cannot reliably provide that much power until it is completely drained. Another issue is the large amount of power that is converted to heat, which may require active cooling of the internal voltage regulator; otherwise, the unit could overheat and be destroyed. We use only passive cooling and have never experienced overheating problems with our design.

3.2.5. Reliability

To address the various reliability concerns, we measured the sensor noise and obtained an MSE of 20 to 30 depending on the lighting conditions. By default, we use a threshold of θ = 400 , which works reliably in most situations. Therefore, the sensor noise has only a marginal effect on the result and can be ignored in most cases. Our simple housing design and lightweight tripod are both susceptible to wind, but we only experienced problems with strong gusts. Vibrations of the floor, e.g. in a gym, did not affect the reliability of the system.

3.2.6. Online Repository

We make the schematics, microcontroller program, and Android app publicly available under the MIT license so that anyone can build and use the proposed system. The required files are published online on GitHub (https://github.com/Tachikoma87/CBPhotoelectricBarrier; see also the Data Availability Statement). Figure 3 shows our implementation of one camera-based photoelectric barrier, as used in the experiments in the next section.

4. Experiments

In order to determine the accuracy of our novel system, we conducted experiments timing 25-meter flying runs using a professional TAG Heuer® dual photoelectric barrier system (THS) and our camera-based system (CBS). Reliability is evaluated using Bland-Altman plots [10].
In our experimental setup, we positioned one THS photoelectric sensor at a line on the round course of a track and field stadium and the second one at a distance of 25 meters from the first, on a different line. The gate distance was about 1.5 meters, and the height of the sensors was adjusted so that they would be triggered when the athlete's chest crossed the measurement line. Since it is impossible to measure at exactly the same height, because the THS sensors would be blocked, we positioned our CBS sensors below those of the THS. Due to the lower vantage point, we had to point the sensors upwards in order to also detect the athlete's chest, and we matched the measurement lines as closely as possible, similar to what is shown in the middle image of Figure 1. This is not a perfect configuration because the THS is reliably triggered by the athlete's chest, while our CBS can also be triggered by a limb. The athletes started one meter before the first measurement line, and we recorded the reported timings of both systems for each trial.
We performed two experiments with the described setup on two different days with experienced track and field athletes. The first group consisted of 14 children, ranging in age from ten to twelve. Each participant completed six trials: the first two runs at low speed, the second two at medium speed, and the final two at high speed. In the second experiment, five adult track and field athletes participated, each completing nine trials: the first three at low speed, the second three at medium speed, and the last three at high speed.
For each experiment, we created a Bland-Altman plot to compare our CBS to the THS, which we consider the gold standard for our test scenario. The plots are shown in Figure 4. For the first experiment, 79 out of 84 measurements (94.05%) are within the 95% limits of agreement, and for the second experiment, 42 out of 45 measurement points (93.33%) are within the 95% limits of agreement. These results are very close to the 95% of points expected to lie within the limits of agreement, and we consider them strong evidence that our novel system performs reliably close to the professional system. Considering the theoretical time resolution of the CBS of 40 ms, as the microprocessor can only evaluate up to 25 frames per second, and the fast reaction time of the THS of 0.5 ms for the HL-2-31 photocell, we achieved very accurate timing for flying runs.
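For reference, the 95% limits of agreement used in the Bland-Altman analysis can be computed from the paired trial times as sketched below (standard Bland-Altman definition: mean difference ± 1.96 standard deviations of the differences). The function and type names are illustrative, and the sketch assumes both systems' times are given in seconds.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct LimitsOfAgreement {
    double bias;    // mean of (CBS - THS) differences
    double lower;   // bias - 1.96 * sd
    double upper;   // bias + 1.96 * sd
};

// ths, cbs: paired trial times of the reference system and the camera-based system.
LimitsOfAgreement blandAltman(const std::vector<double>& ths,
                              const std::vector<double>& cbs) {
    const std::size_t n = ths.size();   // assumes ths.size() == cbs.size() >= 2
    double mean = 0.0;
    for (std::size_t i = 0; i < n; ++i) mean += cbs[i] - ths[i];
    mean /= double(n);

    double var = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double d = (cbs[i] - ths[i]) - mean;
        var += d * d;
    }
    const double sd = std::sqrt(var / double(n - 1));   // sample standard deviation
    return {mean, mean - 1.96 * sd, mean + 1.96 * sd};
}
```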

5. Conclusion & Future Work

We explained how to implement a novel time-measurement system based on consumer-grade cameras that functions in a manner similar to photoelectric barrier sensors, using inexpensive consumer-grade electronic components and any Android smartphone. We then compared it to a professional photoelectric barrier system in order to assess its precision in a flying run scenario. According to the results, our system has comparable accuracy and provides reliable measurements.
Samples from a specific test can be recorded at regular intervals to track an athlete's performance development. In track and field, American football, and soccer, flying runs of up to 30 meters are frequently used to assess maximum speed and acceleration. The typical setup consists of two photoelectric barriers placed a certain distance apart, as we did in our experiments. Since the retail cost of a wireless system with features similar to ours is at least 2,000 euros, such equipment is typically only accessible to larger institutions and sports clubs. Our system, however, can be constructed from materials costing only about 80 euros, making it a cost-effective alternative to professional systems for smaller sports clubs and schools that cannot afford one. It can also be deployed more quickly and with less difficulty, since our system does not need reflectors like commercial systems do. Due to a hardware restriction of 25 frames per second, our system has a maximum accuracy of only 40 ms, whereas commercial photoelectric systems offer an accuracy of 1 ms. However, such high precision is only required for competitions and is seldom necessary in mass sports or physical education in schools.
In the future, it would be interesting to apply our method directly to smartphones and evaluate its precision. However, the authors' initial testing indicates that smartphones are not designed for such tasks and exhibit significant user-level limitations in the simultaneous capture and processing of camera images. The results will likely vary from model to model because Android smartphones have heterogeneous hardware. In this work, we used a dedicated hardware device (ESP32-Cam) that does not exhibit this issue.

Author Contributions

Conceptualization, Tom Uhlmann and Guido Brunnett; Data curation, Tom Uhlmann, Sabrina Bräuer and Falk Zaumseil; Formal analysis, Tom Uhlmann; Funding acquisition, Guido Brunnett; Investigation, Tom Uhlmann; Methodology, Tom Uhlmann and Sabrina Bräuer; Project administration, Guido Brunnett; Software, Tom Uhlmann; Writing – original draft, Tom Uhlmann; Writing – review & editing, Falk Zaumseil and Guido Brunnett.

Funding

This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 416228727 - CRC 1410.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of the Faculty of Behavioural and Social Sciences at Chemnitz University of Technology (Aktenzeichen #101502078, 21 September 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Software and schematics necessary to reproduce our proposed system are available under MIT license on Github: https://github.com/Tachikoma87/CBPhotoelectricBarrier. The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to thank all the volunteers who participated in this study.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Radke, R.J.; Andra, S.; Al-Kofahi, O.; Roysam, B. Image change detection algorithms: a systematic survey. IEEE Transactions on Image Processing 2005, 14, 294–307. [Google Scholar] [CrossRef] [PubMed]
  2. Community, K. Kinovea. https://www.kinovea.org/.
  3. Balsalobre-Fernández, C.; Tejero-González, C.M.; del Campo-Vecino, J.; Bavaresco, N. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps. The Journal of Strength & Conditioning Research 2014, 28, 528–533. [Google Scholar] [CrossRef]
  4. Quintana, M.; Padullés, J.M.; others. High-speed cameras in sport and exercise: Practical applications in sports training and performance analysis. Aloma: revista de psicologia, ciències de l’educació i de l’esport Blanquerna 2016, 34, 11–24. [Google Scholar] [CrossRef]
  5. Zhao, M.; Nie, Y.; Li, J.; Shuang, F.; Xie, Z.; Feng, Z. An automatic timing method for photo finish. 2013 8th International Conference on Computer Science & Education. IEEE, 2013, pp. 902–906.
  6. Li, J.; Nie, Y.; Zhao, M.; Shuang, F.; Zhu, B.; Xie, Z.; Feng, Z. A high accuracy automatic timing method for photo finish systems. 2014 IEEE International Conference on Progress in Informatics and Computing. IEEE, 2014, pp. 195–199.
  7. Kaiser, S. SprintTimer App. https://appmaker.se/home/sprinttimer/.
  8. Voig, J.L.; Voigt, A. Photo Finish App. https://photofinish-app.com/.
  9. Ai-Thinker. ESP32-Cam Module from AI-thinker. http://www.ai-thinker.com/pro_view-24.html.
  10. Bland, J.M.; Altman, D. Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet 1986, 327, 307–310. [Google Scholar] [CrossRef]
Figure 1. Camera footage with the detection area shown in black and the white areas marking the regions where the image-to-image error is actually calculated. Left: Full virtual barrier. Middle: Default virtual barrier in white. Right: Multi-barrier setting.

Figure 2. The left image of this figure shows the schematics for one photoelectric barrier. The RGB diode is used to show the status of the device and the voltage divider is used to measure the battery charge. The right image shows a screenshot of the control app, which displays important parameters of the hardware and the recorded trials. Our app is compatible with all Android smartphones and tablets version 5.0 and above that have Bluetooth capabilities.

Figure 3. Our camera-based photoelectric barrier can be easily placed on a tripod. Unlike traditional systems, we don’t require a reflector, which greatly reduces setup time. Bluetooth communication eliminates the need for wires.
Figure 4. Bland-Altman plots of the two experiments performed. Almost all measurement points are within the 95% limits of agreement, which shows that our system has similar accuracy to a professional photoelectric barrier system.

Table 1. Component list for a single camera-based photoelectric barrier. These are the prices we paid in online stores in Germany including VAT and shipping in 2023. We summarize materials such as wires, casing materials, and solder as miscellaneous. We use a high-capacity rechargeable 9V lithium-ion battery to ensure sufficient operating time.
Component               Price
Tripod                  20.00 €
ESP32-Cam                9.00 €
Battery (lithium-ion)    6.00 €
Buck converter           1.50 €
Miscellaneous            1.50 €
Total                   38.00 €
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.