Preprint
Article

Comparative Analysis of Telepresence Robots’ Video Performance: Evaluating Camera Capabilities for Remote Teaching and Learning

A peer-reviewed article of this preprint also exists.

Submitted: 27 November 2023; Posted: 28 November 2023

Abstract
The COVID-19 outbreak demonstrated the viability of various remote working solutions, telepresence robots (TPRs) being one of them. High-quality video transmission is one of the cornerstones of using such solutions, as most information about the environment is acquired through vision. This study compares the camera capabilities of four popular telepresence robot models using compact reduced LogMAR and Snellen optometry charts as well as text displayed on a projector screen. The symbols in the captured images are extracted using OCR (Optical Character Recognition) software, and the recognition results are compared with the symbols on the charts. The Double 3 TPR provides the best-quality images of the optometric charts, but the OCR results for the projector measurements do not show a clear advantage of any single model over the others. The results demonstrated by the Temi 2 and Double 3 TPRs are generally better than those of the others, suggesting that these TPRs are better suited for teaching and learning scenarios.
Subject: Computer Science and Mathematics – Robotics

1. Introduction

Telepresence robots (TPRs) incorporate video conferencing equipment into a mobile platform that can be controlled remotely. TPRs are becoming increasingly popular in different domains, including education, where they enable remote learning and allow students to participate in class from anywhere [1–3]. This can be especially beneficial for students unable to attend in-person classes due to illness or disability. TPRs also allow educators to reach a wider audience and provide access to education for students who may not have had the opportunity to attend in-person classes or are physically unable to be present at a location [1,4].
The quality of video streaming significantly affects the usability of TPRs. This is influenced by factors such as the camera used, network connection, and image processing algorithms [5]. In the context of mobile video streaming applications, video quality is a key element that affects usability, with factors such as bandwidth, network stability, and screen size playing a role [6]. Several studies suggest that TPRs offer acceptable video quality with SVGA (Super Video Graphics Array) resolution and a wide field of view, improved quality compared to fixed camera systems, and sufficient audio and video quality at certain data rates. Some authors highlight the need to consider both technical and user-related factors in the assessment [7].
One of the key characteristics affecting the usability of a TPR is its “eyesight”, that is, the quality of the video it streams. The main factor determining video quality is the robot's camera; however, it is also affected by other factors, including the network connection and image-processing algorithms.
Comparing the image quality of video streams or still images requires a common scale. In addition, as a substantial amount of information in the education domain is delivered using slides, it is logical to test the operator's ability to read such text.
Optometrists use a variety of charts to test and evaluate a person's eyesight. These charts determine a person's visual acuity, which is a measure of how clearly a person can see, and we believe that the same method can be used to estimate and compare the quality of a robot's video subsystem. LogMAR (Logarithm of the Minimum Angle of Resolution) and Snellen are among the most popular systems used to measure visual acuity, or the clarity of vision [8].
The Snellen chart, invented in 1862 by the Dutch ophthalmologist Herman Snellen, is the most widely recognized eye chart [9] and is used to measure visual acuity from 20 feet. It consists of rows of letters that become progressively smaller toward the bottom of the chart. The smallest line of letters that a person can read accurately determines their visual acuity, which is expressed as a fraction: the top number (numerator) represents the distance at which the test was taken (20 feet, or approx. 6 meters), and the bottom number (denominator) represents the distance at which a person with normal vision can read the same line. For example, 20/20 vision means that a person can see at 20 feet what a person with normal vision can see at the same distance [10].
The LogMAR chart, introduced in 1976, is a more advanced system [11]. It uses a logarithmic scale to express visual acuity, with a higher score indicating worse visual acuity. The chart likewise consists of letters that become progressively smaller toward the bottom, and the smallest line that a person can read accurately determines their visual acuity. For this reason, LogMAR charts have replaced Snellen charts in ETDRS (Early Treatment of Diabetic Retinopathy Study) standard tests [11].
Overall, the LogMAR system is considered more accurate and precise than the Snellen system [3], as it considers the size and contrast of the letters on the chart and is not affected by the distance at which the test is taken. However, the Snellen chart is still widely used due to its simplicity and ease of use. Results obtained from the Snellen chart can be converted to LogMAR scale and vice versa either using conversion tables [12] or online tools [13].
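Since both scales express the same quantity, the conversion can also be computed directly rather than looked up. The following minimal Python sketch captures the standard relationship (LogMAR is the negative base-10 logarithm of the decimal Snellen fraction); the function names are ours, for illustration only.

import math

def snellen_to_logmar(numerator, denominator):
    # LogMAR = -log10(decimal acuity); 20/20 gives 0.0, 20/200 gives 1.0
    return -math.log10(numerator / denominator)

def logmar_to_snellen(logmar, distance=20):
    # Invert the relationship at a given test distance (in feet)
    return f"{distance}/{round(distance * 10 ** logmar)}"

print(snellen_to_logmar(20, 40))   # 0.301... (20/40 is approx. 0.3 LogMAR)
print(logmar_to_snellen(0.3))      # 20/40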
Lim et al. [11] demonstrated that “similar acuity results were recorded from all three charts, suggesting a lack of a systematic bias as regards chart design. A small practice effect was observed for all charts but was greatest for Snellen and least for ETDRS”.
Based on the above, this study aims to evaluate the visual capabilities of several TPR models in a test environment that mitigates the effect of secondary factors not directly related to the image acquisition system of the TPRs, providing a high-quality internet connection and good lighting conditions.
The study aims to answer the following research questions:
  • To what extent do the visual attributes of the examined TPRs facilitate their efficacy in supporting active engagement during academic activities such as school lessons and university lectures?
  • Which of the examined TPR models demonstrate better results in terms of streamed video quality?
The paper is structured as follows. Following this introduction, the paper outlines the materials and methods used in the study, including the measurement of visual acuity and of text readability on a wall projector. The results of the study are then presented, followed by a discussion of the findings. Finally, the paper concludes with a summary of the key findings and implications for future research.

2. Materials and Methods

The experiments included four models of telepresence robots: Double 3 by Double Robotics [14], Ohmni by OhmniLabs [15], and Temi 2 and Temi 3 by Roboteam Home Technology [16]. They consisted of two separate sets of measurements: (1) measuring visual acuity using Snellen and LogMAR charts at a distance of 3 meters and an illuminance of approximately 600 lux, as recommended for these charts [17] ("It is recommended that VA assessment always be performed between 400 lx and 600 lx, as this limits any effect of illuminance change to 0.012 LogMAR"); and (2) measuring text readability on the wall projector in the classroom at distances of 5 meters and 10 meters. Images were acquired by taking screenshots with the Windows Snipping Tool in the lossless PNG (Portable Network Graphics) format [18,19] and, where available, by using the robots' integrated functions for storing still images from the camera.
Data transfer from the TPRs to the operator is conducted via a cloud service; therefore, the quality of the internet connection plays a significant role in the experiment. The building where the experiment took place is connected to the internet via two channels with a bandwidth of 1 Gbit/s each. The robots were connected to a 2.4 GHz wireless network. The experiment took place on a Sunday, with no other people present in the building and therefore with minimal interference.
Network connection speed was measured using the Speedtest [20] service, with a connection to Telia Eesti AS servers (Table 1).
The Double 3 and Ohmni robots have integrated qualitative indicators that inform the user of the networking conditions; both rated the network connection as "good" (the highest category on their scales).
The readability analysis of the images captured from the robots was performed using the Google Vision AI optical character recognition service [21].
Vision AI returns its results in JSON format [22], with each recognized symbol assigned a confidence coefficient. The JSON file was parsed using the code in Appendix A, and a line of text on the chart was considered not recognized if at least one of the following conditions was met:
  • The confidence coefficient of at least half of the symbols in the line was below 0.4,
  • Two or more symbols in the line were recognized incorrectly.
These conditions were found empirically, but utmost precision is not required, as the main goal of using computer OCR technology was to guarantee an unbiased comparison by applying the same algorithm to all images.
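For illustration, a minimal Python sketch of this rejection rule follows; the data layout (per-line symbol-confidence pairs) and helper names are our assumptions, not part of the Vision AI response format.

def line_recognized(expected, recognized, conf_threshold=0.4):
    # `recognized` pairs each OCR symbol with its confidence coefficient;
    # `expected` holds the letters actually printed on the chart line.
    low_conf = sum(1 for _, conf in recognized if conf < conf_threshold)
    if low_conf >= len(recognized) / 2:   # condition 1: half or more symbols below 0.4
        return False
    errors = sum(s != e for (s, _), e in zip(recognized, expected))
    errors += abs(len(recognized) - len(expected))   # missing/extra symbols count as errors
    return errors < 2                     # condition 2: two or more wrong symbols fail the line

# One misread letter plus one low-confidence symbol still passes:
print(line_recognized("NCKZO", [("N", 0.9), ("C", 0.3), ("K", 0.8), ("Z", 0.9), ("Q", 0.7)]))  # True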
Control of the robots and image acquisition were performed on an HP EliteBook 620 G4 notebook computer running Microsoft Windows 11 Education (64-bit) with a screen resolution of 1920x1080 @ 60 Hz at maximum brightness, and an external 24-inch HP E24i G4 monitor (1920x1080 @ 60 Hz, maximum brightness) connected via DisplayPort. The Microsoft Edge browser, version 108 (64-bit), and the Photos app (preinstalled in Microsoft Windows) were used.
The robots had the following versions of software installed:
  • Double 3: head software 1.2.6, base firmware 30
  • Ohmni: v. 4.1.7-devedition, release track stable
  • Temi 2 and Temi 3: v. 128.12, firmware 20201216
While Temi has no still-image saving functionality, the Double 3 and Ohmni robots can capture still images in the following formats:
  • Ohmni – JPG, lossy format [15];
  • Double 3 – JPG, lossy format.

2.1. Measuring Visual Acuity

Although the original LogMAR test requires a distance of 20 ft (6 m) between the patient and the chart, such spacious rooms are not always available; therefore, scaled versions were developed for distances of 4 meters, 3 meters, 2.5 meters, and 2 meters [24].
Two charts, one Snellen and one LogMAR, were printed at the scale required for 3-meter measurements [9,25] using high-contrast printing mode. Although the printouts were scaled in accordance with the standards, a quick check was performed: one of the authors wears glasses with -2.50 D for the right eye and -1.50 D for the left eye. According to the Snellen chart specification, a person with -2.50 D eyesight should be able to see letters of 20/200 size (line 1), and with -1.50 D, letters of 20/100 size (line 2). Additionally, with glasses on (normal eyesight), a person should be able to distinguish 20/20 symbols (line 8). Keeping in mind that converting visual acuity into diopters is a far more complicated process, the check succeeded: the ability to distinguish the symbols on the charts corresponded to the expected results.
In most real-life cases, the choice of the chart is affected not only by its quality but also by the qualifications of the personnel, ease of use, and the time spent on measurements. For this experiment, however, there was no time limit for a single measurement, and conversion into diopters with the precision required for prescribing glasses was not needed; therefore, both Snellen and LogMAR charts were used in parallel, and their OCR results should yield the same or similar values according to the conversion tables.
The charts were placed on a white wall illuminated by fluorescent light bulbs. No considerable amount of natural light was present in the room. Illuminance was measured using a Sauter SO 200K digital light meter with a reading accuracy of ±3% rdg ± 0.5% f.s. [26]; the average measured value was 662 ± 6 lux. This illuminance distribution is considered even. The authors define deviations of ±10 lux as acceptable; as long as the measurement results stay within this interval, their variations are not considered.
Fluorescent light bulbs have discrete emission spectrum with main peaks at 546 nm and 611 nm (Figure 1).
According to Sauter SO 200K lux meter technical specifications [26], its spectral sensitivity is sufficient for measuring illuminance in this wavelength range (Figure 2).
The illuminance was measured (Figure 3) both at the beginning and the end of the measurement session, and the disparity fell within the acceptable limits.
The color temperature was measured using the Pocket Light Meter app by Nuwaster studios [27] on the front camera of an iPhone 13 Pro [28] (Figure 4).
The distance to the chart was measured with a Duka LS-P laser distance meter with an error margin of 1 mm [29]. The authors define a distance measurement deviation of ±10 cm (about 3.94 in) as acceptable; measurements within this interval were treated as equivalent.
Of the three robot platforms, Double 3 is the only one with an adjustable height, which can be changed within the range of 120-150 cm (3.94-4.92 ft). The Temi camera is located 94 cm (about 3.08 ft) above the floor, and the Ohmni camera 140 cm (about 4.59 ft) above the floor. A middle position of 120 cm was therefore chosen, which lies within the Double 3 height range. The Temi and Ohmni TPRs can tilt their cameras so that the charts are in the center of the camera's field of view, which is important for measurements in zoom mode.

2.2. Measuring the Text's Readability on the Wall Projector

Using light projectors to present slides is a common scenario in education. As CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor) cameras generally have a narrower dynamic range than the human eye, it is important to understand whether the robot operator can read the text on the slides.
As screen dimensions and room sizes may vary significantly, the use of standard optometric charts is no longer feasible; it was therefore decided to use a reference slide and apply Google Vision AI OCR to obtain comparative results for the different TPR models. The slide contained lines of black text on a white background in Arial Bold font (Figure 5).
The letters for the slide were selected to represent different visual groups of symbols of the Latin alphabet: round letters, straight letters, diagonal letters, and curved letters.
Best practices for presenters recommend a font size of at least 24 pt [30,31], but for the purposes of this experiment the slide contained font sizes ranging from 12 pt to 60 pt.
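For reproducibility, a slide like the one in Figure 5 can be regenerated with a few lines of Pillow; the exact letter set, the image size, and the font file path (arialbd.ttf, i.e., Arial Bold as shipped with Windows) below are our assumptions rather than the exact slide used.

from PIL import Image, ImageDraw, ImageFont

# Hypothetical letter groups: round, straight, diagonal, and curved Latin letters
LETTERS = "O C D B  H L T E  V W X K  S G R Q"
SIZES = [60, 48, 36, 28, 24, 20, 16, 14, 12]   # pt range used in the experiment

img = Image.new("RGB", (1280, 800), "white")   # matches the projector resolution
draw = ImageDraw.Draw(img)
y = 40
for size in SIZES:
    font = ImageFont.truetype("arialbd.ttf", size)   # Arial Bold
    draw.text((40, y), f"{size} pt   {LETTERS}", fill="black", font=font)
    y += size + 20
img.save("reference_slide.png")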
The room was equipped with a NEC P525WL laser projector with 1280 x 800 resolution and 5000 lumens brightness. The projector was placed at a distance of 5.3 meters from the wall.
The robots were placed at a distance of 5 meters and 10 meters from the screen and the measurements were made under two different conditions:
  • Light in the class is turned on,
  • Light in the class is turned off and the only source of light is the projector itself.

3. Results

The figures below show the images taken by the different TPR models (Figure 6). The image taken by the Temi 2 robot is more blurred than the others. Though detailed, the image taken by Ohmni has noticeable artefacts caused by its image processing algorithms.
Sample images of the slides in a dark room captured by the Ohmni and Temi 3 TPRs (Figure 7) reveal that both robots have sufficient video resolution to make the text readable. However, Ohmni fails to do so due to its camera exposure settings.
Figure 7. Sample images of the slides in the dark room.
The optometric chart measurement results are presented in Table 2. The fractions in the Snellen columns should be interpreted as follows: the numerator (20) is the distance to the chart in feet, whereas the denominator is the distance at which a person with normal eyesight can read letters of the same size. Accordingly, only Double 3, when using its zoom camera, matched the results of a person with average eyesight.
Table 3 contains the results of the measurements of text readability on the slides with the ambient light switched off and on, at distances of 5 meters and 10 meters from the screen. In this case, a significant role is played by how the robot adjusts the sensitivity of its sensor. Although Temi 2 has a lower-quality camera, its image processing algorithms seem to provide a better result than those of the later model.
Double 3 achieves good results due to the presence of a zoom lens, which allows the slide to take up most of the image so that data processing balances the result correctly.
Overall, none of the robots demonstrated a clear advantage, although Temi 2 and Double 3 results are superior.

4. Discussion

The quality of the final result is determined by two main parameters:
  • Camera hardware properties and quality,
  • Image processing algorithms implemented in software.
Image processing is a general term for different types of manipulation of digital images, including noise reduction, adjusting contrast, brightness, or sharpness, and applying numerous types of filters.
For example, Ohmni uses aggressive sharpening algorithms that distort the image and make the letters harder to distinguish. Although such image processing may add some benefits in other scenes, it clearly works as a disadvantage when reading text.
Images with a wide range of light intensities from the darkest to the brightest (wide dynamic range) are still a challenge for camera sensors; therefore, the text in images of a bright screen in a dark room is sometimes unreadable not because of low camera resolution, but because of brightness adjustments made by the TPR's image processing algorithms. There are different exposure metering modes, such as center-weighted, matrix, and spot metering, with the first ones usually showing better results; but in this particular case, with a bright projector image in a dark environment, they result in overexposure of the slide. Double 3 partially mitigates this problem by using its zoom camera: the slide then takes up a larger part of the image, resulting in better balance.
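To see why, consider a rough numerical sketch of center-weighted metering; the weighting scheme below is illustrative and does not correspond to any particular camera.

import numpy as np

def center_weighted_mean(gray, center_weight=4.0):
    # Weight the central quarter of the frame `center_weight` times
    # more heavily than the periphery, as center-weighted metering does.
    h, w = gray.shape
    weights = np.ones((h, w))
    weights[h // 4:3 * h // 4, w // 4:3 * w // 4] = center_weight
    return float((gray * weights).sum() / weights.sum())

frame = np.full((720, 1152), 10.0)    # dark classroom, 0..255 luminance scale
frame[300:420, 500:660] = 230.0       # small bright area: the projected slide
print(center_weighted_mean(frame))    # ~22: the meter sees a dark scene, so the
                                      # camera raises exposure and the slide clips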
Ohmni has manual controls for contrast, sharpness, and exposure, but their use is hardly feasible, as lighting conditions may change, the settings are unintuitive, and no one in the experimental group was able to achieve a significant improvement in image quality by adjusting them manually.
Although the readability of slides in general depends not only on the size of the letters but also on the background color, contrast, and the technical specifications of the projector, this experiment provides comparative results.
All TPRs demonstrate much worse readability when the ambient light is turned off; therefore, it is preferable to keep the room lit when showing slides.
Ohmni demonstrated significantly better chart readability when OCR was applied to a still image taken by the TPR rather than to a frame from the video stream. This is because, although Ohmni has a front camera with an ultra-high-definition sensor (3840x2160 pixels), it only uses the full resolution for taking still images; in general, the robots do not stream video at full camera resolution. Additionally, the video resolution is adjusted dynamically depending on the connection quality. The Double 3 log file (see Appendix B) indicates that even on a high-speed network it uses a resolution of 1152x720 pixels for transmitting video, which is significantly lower than what its camera can provide.
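A small sketch of how such a log extract can be summarized follows; the rule for spotting the width/height pair is ours, inferred from the column layout in Appendix B.

def frame_sizes(log_text):
    # Pick out the frameWidth/frameHeight pair in each row: the first two
    # adjacent whole numbers that are both plausible pixel dimensions (>= 240).
    sizes = []
    for line in log_text.strip().splitlines()[1:]:   # skip the header row
        nums = [float(tok) for tok in line.split()]
        for a, b in zip(nums, nums[1:]):
            if a >= 240 and b >= 240 and a.is_integer() and b.is_integer():
                sizes.append((int(a), int(b)))
                break
    return sizes

log = """externalPing fps frameWidth frameHeight bitrate
116 29 1152 720 3.36
29 640 480 0.257 0.012 0.246"""
print(frame_sizes(log))   # [(1152, 720), (640, 480)]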
Although hardly usable in real time, high-resolution still images offer an opportunity to store the text with higher readability than video frames.
This also means that, in most cases, the resolution of the monitor used by the TPR operator will not affect text readability, because the resolution of the video transmitted by the TPRs is lower than the Full HD resolution supported by the vast majority of common computer screens.
Double 3 is the only robot that has two front cameras with different focal lengths (optical zoom). The other models offer digital zoom only; in other words, they stretch the center of the image acquired by the camera. The second camera gives Double 3 a significant advantage over the other models, especially in visual acuity measurements using charts. Double 3 is the only robot that achieved the acuity of normal eyesight.
It should be noted that in real life the results would be influenced by a wide range of additional factors, including network connection quality, optimization algorithms implemented by the developers, the location of the data centers, sun glare, and many more.
Although it has previously been demonstrated [32] that text readability increases with illumination, illumination above 650 lux should not be expected in a school or university classroom environment.
Whenever possible, low ambient light conditions should be avoided if one or several persons attend the class using the TPR models under study.
This study can be further expanded by adding other TPR models and by introducing real-life situations in which students or teachers use a telepresence robot in the classroom. The preliminary results indicate that reading text from a monitor is likely to be complicated or dependent on its brightness settings, and that image quality may be reduced significantly by a low-speed connection or long delays in data transmission.

5. Conclusions

The quality of text readability using telepresence robots is influenced by two main factors: camera hardware properties and image processing algorithms. Four TPR models were evaluated for text readability: Ohmni, Double 3, Temi 2, and Temi 3. Double 3 had the best overall readability, followed by Temi 2 and then the others. This is due to Double 3's dual-lens camera system, which provides better image quality and zoom capabilities. Ohmni's readability was hampered by its aggressive sharpening algorithms, which distorted the image and made the letters harder to distinguish. All TPRs' image readability results were also affected by the exposure metering modes, which produced overexposed images in certain lighting conditions.
It was also found that using optical zoom, rather than digital zoom, resulted in better image quality and text readability. Additionally, taking still images, rather than using video frames, also improved text readability, because still images can be captured at a higher resolution than video frames. In general, it is preferable to keep the room lit when using TPRs for presentations; this helps to ensure that the text is readable and that the overall presentation quality is high.
The study on the camera capabilities of TPRs for remote teaching and learning has several implications for further research. While this study compared the camera quality of different TPR models, it would be valuable to investigate other factors that could influence the accuracy of OCR in these environments; factors such as lighting conditions and the complexity of the displayed text could play a role in OCR performance. The study focused on OCR accuracy for optometry charts and text on a projector screen. Further research could explore the performance of OCR in other applications, such as real-time transcription of lectures or capturing whiteboard notes. It would also be interesting to compare the OCR accuracy of TPRs with human vision under the same conditions; this could help determine the extent to which OCR can effectively replace human vision in remote teaching and learning scenarios. While the study focused on the technical aspects of TPRs, it is crucial to consider user experience and acceptance: research could explore factors such as user comfort, perceived effectiveness, and potential barriers to the adoption of TPR-based remote learning. Finally, the study primarily involved controlled experiments; further research could implement TPRs in real-world educational settings to evaluate their effectiveness in enhancing teaching and learning outcomes.

Author Contributions

Conceptualization, A.T., J.L. and S.V.; methodology, A.T. and J.L.; software, A.T.; validation, A.T., J.L. and S.V.; formal analysis, S.V.; investigation, A.T.; resources, A.T.; data curation, A.T.; writing—original draft preparation, A.T.; writing—review and editing, A.T., J.L. and S.V.; visualization, A.T.; supervision, S.V. and J.L.; project administration, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it did not involve sensitive or identifiable data about human subjects.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available upon request to the first author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. JSON File Parsing

import itertools
import json


def parse_json(file_path):
    # Load the Vision AI response and return the text blocks of the first page.
    with open(file_path, 'r', encoding="utf-8") as f:
        data = json.load(f)
    return data['fullTextAnnotation']['pages'][0]['blocks']


def get_symbols(blocks):
    # Flatten blocks -> paragraphs -> words and yield every recognized symbol.
    for block in blocks:
        if block['blockType'] == "TEXT":
            paragraphs = (p['words'] for p in block['paragraphs'])
            words = itertools.chain.from_iterable(paragraphs)
            symbols = itertools.chain.from_iterable(w['symbols'] for w in words)
            yield from symbols


def main():
    blocks = parse_json('ocr-google.json')
    # Print each symbol whose confidence coefficient exceeds the 0.4 threshold.
    for symbol in (s for s in get_symbols(blocks) if s['confidence'] > 0.4):
        print(f"{symbol['confidence']}\n{symbol['text']}\n{'=' * 6}")


if __name__ == '__main__':
    main()

Appendix B. The Extract from Double 3 Log-File

externalPing fps frameWidth frameHeight bitrate audioBitrate videoBitrate
116 29 1152 720 3.36
29 640 480 0.257 0.012 0.246
111 30 1152 720 4.31
29 480 360 0.657 0.002 0.654
109 30 1152 720 3.93
30 640 480 0.768 0.002 0.766
111 30 1152 720 3.83
28 640 480 1.027 0.002 1.024
112 30 1152 720 4.41
30 640 480 1.01 0.002 1.007

References

  1. Leoste, J.; Virkus, S.; Kasuk, T.; Talisainen, A.; Kangur, K.; Tolmos, P. Aspects of Using Telepresence Robot in a Higher Education STEAM Workshop. Information Integration and Web Intelligence, 2022, 13635, 18–28. [CrossRef]
  2. Leoste, J.; Kikkas, K.; Tammemäe, K.; Rebane, M.; Laugasson, E.; Hakk, K. Telepresence Robots in Higher Education – The Current State of Research. Robotics in Education, 2022, 515, 124–134. [CrossRef]
  3. Botev, J.; Rodríguez Lera, F.J. Immersive Robotic Telepresence for Remote Educational Scenarios. Sustainability, 2021, 13, 4717. [CrossRef]
  4. Kasuk, T.; Virkus, S. Exploring the Power of Telepresence: Enhancing Education through Telepresence Robots. Inf. Learn. Sci., 2023. [CrossRef]
  5. Jahromi, H.Z.; Bartolec, I.; Gamboa, E.; Hines, A.; Schatz, R. You Drive Me Crazy! Interactive QoE Assessment for Telepresence Robot Control. 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, 1–6. [CrossRef]
  6. Hussain, A.; Mkpojiogu, E.; Kamal, F.M. Mobile Video Streaming Applications: A Systematic Review of Test Metrics in Usability Evaluation. Penerbit Univ. Tek. Malays. Melaka Press, 2016. [Online]. Available: Link.
  7. Vlahovic, S.; Mandurov, M.; Suznjevic, M.; Skorin-Kapov, L. Usability Assessment of a Wearable Video-Communication System. 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), 2020, 1–6. [CrossRef]
  8. Bailey, I.L.; Lovie, J.E. New Design Principles for Visual Acuity Letter Charts. Optom. Vis. Sci., 1976, 53(11), 740–745. [CrossRef]
  9. Snellen Eye Chart. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  10. What Is 20/15 Vision & How Can Someone Get It? - NVISION Eye Centers. Accessed: Oct. 29, 2023. [Online]. Available: Link.
  11. Lim, L.-A.; Frost, N.A.; Powell, R.J.; Hewson, P. Comparison of the ETDRS logMAR, ‘compact reduced logMar’ and Snellen charts in routine clinical practice. Eye, 2010, 24(4), 673–677. [CrossRef]
  12. Johnson, C. Vision acuity chart conversion table. Accessed: Oct. 29, 2023. [Online]. Available: Link.
  13. Snellen - logMAR Visual Acuity Calculator. [Online]. Available: Link.
  14. Double Robotics homepage. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  15. OhmniLabs homepage. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  16. Roboteam Home Technology, Temi robot. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  17. Tidbury, L.P.; Czanner, G.; Newsham, D. Fiat Lux: the effect of illuminance on acuity testing. Graefes Arch. Clin. Exp. Ophthalmol., 2016, 254(6), 1091–1097. [CrossRef]
  18. Portable Network Graphics (PNG): Functional specification. [Online]. Available: Link.
  19. PNG Documentation. [Online]. Available: Link.
  20. Speedtest homepage. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  21. Google Vision AI homepage. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  22. JSON RFC8259 format description. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  23. Digital compression and coding of continuous-tone still images: Requirements and guidelines. Accessed: Oct. 29, 2023. [Online]. Available: Link.
  24. Sussex Vision International homepage, LogMAR charts. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  25. LogMAR Eye Chart. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  26. Sauter SO 200K luxmeter datasheet. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  27. Pocket Light Meter App (Appstore). Accessed: Jan. 11, 2023. [Online]. Available: Link.
  28. iPhone 13 Pro technical specifications. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  29. Xiaomi Duka LS-P high-precision infrared laser range finder technical specifications. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  30. The Association of Research Libraries: PowerPoint Guidelines for Presenters. Accessed: Jan. 11, 2023. [Online]. Available: Link.
  31. Lucas, J. What font size should you use for your PowerPoint? Accessed: Jan. 11, 2023. [Online]. Available: Link.
  32. Ferris, F.L.; Sperduto, R.D. Standardized Illumination for Visual Acuity Testing in Clinical Research. Am. J. Ophthalmol., 1982, 94(1), 97–98. [CrossRef]
Figure 1. Fluorescent lighting spectrum peaks.
Figure 2. Sauter SO 200K lux meter’s spectral sensitivity.
Figure 3. Measuring the illuminance of the visual acuity charts.
Figure 4. Color temperature of the lamp and of the wall where the charts are hanging.
Figure 5. The reference slide content.
Figure 6. Sample images of the optometry charts.
Table 1. Speedtest connection properties report.

IP address        Download (Mbps)   Upload (Mbps)   Latency (ms)   Server    Distance (miles)   Connection mode
193.40.xxx.xxx    311.17            267.43          3              Tallinn   0                  multi
90.190.xxx.xxx    41.92             20.19           9              Tallinn   100                multi
Table 2. The results of the optometric measurements on the LogMAR (lower values are better) and Snellen charts.

                 Double 3             Ohmni                Temi 3               Temi 2
                 Snellen   LogMAR     Snellen   LogMAR     Snellen   LogMAR     Snellen   LogMAR
Screenshot       20/50     0.4        20/70     0.5        -         -          -         -
Maximum zoom     20/20     0.1        20/120    0.6        20/50     0.6        20/70     0.5
Table 3. Smallest readable font size in the projector image, in pt (lower values are better; X means no text was readable; "none" means no still image was available).

Condition                          Double 3   Ohmni   Temi 3   Temi 2
5 meters, lights on                20         28      20       24
5 meters, lights off               60         X       36       28
5 meters, screenshot               12         54      X        none
10 meters, lights on, max zoom     12         14      44       24
10 meters, lights on               48         60      54       40
10 meters, lights off              X          X       X        40
10 meters, lights on, max zoom     44         60      44       40
10 meters, screenshot              54         48      none     none
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.