1. Introduction
Cobots operate in shared workspaces with humans, necessitating seamless interaction with their human counterparts [1] (Faccio & Cohen, 2023). This interaction goes beyond mere task execution; it involves a dynamic exchange of information, coordination of efforts, and mutual understanding to optimize the synergies between human skills and robotic precision [2] (Faccio et al. 2023).
Effective communication and collaboration between human workers and cobots are pivotal for maximizing the benefits of cobot integration. The need arises from the intricacies of shared tasks, where human operators and cobots must coordinate their actions to achieve optimal results [3] (Liu et al. 2024). Moreover, as cobots are designed to be more adaptable and responsive to changes in the manufacturing environment, clear communication becomes essential for successful task allocation, problem-solving, and overall operational efficiency [4] (Papetti et al. 2023). Effective communication fosters a sense of cooperation, trust, and shared purpose, thereby enhancing the overall productivity and safety of the collaborative work environment [5] (Gross & Krenn, 2023).
This paper focuses on human-cobot communication related to cobots’ assistive functions. Vocal communication can free a worker's hands and eyes, allowing them to perform tasks while receiving information or instructions through audio support [6] (Moore & Urakami, 2022). In many application areas today, this approach is preferred over augmented reality or head-mounted-display-based support [7] (Marklin et al. 2022). Thus, for assisting a worker performing manufacturing or assembly tasks, vocal and audio communication is preferred over other communication forms.
This paper delves into strategies for optimizing vocal communication and collaboration between human workers and cobots. It shows that these strategies enhance productivity and safety in assistive work, which is its contribution to the transformative role of collaborative robotics in modern manufacturing. By addressing this critical nexus, we aim to guide future research endeavors and industry practices toward better collaboration between humans and cobots on the factory floor.
2. Literature Review
Human-cobot communication plays a crucial role in task allocation and safety in collaborative assembly systems [8,9] (Heydaryan et al. 2018; Petzoldt et al. 2023). Schmidbauer et al. (2023) [10] examined static vs. dynamic task allocation preferences of human workers and found that workers preferred adaptive task sharing (ATS) to a static, predetermined task allocation and reported increased satisfaction with the dynamic allocation. Workers are more likely to assign manual tasks to cobots, while preferring to handle cognitive tasks themselves [10] (Schmidbauer et al. 2023). These dynamic characteristics of collaboration require good communication between humans and cobots. Cobot-human communication can take one of the following forms:
Text communications: The human user uses text to program the cobot and to send it commands and information [11] (Jacomini et al. 2023); the cobot may also use text to communicate messages, explanations, cautionary warnings, and other information types [12,13] (Scalise et al. 2017; Kontogiorgos, 2023).
Visual communications: Graphical displays, augmented reality (AR), and vision systems are key components of visual interfaces that enable workers to comprehend and interpret information from cobots efficiently [14,15] (Zieliński et al. 2021; Pascher et al. 2022). Graphical displays can provide real-time feedback on cobot actions, task progress, and system status, enhancing transparency and situational awareness [16] (Eimontaite et al. 2022). Augmented reality overlays digital information onto the physical workspace, offering intuitive guidance for tasks and aiding in error prevention [17] (Carriero et al. 2023). Vision systems, equipped with cameras and sensors, enable cobots to recognize and respond to human gestures, further fostering natural and fluid interaction [18] (Sauer et al. 2021).
Auditory communications: Human-cobot vocal communication has been a topic of intensive research in recent years [19] (Ionescu & Schlund, 2023). Auditory cues are valuable in environments where visual attention may be divided or compromised [20] (Turri et al. 2021). Sound alerts, spoken instructions, and auditory feedback mechanisms contribute to effective communication between human workers and cobots [21] (Su et al. 2023). For instance, audible signals can indicate the initiation or completion of a task, providing workers with real-time information without requiring constant visual focus [22] (Tran, 2020). Speech recognition technology enables cobots to understand verbal commands, fostering a more intuitive and dynamic interaction [23] (Telkes et al. 2024). Thoughtful use of auditory interfaces between humans and cobots helps create a collaborative environment where information is conveyed promptly, enhancing overall responsiveness and coordination [24] (Salehzadeh et al. 2022). Several recent papers have proposed novel interfaces and platforms to facilitate this type of interaction. Rusan and Mocanu (2022) [25] introduced a framework that detects and recognizes speech messages, converting them into spoken commands for operating system instructions. Carr, Wang, and Wang (2023) [26] proposed a network-independent verbal communication platform for multi-robot systems, which can function in environments lacking network infrastructure. McMillan et al. (2023) [27] highlighted the importance of conversation as a natural way of communication between humans and robots, promoting inclusivity in human-robot interaction. Lee et al. (2023) [28] conducted a user study to understand the impact of robot attributes on team dynamics and collaboration performance, finding that vocalizing robot intentions can decrease team performance and perceived safety. Ionescu & Schlund (2021) [29] found that voice-activated cobot programming is more efficient than typing and other programming techniques. These papers collectively contribute to the development of human-robot vocal communication systems and highlight the challenges and opportunities in this field.
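To make this concrete, the following minimal Python sketch shows how the text produced by a speech-recognition front end might be mapped to a cobot command. It is an illustration of the idea only, not a system from the cited works; all identifiers (COMMAND_KEYWORDS, CobotCommand, parse_command) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical action vocabulary; a real system would cover all supported tasks.
COMMAND_KEYWORDS = {"fetch": "FETCH", "return": "RETURN", "hold": "HOLD",
                    "drill": "DRILL", "inspect": "INSPECT"}

@dataclass
class CobotCommand:
    action: str             # e.g. "FETCH"
    target: Optional[str]   # named tool/part, e.g. "torque wrench"

def parse_command(transcript: str, known_objects: set) -> Optional[CobotCommand]:
    """Simple keyword spotting over an ASR transcript: find an action word
    and, if present, a known object name."""
    text = transcript.lower()
    action = next((code for kw, code in COMMAND_KEYWORDS.items() if kw in text), None)
    target = next((obj for obj in known_objects if obj in text), None)
    return CobotCommand(action, target) if action else None

# Example with a transcript as any speech-recognition engine might return it:
print(parse_command("please fetch the torque wrench", {"torque wrench", "m6 screws"}))
```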
Tactile communications: The incorporation of tactile feedback mechanisms enhances the haptic dimension of human-cobot collaboration [30] (Sorgini et al. 2020). Tactile interfaces, such as force sensors and haptic feedback devices, enable cobots to perceive and respond to variations in physical interactions [31] (Guda et al. 2022). Force sensors can detect unexpected resistance, triggering immediate cessation of movement to prevent collisions or accidents [32] (Zurlo et al. 2023). Haptic feedback devices provide physical sensations to human operators, conveying information about the cobot's state or impending actions [33] (Costes & Lécuyer, 2023). This tactile dimension contributes to a more nuanced and sophisticated collaboration, allowing for a greater degree of trust and coordination between human workers and cobots.
Recent reviews of collaborative assembly systems likewise emphasize the central role of human-cobot communication in task allocation and safety [34,35] (Keshvarparast et al. 2023; Liu et al. 2024).
Human-robot communication can involve various modes, including verbal communication using speech, multimodal interaction involving head movements, eye gaze, and pointing gestures, and nonverbal cues such as facial expressions and hand gestures [27,36] (McMillan et al. 2023; Schreiter et al. 2023). Multimodal interaction, which combines different modalities, leads to more natural fixation behavior, improved engagement, and faster reaction to instructions in collaborative tasks [37] (Rautiainen et al. 2022). In addition to verbal communication, nonverbal cues play a crucial role in human-robot interaction, allowing for a more natural and inviting experience [38] (Urakami & Seaborn, 2023). The concept of multimodal interaction in human-robot applications acknowledges the synergistic use of different interaction methods, enabling a richer interpretation of human-robot interactions [39] (Park et al. 2021). However, multimodal interaction is not only very costly, it also takes the human's attention off the task at hand [39,40] (Park et al. 2021; Nagrani et al. 2021).
3. Main Assistive Scenarios
Cobots can assist an industrial worker in many ways and in many types of tasks. Based on the literature, we define the main scenarios that cover a large part of these tasks (Javaid et al. 2022) [41], as follows:
Standard pick and place (known locations): there may be several different such tasks, and their names and locations should be clearly defined for both the human and the cobot.
Fetch distinct tool/part/material: visual search may be needed to identify the object and the corresponding grasping strategy.
Return tool/part/material: visual search may be needed to identify the placing location and the corresponding reach-and-align strategy.
Dispose of defective tool/part/material: identifying the correct disposal location is necessary.
Standard “Turn” object: identifying the object and pre-defining the meaning of “Turn” may be necessary.
Turn object a given number of degrees clockwise/counterclockwise: identifying the object is needed for planning the reach, grasp, and turn trajectory.
Move and align.
Drill: location must be pre-defined.
Screw: location must be pre-defined.
Solder/glue-point: location must be pre-defined.
Inspect: location must be pre-defined.
Push/Press: location must be pre-defined.
Pull/Detach: location must be pre-defined.
Hold (reach and grasp): identifying the object is needed for planning the reach-and-grasp trajectory.
These 14 scenarios could benefit from the strategies described in Section 4.
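As a rough illustration (the identifiers are ours, not from a specific cobot platform), these scenarios could be exposed to a vocal interface as a command vocabulary, each entry recording the precondition noted in the list above:

```python
from typing import Optional

SCENARIO_PRECONDITIONS = {
    "pick and place":  "locations known and named in advance",
    "fetch":           "visual search to identify the object and grasping strategy",
    "return":          "visual search to identify the placing location",
    "dispose":         "correct disposal location identified",
    "turn":            "object identified; meaning of 'turn' predefined",
    "turn by degrees": "object identified for the reach, grasp, and turn trajectory",
    "move and align":  None,  # no additional precondition listed in the text
    "drill":           "location pre-defined",
    "screw":           "location pre-defined",
    "solder/glue":     "location pre-defined",
    "inspect":         "location pre-defined",
    "push/press":      "location pre-defined",
    "pull/detach":     "location pre-defined",
    "hold":            "object identified for the reach-and-grasp trajectory",
}

def precondition_for(utterance: str) -> Optional[str]:
    """Return the precondition of the first scenario keyword found in an utterance."""
    text = utterance.lower()
    for name, precondition in SCENARIO_PRECONDITIONS.items():
        if name.split("/")[0].split(" ")[0] in text:
            return precondition
    return None

print(precondition_for("could you fetch the small torque wrench"))
```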
4. Main Strategies for Optimizing Vocal Communication
In this section, we identify strategies that enhance vocal cobot-related communication.
The relationship between these strategies and the 14 scenarios (from Section 3) is summarized in Table 1. Our proposed strategies are listed as follows:
Workstation map: generate a map of the workstation with location-identifying labels for the various important points that the cobot arm may need to reach. Store the coordinates of each point in the cobot’s control system and hang the map in front of the worker (see the sketch after this list).
Dedicated space for placing tools and parts (for both the worker and the cobot): dedicate a convenient place, close to the worker, where the cobot places or takes the tools, parts, or materials that the worker asked for. This place can hold several tools or parts, arranged from left to right and top to bottom. Store its coordinates in the cobot’s control system; the worker must be informed about this place and understand the trajectory the cobot is expected to follow to reach it.
Dedicated storage place for each tool/part: dedicate a unique storage place for each tool and for each part supply that is easy to reach for both the human worker and the cobot.
Define names for mutual use: make sure both the cobot and the workers use the same name for each tool and each part.
Predefined trajectories: define the cobot’s trajectories from the tool/part areas to the placement area, so the worker knows what movements to expect from the cobot.
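A minimal sketch of how strategies 1–4 might be stored in the cobot’s control software follows; the coordinates, point labels, and object names are illustrative placeholders, not values from a real workstation.

```python
# Strategy 1: workstation map of labeled points (coordinates in meters, illustrative).
WORKSTATION_MAP = {
    "placing_space":  (0.30, -0.20, 0.05),  # strategy 2: shared hand-over area near the worker
    "wrench_storage": (0.55, 0.10, 0.02),   # strategy 3: dedicated storage place per tool
    "screw_bin":      (0.55, 0.25, 0.02),
}

# Strategy 4: names used identically by the worker and the cobot.
OBJECT_NAMES = {
    "torque wrench": "wrench_storage",
    "m6 screws":     "screw_bin",
}

def resolve_fetch(spoken_object: str):
    """Translate a spoken object name into pick-up and drop-off coordinates."""
    storage_label = OBJECT_NAMES.get(spoken_object.lower())
    if storage_label is None:
        raise ValueError(f"Unknown object name: {spoken_object!r}")
    return WORKSTATION_MAP[storage_label], WORKSTATION_MAP["placing_space"]

pick, place = resolve_fetch("torque wrench")
print(f"pick at {pick}, place at {place}")
```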
These strategies generate a common language for the cobot and the worker, related to locations and objects. They create both clarity and conciseness. Moreover, the trajectory strategy builds the worker's understanding and anticipation of the cobot's moves.
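The fifth strategy could be realized, for example, as a table of named waypoint sequences; in the sketch below (the waypoints and helper names are assumptions), the cobot also announces the path name before moving, supporting the worker's anticipation of its motion.

```python
# Predefined, named trajectories from tool/part storage to the placing space.
PREDEFINED_TRAJECTORIES = {
    "wrench_storage->placing_space": [
        (0.55, 0.10, 0.20),   # lift above the storage place
        (0.40, -0.05, 0.20),  # traverse at a safe height, away from the worker
        (0.30, -0.20, 0.10),  # descend over the placing space
    ],
}

def execute_fetch(from_label: str, announce, move_to):
    """Announce and then follow the predefined path; `announce` and `move_to`
    stand in for the cobot's speech output and motion commands."""
    key = f"{from_label}->placing_space"
    announce(f"Moving along path {key}")
    for waypoint in PREDEFINED_TRAJECTORIES[key]:
        move_to(waypoint)

# Example with stand-in functions:
execute_fetch("wrench_storage", announce=print, move_to=lambda wp: print("  waypoint", wp))
```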
Note that these five strategies appear in Table 1 in abbreviated form as column titles. Each abbreviated column title is explained in the legend under Table 1.
5. Discussion
The results presented in Table 1 underscore the significance of clear communication channels between human workers and cobots in collaborative work environments. Defining names for places, objects, actions, and trajectories emerges as a pivotal factor for efficient bidirectional communication. The cobot's ability to convey predetermined path trajectories to human workers, elucidating where and how movements are expected to unfold, proves crucial and applicable across all cobot activities. The strategy of constructing a comprehensive workstation map, delineating places and objects for both human and cobot, is identified as another crucial element for successful communication.
Furthermore, the naming strategy generates a common language, fostering a shared understanding of the work environment between human and cobot. This shared understanding is pivotal for the optimization of cobot-assisted tasks. Likewise, the predefined trajectories strategy not only aids in clarity but also facilitates worker anticipation of cobot movements, thereby enhancing overall collaboration efficiency.
The strategies outlined in Table 1 collectively contribute to a structured and efficient communication framework between human workers and cobots. They establish clarity and conciseness, mitigating potential misunderstandings and streamlining the collaborative process. Moreover, these strategies create a sense of order and predictability in the shared workspace, further enhancing the safety and productivity of the collaborative work environment.
While computerized retrieval of the names of places and objects, maps, and trajectories is an instantaneous and reliable action, the human worker has to rely on his or her memory. This means that worker training must precede work with a cobot. The importance of comprehensive training programs for human workers is evident, ensuring a smooth transition to working alongside cobots. As Table 1 emphasizes, a well-defined communication protocol is crucial, and worker training is integral to this process. Comprehensive training programs should cover not only technical aspects but also the nuances of communication and collaboration in a human-cobot setting.
Augmented Reality (AR) applications could be a preferred alternative to workstation maps (or to parts of such maps) and emerge as promising tools for communication and collaboration. AR could also be used to simulate collaborative scenarios and prepare workers for real-world interactions with cobots. AR technology provides immersive training experiences that can enhance worker preparedness and familiarize them with cobot-assisted tasks in a controlled, virtual environment.
Safety measures are of paramount importance in human-cobot collaboration. Addressing concerns such as collision avoidance, force sensing, and emergency stop mechanisms is crucial to prevent accidents and ensure a secure working environment. Implementing safety standards and regulations is essential for creating guidelines that govern human-cobot interaction, promoting a safe and productive collaborative workspace.
In conclusion, this discussion highlights the critical role of effective communication and collaboration in human-cobot interactions. The identified strategies provide practical insights for optimizing vocal communication, ultimately enhancing productivity and safety in cobot-assisted tasks. Continuous research and development efforts are vital to staying abreast of evolving technologies and refining communication strategies, paving the way for successful integration of cobots into assembly processes. This manuscript contributes to this dynamic field by providing a comprehensive exploration of strategies and insights, serving as a guide for future research and industry practices in the realm of collaborative robotics.
6. Conclusions
In summary, this study illuminates the pivotal role of vocal communication in augmenting productivity and safety in cobot-assisted tasks. The identified strategies, emphasizing clarity through defined names and trajectories, establish a shared language that enhances task efficiency and fosters a secure working environment. Acknowledging the significance of comprehensive worker training and the potential of Augmented Reality (AR) applications, this research underscores the need for a holistic approach to human-cobot collaboration.
This manuscript contributes to the evolving field of human-robot collaboration by providing a comprehensive exploration of strategies to enhance productivity and safety in assembly environments. By synthesizing theoretical frameworks, practical recommendations, and empirical evidence, this work aims to guide future research and industry practices in the dynamic landscape of collaborative robotics.
Future research should delve into refining communication protocols, exploring advanced technologies for immersive worker training, and addressing evolving safety concerns. Investigating the impact of these strategies in diverse industry settings and scaling up their applicability will contribute to the continual evolution of collaborative robotics. As technology advances, understanding the dynamics of human-robot interactions remains crucial for unlocking the full potential of cobots in Industry 5.0. This study serves as a foundation, urging researchers and practitioners to further explore and implement innovative solutions for a seamless integration of cobots into modern manufacturing.
References
- Faccio, M.; Cohen, Y. Intelligent cobot systems: human-cobot collaboration in manufacturing. Journal of Intelligent Manufacturing 2023, 1–3. [Google Scholar] [CrossRef]
- Faccio, M.; Granata, I.; Menini, A.; Milanese, M.; Rossato, C.; Bottin, M.; Rosati, G. Human factors in cobot era: A review of modern production systems features. Journal of Intelligent Manufacturing 2023, 34, 85–106. [Google Scholar] [CrossRef]
- Liu, L.; Schoen, A.J.; Henrichs, C.; Li, J.; Mutlu, B.; Zhang, Y.; Radwin, R.G. Human robot collaboration for enhancing work activities. Human Factors 2024, 66, 158–179. [Google Scholar] [CrossRef] [PubMed]
- Papetti, A.; Ciccarelli, M.; Scoccia, C.; Palmieri, G.; Germani, M. A human-oriented design process for collaborative robotics. International Journal of Computer Integrated Manufacturing 2023, 36, 1760–1782. [Google Scholar] [CrossRef]
- Gross, S.; Krenn, B. A Communicative Perspective on Human–Robot Collaboration in Industry: Mapping Communicative Modes on Collaborative Scenarios. International Journal of Social Robotics 2023, 1–18. [Google Scholar] [CrossRef]
- Moore, B.A.; Urakami, J. The impact of the physical and social embodiment of voice user interfaces on user distraction. International Journal of Human-Computer Studies 2022, 161, 102784. [Google Scholar] [CrossRef]
- Marklin, R.W., Jr.; Toll, A.M.; Bauman, E.H.; Simmins, J.J.; LaDisa, J.F., Jr.; Cooper, R. Do Head-Mounted Augmented Reality Devices Affect Muscle Activity and Eye Strain of Utility Workers Who Do Procedural Work? Studies of Operators and Manhole Workers. Human factors 2022, 64, 305–323. [Google Scholar] [PubMed]
- Heydaryan, S.; Suaza Bedolla, J.; Belingardi, G. Safety design and development of a human-robot collaboration assembly process in the automotive industry. Applied Sciences 2018, 8, 344. [Google Scholar] [CrossRef]
- Petzoldt, C.; Harms, M.; Freitag, M. Review of task allocation for human-robot collaboration in assembly. International Journal of Computer Integrated Manufacturing 2023, 1–41. [Google Scholar] [CrossRef]
- Schmidbauer, C.; Zafari, S.; Hader, B.; Schlund, S. An Empirical Study on Workers' Preferences in Human–Robot Task Assignment in Industrial Assembly Systems. IEEE Transactions on Human-Machine Systems 2023, 53, 293–302. [Google Scholar] [CrossRef]
- Jacomini Prioli, J.P.; Liu, S.; Shen, Y.; Huynh, V.T.; Rickli, J.L.; Yang, H.J.; Kim, K.Y. Empirical study for human engagement in collaborative robot programming. Journal of Integrated Design and Process Science 2023, 1–23. [Google Scholar] [CrossRef]
- Scalise, R.; Rosenthal, S.; Srinivasa, S. Natural language explanations in human-collaborative systems. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction; 2017; pp. 377–378. [Google Scholar]
- Kontogiorgos, D. Utilising Explanations to Mitigate Robot Conversational Failures. arXiv 2023, arXiv:2307.04462. [Google Scholar]
- Zieliński, K.; Walas, K.; Heredia, J.; Kjærgaard, M.B. A Study of Cobot Practitioners Needs for Augmented Reality Interfaces in the Context of Current Technologies. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN); 2021; pp. 292–298. [Google Scholar]
- Pascher, M.; Kronhardt, K.; Franzen, T.; Gruenefeld, U.; Schneegass, S.; Gerken, J. My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. Sensors 2022, 22, 755. [Google Scholar] [CrossRef] [PubMed]
- Eimontaite, I.; Cameron, D.; Rolph, J.; Mokaram, S.; Aitken, J.M.; Gwilt, I.; Law, J. Dynamic graphical instructions result in improved attitudes and decreased task completion time in human–robot co-working: an experimental manufacturing study. Sustainability 2022, 14, 3289. [Google Scholar] [CrossRef]
- Carriero, G.; Calzone, N.; Sileo, M.; Pierri, F.; Caccavale, F.; Mozzillo, R. Human-Robot Collaboration: An Augmented Reality Toolkit for Bi-Directional Interaction. Applied Sciences 2023, 13, 11295. [Google Scholar] [CrossRef]
- Sauer, V.; Sauer, A.; Mertens, A. Zoomorphic gestures for communicating cobot states. IEEE Robotics and Automation Letters 2021, 6, 2179–2185. [Google Scholar] [CrossRef]
- Ionescu, T.B.; Schlund, S. Programming cobots by voice: a pragmatic, web-based approach. International Journal of Computer Integrated Manufacturing 2023, 36, 86–109. [Google Scholar] [CrossRef]
- Turri, S.; Rizvi, M.; Rabini, G.; Melonio, A.; Gennari, R.; Pavani, F. Orienting auditory attention through vision: the impact of monaural listening. Multisensory Research 2021, 35, 1–28. [Google Scholar] [CrossRef] [PubMed]
- Su, H.; Qi, W.; Chen, J.; Yang, C.; Sandoval, J.; Laribi, M.A. Recent advancements in multimodal human–robot interaction. Frontiers in Neurorobotics 2023, 17, 1084000. [Google Scholar] [CrossRef]
- Tran, N. Exploring mixed reality robot communication under different types of mental workload. Mines Theses & Dissertations, 2020.
- Telkes, P.; Angleraud, A.; Pieters, R. Instructing Hierarchical Tasks to Robots by Verbal Commands. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII); 2024; pp. 1139–1145. [Google Scholar]
- Salehzadeh, R.; Gong, J.; Jalili, N. Purposeful Communication in Human–Robot Collaboration: A Review of Modern Approaches in Manufacturing. IEEE Access 2022, 10, 129344–129361. [Google Scholar] [CrossRef]
- Rusan, H.A.; Mocanu, B. Human-Computer Interaction Through Voice Commands Recognition. In Proceedings of the 2022 International Symposium on Electronics and Telecommunications (ISETC); 2022; pp. 1–4. [Google Scholar]
- Carr, C.; Wang, P.; Wang, S. A Human-friendly Verbal Communication Platform for Multi-Robot Systems: Design and Principles. In UK Workshop on Computational Intelligence; Springer Nature: Cham, Switzerland, 2023; pp. 580–594. [Google Scholar]
- McMillan, D.; Jaber, R.; Cowan, B.R.; Fischer, J.E.; Irfan, B.; Cumbal, R.; Lee, M. Human-Robot Conversational Interaction (HRCI). In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction; 2023; pp. 923–925. [Google Scholar]
- Lee, K.M.; Krishna, A.; Zaidi, Z.; Paleja, R.; Chen, L.; Hedlund-Botti, E.; Gombolay, M. The effect of robot skill level and communication in rapid, proximate human-robot collaboration. In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction; 2023; pp. 261–270. [Google Scholar]
- Ionescu, T.B.; Schlund, S. Programming cobots by voice: A human-centered, web-based approach. Procedia CIRP 2021, 97, 123–129. [Google Scholar] [CrossRef]
- Sorgini, F.; Farulla, G.A.; Lukic, N.; Danilov, I.; Roveda, L.; Milivojevic, M.; Bojovic, B. Tactile sensing with gesture-controlled collaborative robot. In Proceedings of the 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT; 2020; pp. 364–368. [Google Scholar]
- Guda, V.; Mugisha, S.; Chevallereau, C.; Zoppi, M.; Molfino, R.; Chablat, D. Motion strategies for a cobot in a context of intermittent haptic interface. Journal of Mechanisms and Robotics 2022, 14, 041012. [Google Scholar] [CrossRef]
- Zurlo, D.; Heitmann, T.; Morlock, M.; De Luca, A. Collision Detection and Contact Point Estimation Using Virtual Joint Torque Sensing Applied to a Cobot. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA); 2023; pp. 7533–7539. [Google Scholar]
- Costes, A.; Lécuyer, A. Inducing Self-Motion Sensations with Haptic Feedback: State-of-the-Art and Perspectives on “Haptic Motion”. IEEE Transactions on Haptics, 2023. [Google Scholar]
- Keshvarparast, A.; Battini, D.; Battaia, O.; Pirayesh, A. Collaborative robots in manufacturing and assembly systems: literature review and future research agenda. Journal of Intelligent Manufacturing 2023, 1–54. [Google Scholar] [CrossRef]
- Liu, L.; Guo, F.; Zou, Z.; Duffy, V.G. Application, development and future opportunities of collaborative robots (cobots) in manufacturing: A literature review. International Journal of Human–Computer Interaction 2024, 40, 915–932. [Google Scholar] [CrossRef]
- Schreiter, T.; Morillo-Mendez, L.; Chadalavada, R.T.; Rudenko, A.; Billing, E.; Magnusson, M.; Lilienthal, A.J. Advantages of Multimodal versus Verbal-Only Robot-to-Human Communication with an Anthropomorphic Robotic Mock Driver. In Proceedings of the 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN); 2023; pp. 293–300. [Google Scholar]
- Rautiainen, S.; Pantano, M.; Traganos, K.; Ahmadi, S.; Saenz, J.; Mohammed, W.M.; Martinez Lastra, J.L. Multimodal interface for human–robot collaboration. Machines 2022, 10, 957. [Google Scholar] [CrossRef]
- Urakami, J.; Seaborn, K. Nonverbal Cues in Human–Robot Interaction: A Communication Studies Perspective. ACM Transactions on Human-Robot Interaction 2023, 12, 1–21. [Google Scholar] [CrossRef]
- Park, K.B.; Choi, S.H.; Lee, J.Y.; Ghasemi, Y.; Mohammed, M.; Jeong, H. Hands-free human–robot interaction using multimodal gestures and deep learning in wearable mixed reality. IEEE Access 2021, 9, 55448–55464. [Google Scholar] [CrossRef]
- Nagrani, A.; Yang, S.; Arnab, A.; Jansen, A.; Schmid, C.; Sun, C. Attention bottlenecks for multimodal fusion. Advances in neural information processing systems 2021, 34, 14200–14213. [Google Scholar]
- Javaid, M.; Haleem, A.; Singh, R.P.; Rab, S.; Suman, R. Significant applications of Cobots in the field of manufacturing. Cognitive Robotics 2022, 2, 222–233. [Google Scholar] [CrossRef]
Table 1. Related strategies for each of the main scenarios.

| Main Scenarios | 1 Map | 2 Placing Space | 3 Tool/part Storage | 4 Defined Names | 5 Path |
|---|---|---|---|---|---|
| Pick & place | V | | | V | V |
| Fetch | V | V | V | V | V |
| Return | V | V | V | V | V |
| Dispose | V | V | | V | V |
| Turn predefined | | | | V | V |
| Turn degree | | | | V | V |
| Align | V | | | V | V |
| Drill | V | | | V | V |
| Screw | V | | V | V | V |
| Solder | V | | | V | V |
| Inspect | | V | | V | V |
| Push/press | V | | | V | V |
| Pull/detach | V | | | V | V |
| Hold | | | | V | V |

Legend: 1 Map = workstation map; 2 Placing Space = dedicated space for placing tools and parts; 3 Tool/part Storage = dedicated storage place for each tool/part; 4 Defined Names = names defined for mutual use; 5 Path = predefined trajectories.