1. Introduction
Vision systems are among the critical elements of modern automation and robotization processes. They have gained wide recognition owing to their capabilities in collecting and analyzing process data. Collecting and exchanging data between devices is a basic assumption of the Industry 4.0 concept, together with the integration of various systems based on recent achievements in many areas of science and technology, such as robotics, vision systems, machine learning, deep learning, and data processing [1,2]. Vision systems, robots/cobots, and autonomous mobile robots constitute essential components of the Industry 4.0 and smart manufacturing paradigm and point toward its future [3,4,5].
In general, a vision system consists of the following components: an illumination system, lenses, cameras, optionally a video signal acquisition card (frame grabber), a computer with appropriate software, communication systems for peripheral devices, and additional sensors.
Vision technology (2D) available on the market can be categorized based on the type of devices utilized and the specific requirements [6]:
Vision sensors are characterized by their ease of use and flexibility, serving as an integrated image processing system combined with optical components and lighting. Additionally, they are equipped with various communication interfaces enabling data exchange with external devices. They are primarily used for low-complexity image analysis tasks.
Smart cameras integrate CPU and image sensor functions with their own operating systems, enabling onboard image processing. They are primarily used for object localization, character recognition, and reading one- and two-dimensional barcodes. Smart cameras are often classified as a type of vision sensor with a limited, well-defined range of capabilities. They are also more flexible and equipped with software tools offering numerous processing and image analysis algorithms. Their processing units can be based on Intel or Motorola microprocessors, DSP signal processors, FPGA devices, or combinations thereof. They are also provided with serial interfaces and digital I/O cards.
Embedded vision systems are equipped with specialized libraries for real-time image analysis and are designed and implemented primarily using FPGA and DSP devices. These systems offer high computation speed, ease of use, and low cost. They typically include libraries of basic algorithms dedicated to real-time image processing and video data manipulation. Although their application-specific nature limits flexibility, embedded vision systems are increasingly used in autonomous vehicles, drones, smart traffic devices, and IoT devices. They allow autonomous decision-making based on vision data by integrating image processing and recognition algorithms within these devices. This is made possible by combining embedded cameras (e.g., with MIPI or GMSL interfaces) with high-end processors such as NVIDIA Jetson, NXP i.MX8, Raspberry Pi 4, Google Coral, Xilinx, etc.
PC-based vision systems offer an independent selection of optimal elements for the implemented process, providing greater flexibility and the ability to utilize multiple cameras and complex image analysis algorithms. They distinguish themselves from smart cameras and vision sensors by allowing the use of more than one camera and providing higher computational power for sophisticated vision algorithms (especially when equipped with a high-performance industrial computer).
PC-based vision systems are suitable for a wide range of applications, including those requiring complex tasks such as system calibration, optical character recognition, code reading, counting and measuring, gauging and metrology, and object recognition based on deep learning.
In a broader sense, vision systems belong to the interdisciplinary field of science and technology because they encompass optics, photography, electronics, mathematics, computer technology, computer science, and artificial intelligence. In the manufacturing and automation process, they are widely tailored to various robot tasks/applications, allowing the robot to analyze, make decisions, and interact with the environment effectively.
Designing a vision system within a robotic station is related to several important areas of knowledge in image processing, such as camera calibration, camera calibration with a robot, image preprocessing, image segmentation, analysis and recognition, shape detection, and vision-based measurements.
The main purpose of the camera calibration process is to determine the internal and external parameters of the camera [8,9,10,11,12]. Calibration is needed mainly to remove distortions introduced by the camera's optical path, to enable measurements in SI units, and to obtain the camera model and its position with respect to a given coordinate system [8,9,10,11,13]. System calibration is the most crucial step in a vision-based measurement system and is essential when metric data are required [10,14]. Calibration of a camera with a robot [11,12,13,15] can be classified into eye-to-hand (camera placed in the station) and eye-in-hand (camera mounted on the robot) configurations. As a result of the calibration procedure, a transformation is obtained that maps the camera's image space to the coordinate system associated with the robot [11,12,13,15].
Image segmentation aims to divide the image into subareas with similar features. These subareas enable, at a later stage, the extraction and isolation of various structures and objects in the image [16,17,18].
Recognition of image objects is a field dominated in recent years by deep learning algorithms; initially, however, three basic strategies were commonly used: SIFT, SURF, and BRIEF. In [19,20], the authors compared the basic feature detection and matching algorithms available in the OpenCV libraries, in the first case as part of experimental research and in the second as one of the tasks performed by a robotic station.
Vision measurements enable the determination of objects' geometric features [14,21,22,23,24]. In the field of image processing, many works show the practical use of the discussed issues. An example of such applications in identifying image features is presented in [25], where an original algorithm based on ORB and intended for image processing on FPGA systems is discussed. Another work [26] focuses on assessing the use of SIFT and SURF feature detectors for analyzing underwater images. The work [27] addresses the use of Fast R-CNN convolutional networks to develop a waste identification system in a garbage sorting plant; the system is complemented by a robotic line that performs the sorting process. Another area of application of vision systems in robotics employing image processing methods is the automated identification of the positions of workcell components with ArUco markers and the OpenCV library [28]. A further group of works covering the use of vision systems in robotics comprises applications built on the ROS (Robot Operating System) environment engaging OpenCV libraries and the C++ or Python languages [29,30,31,32].
Combining vision techniques with a robot means equipping the robot with a complex sensing mechanism that allows intelligent reactions to events occurring in the machine's surroundings. The employment of vision systems and other sensors is driven by a constant need to increase flexibility, improve production quality and the efficiency of production processes, and widen the application range of robotics. According to a market report [60], the most frequent applications of robotic vision systems, besides measurement, inspection, and testing, encompass material handling; welding and soldering; assembly and disassembly; packaging and palletizing; painting; and cutting, pressing, grinding, and deburring. MRFR forecasts indicate that the global robotic vision market will attain USD 9 billion, growing at a CAGR of 12% over the period 2020-2027 [60].
Robot vision is a process of extracting, identifying, and interpreting information obtained from a 3D scene. Two- and three-dimensional robot vision systems belong to the standard non-contact measurement systems for object localization and identification in the robot workspace. They provide accurate information about parts' position, orientation, and location changes [32,33,34] as well as the robot's location [35,36]. The main merit of robot vision systems is their ability to intelligently locate and recognize parts in 3D space by means of one or more cameras. Cameras can be installed either stationary (permanently mounted above the robot workspace) or mobile, installed on the robot arm. Thus, the calibration of the vision system with the robot becomes a key element [10,11,12,13,15]. Two types of robot vision systems can therefore be distinguished: those dedicated to a specific robot type (e.g., Omron/ACE Sight [37], Fanuc/iRVision [38]) and general-purpose systems (e.g., Cognex, Keyence, Omron, Matrox, National Instruments, Stemmer-Imaging, Sick, etc.). In the case of dedicated systems embedded in the robot controller, calibration takes place in one operating system and programming environment reserved for a given robot manufacturer (e.g., ACE Sight [37], iRVision [38]). For general-purpose systems, the calibration procedure is performed by the software modules of the vision system in the robot's programming environment through dedicated software equipped with user-friendly interfaces, installed either on the robot controller or on an external computer. For instance, Cognex vision URCap packages are software extensions for Universal Robots that integrate into PolyScope, the graphical programming interface of Universal Robots; URCap packages aim to extend any Universal Robot seamlessly with customized functionality [39].
Vision systems allow mobile robots to perform complex tasks with a high level of autonomy, reliability, and interaction with the operator [40,41]. They enable dynamic environment exploration, navigation/guidance [42], map building [43], and vision control of various tracking systems [44].
The employment of vision systems enables the detection of product flaws at an early production stage, while identification of their cause allows defective products to be eliminated on the spot. In industrial automation, vision systems are most commonly used in the verification and quality control of various products, for instance, in the measurement, shape control, and sorting of food products [45,46], control of the surface flatness of welded aluminum bodies [46], quality inspection of slate slabs involving detection of surface defects [48], quality control of clutch friction discs by flaw detection [49], quality control of bearings by inspection of bearing surfaces and defect detection [50], inspection of dimensions and material defects on wood surfaces [51], a vision system for bolt measurement and inspection employing a CNN [52], surface quality inspection of mobile phone back glass based on a deep learning framework [53], an automatic method of positioning a vision system for quality control of washing machine parts on a production line [54], yarn quality control identifying the yarn defect called a nep as an example of an application in the wider yarn spinning industry [55], and hole inspection in industrial robot satellite assembly systems [56]. A comprehensive review of vision system applications for product quality control is presented in [57,58].
As mentioned above, in industrial robotic stations, vision systems are mainly used to determine the position and orientation of objects in the robot's working space. The hardware and software architecture proposed in this paper, together with the developed algorithms, extends existing quality control applications with the analysis of moving objects, shape-based detection, and the measurement of fundamental geometric quantities in the working space of a UR5 cobot equipped with a PC-based vision system consisting of two different and independent cameras, implemented in Python using the OpenCV libraries.
The paper presents the developed vision system and the work algorithm for a robotic quality control station based on the UR5 CB2 collaborative robot from Universal Robots, integrated with the SAVIO CAK-01 web camera and the Mako G-125B stationary industrial camera. The algorithms for the software part of the vision system were based on the OpenCV library and, together with the control algorithm and the communication module between the cobot, the SCARA robot, and the autonomous mobile robot, were implemented in Python and the PolyScope environment. The level of measurement noise, accuracy, and repeatability of the developed vision system were estimated and analyzed. The final part of the article briefly discusses the methodologies for integrating the developed application with a robotic station and a robotic line. The proposed hardware and software architecture of the robotic station, consisting of the UR5 cobot, the SCARA i4-550L industrial robot, and the Omron LD90 autonomous mobile robot, is presented, and the robotic line is briefly discussed.
2. Methodology - Image Processing Algorithms
The first step before image analysis is the camera calibration process, which removes image distortions introduced by the optical path. For this purpose, the cv.calibrateCamera() function available in the OpenCV libraries [7,61,62] was used. For calibration, a chessboard-type calibration board with the following dimensions was used: 270×240 mm for the USB camera and 150×150 mm for the Mako camera. Below (Figures 1 and 2), the determined values of the reprojection errors for individual calibration photos are presented. On this basis, photos that did not meet the assumed reprojection error threshold were rejected, and the matrices of internal and external parameters were recalculated. The determined camera matrices and distortion vector values for both cameras are presented below (Tables 1 and 2).
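For reference, a minimal sketch of this calibration step is given below, assuming the chessboard photos are already stored on disk and that the board's inner-corner count, square size, folder path, and rejection threshold take the illustrative values shown (they are not taken from the station configuration):

```python
import glob
import cv2 as cv
import numpy as np

# Assumed board geometry (inner corners), square size and error threshold -- illustrative values.
PATTERN = (9, 6)
SQUARE_MM = 10.0
ERROR_THRESHOLD = 0.5  # px

# Object points of an ideal, flat chessboard expressed in millimetres.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, used = [], [], []
for path in sorted(glob.glob("calib/*.png")):          # hypothetical image folder
    gray = cv.imread(path, cv.IMREAD_GRAYSCALE)
    found, corners = cv.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    obj_points.append(objp)
    img_points.append(corners)
    used.append(path)

rms, K, dist, rvecs, tvecs = cv.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Per-image reprojection error, used to reject poor calibration photos.
for i, path in enumerate(used):
    proj, _ = cv.projectPoints(obj_points[i], rvecs[i], tvecs[i], K, dist)
    err = cv.norm(img_points[i], proj, cv.NORM_L2) / len(proj)
    if err > ERROR_THRESHOLD:
        print(f"reject {path}: reprojection error {err:.3f} px")
```

After rejecting the flagged photos, cv.calibrateCamera() is simply run again on the remaining views to obtain the final camera matrix and distortion vector.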
Table 1.
Mako camera – values of the estimated camera matrices and distortion vector (radial and tangential).
Distortion Coefficients = [-0.27496, 4.08649, -0.00045, -0.00277, -0.00043]  (1)
Camera Matrix =  (2)
Table 2.
Savio camera – values of estimated camera matrices and distortion vector (radial and tangential).
Distortion Coefficients = [-0.37186, 0.13426, -0.00247, -0.0007, -0.02940]  (3)
Camera Matrix =  (4)
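The estimated distortion model can then be applied to incoming frames, for example with cv.undistort(). In the sketch below, the distortion vector uses the Mako values from Table 1, while the camera matrix entries are placeholders, since the numerical matrices themselves are not reproduced in the text above:

```python
import cv2 as cv
import numpy as np

# Radial/tangential distortion coefficients of the Mako camera (Table 1).
dist = np.array([-0.27496, 4.08649, -0.00045, -0.00277, -0.00043])

# Placeholder intrinsics (fx, fy, cx, cy) -- purely illustrative values,
# not the matrices actually estimated for the station's cameras.
K = np.array([[2000.0,    0.0, 646.0],
              [   0.0, 2000.0, 482.0],
              [   0.0,    0.0,   1.0]])

frame = cv.imread("mako_frame.png")            # hypothetical captured frame
h, w = frame.shape[:2]
new_K, roi = cv.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
undistorted = cv.undistort(frame, K, dist, None, new_K)
```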
Figure 1.
Graph of calculated reprojection errors for the USB camera.
Figure 2.
Graph of calculated reprojection errors for the Mako camera.
While the station is operating, two vision sequences (algorithms) are performed. The first involves determining the position and orientation of details placed in the cobot's picking zone. The second deals with the detection of details and basic measurements of quantities such as width, height, diameter (in the case of circular objects), and surface area, taking into account the internal contours of the details. For the developed algorithms, a robot-camera calibration process was necessary. As a result of the calibration procedure, a homographic transformation is obtained, mapping the camera's image plane to the coordinate system associated with the robot. A schematic diagram of the calibration process is shown below (Figure 3).
Figure 3.
Sequence diagram for determining the homography matrix.
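A possible realization of this mapping, under the assumption that the pixel coordinates of a few calibration targets and the corresponding robot TCP positions (taught by jogging the cobot to each target) are already available, is sketched below; the point values are invented for illustration:

```python
import cv2 as cv
import numpy as np

# Pixel coordinates of calibration markers detected in the image (illustrative values).
img_pts = np.array([[210, 145], [1035, 152], [1040, 790], [205, 785]], dtype=np.float32)
# Corresponding XY positions of the robot TCP at the same markers, in mm (illustrative values).
rob_pts = np.array([[-310.0, 120.0], [-310.0, 420.0], [-90.0, 420.0], [-90.0, 120.0]],
                   dtype=np.float32)

H, _ = cv.findHomography(img_pts, rob_pts, method=cv.RANSAC)

def image_to_robot(u, v, H):
    """Map an image point (u, v) to robot base XY coordinates via the homography."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

x_mm, y_mm = image_to_robot(640.0, 480.0, H)
```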
For both vision sequences, the image preprocessing stage is similar (conversion to grayscale, thresholding, detection of contours in the binary image). The diagram below (Figure 4) shows the subsequent steps of the algorithm implementing the determination of position and orientation and the measurement of fundamental geometric quantities. Vision measurements are carried out using sub-pixel methods. Conversion of the obtained values to SI units (mm) is the final step of a given sequence.
Figure 4.
Developed algorithm implementing vision measurements based on the OpenCV libraries [7,61,62].
(The first vision algorithm) Vision sequence responsible for determining position and orientation:
➢ Step 1 – image preprocessing before contour detection: conversion to grayscale in the case of a color image, smoothing with a Gaussian filter, thresholding, and bit negation,
➢ Step 2 – detection of external contours in the binary image using the cv.findContours() function (topological structural analysis is applied to detect contours),
➢ Step 3 – elimination of contours that do not meet the adopted parameters of the extracted image features,
➢ Step 4 – comparison of the extracted contours with a defined pattern using Hu moments (cv.matchShapes()),
➢ Step 5 – determination of the centroid and orientation angle based on first- and second-order moments.
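The five steps above can be condensed into the following sketch; the threshold value, minimum contour area, and shape-match tolerance are assumptions, and the reference contour is supplied as a parameter rather than taken from the station's actual template:

```python
import cv2 as cv
import numpy as np

def find_part_pose(frame, template_contour,
                   thresh=120, min_area=500.0, match_tol=0.15):
    # Step 1: preprocessing -- grayscale, Gaussian smoothing, thresholding with negation.
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    blur = cv.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv.threshold(blur, thresh, 255, cv.THRESH_BINARY_INV)

    # Step 2: external contours only.
    contours, _ = cv.findContours(binary, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

    for c in contours:
        # Step 3: reject contours that do not meet the adopted parameters.
        if cv.contourArea(c) < min_area:
            continue
        # Step 4: compare with the defined pattern using Hu moments.
        if cv.matchShapes(c, template_contour, cv.CONTOURS_MATCH_I1, 0.0) > match_tol:
            continue
        # Step 5: centroid and orientation from first- and second-order moments.
        m = cv.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
        return (cx, cy), np.degrees(angle)
    return None
```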
An example of the operation of the described vision sequence is shown below (Figure 5).
Figure 5.
Result of the vision sequence determining the position and orientation of the detail, with the center point and the X (red) and Y (green) axes marked.
(The second vision algorithm) Vision sequence responsible for measuring the basic geometric features of details:
➢ Step 1 – image preprocessing (conversion to grayscale, Gaussian smoothing, thresholding, bit negation) and detection of external contours using the cv.findContours() function,
➢ Step 2 – detection of internal contours within the group of areas separated from the image in the previous operation (Figure 6),
Figure 6.
The result of the measuring tool, with internal contours (holes) and a rectangular area covering the external contour of the object (representing the overall dimensions: width, height).
➢ Step 3 – determination of the basic values of the detected geometric features,
➢ Step 4 – conversion of the obtained sizes from pixel units into SI units (mm).
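A compact sketch of this measurement sequence is shown below; the scale factor mm_per_px (obtained from the calibration), the threshold, and the minimum hole area are assumptions made for illustration:

```python
import cv2 as cv

def measure_part(frame, mm_per_px, thresh=120, min_hole_area=50.0):
    # Step 1: preprocessing and contour detection with hierarchy, so that
    # internal contours (holes) can be separated from the external outline.
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    blur = cv.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv.threshold(blur, thresh, 255, cv.THRESH_BINARY_INV)
    contours, hierarchy = cv.findContours(binary, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)

    results = {"holes_mm": []}
    for i, c in enumerate(contours):
        if hierarchy[0][i][3] == -1:
            # External contour: bounding box gives width/height, area in px^2.
            x, y, w, h = cv.boundingRect(c)
            results["width_mm"] = w * mm_per_px            # Step 4: px -> mm
            results["height_mm"] = h * mm_per_px
            results["area_mm2"] = cv.contourArea(c) * mm_per_px ** 2
        else:
            # Step 2/3: internal contours (holes), diameter from a minimum enclosing circle.
            if cv.contourArea(c) < min_hole_area:
                continue
            (_, _), r = cv.minEnclosingCircle(c)
            results["holes_mm"].append(2.0 * r * mm_per_px)
    return results
```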
3. System Architecture
The designed and constructed robotic stand is equipped with:
Cobot UR5 CB2 with the piCOBOT vacuum ejector from Piab,
Mako G-125B camera (CCD, 1292×964) with a 16 mm Computar lens,
Savio CAK-01 USB webcam (CMOS, 1920×1080),
Illumination system (backlight) for the Mako camera. In the case of the camera placed on the robot, lighting was limited to that installed in the room.
After comparing the parameters of the available cameras, the Mako G-125B industrial camera was chosen to carry out the measurement sequence. This camera was permanently attached to the supporting structure of the station, while the second camera, the Savio CAK-01, was mounted on the robot's flange. The arrangement of the individual components of the station is depicted in Figure 8.
The second camera, affixed to the robot's flange, was connected directly to the PC by a USB cable. Since the Mako camera supports PoE, the Pulsar S54-PoE network switch was used and the remaining architectural elements were connected in a star topology. A detailed diagram of the described communication model (Figure 7) and a view of the actual cobot station, with its essential elements marked (Figure 8), are shown below.
Figure 7.
Communication diagram of the robotic station.
Figure 8.
Layout of the cobot station: 1 – UR5 CB2 cobot, 2 – illuminator (backlight lighting), 3 – Savio CAK-01 USB webcam, 4 – Mako G-125B camera, 5 – container for rejected defective details, 6 – warehouse space, 7 – piCOBOT vacuum ejector from Piab.
In the software layer, communication at the PC ↔ UR5 level uses the Primary/Secondary and Real-Time interfaces provided by a server implemented in the robot controller. The server supports the URScript interpreter and broadcasts basic data about the robot's state (joint positions, supply voltages of the individual drives, etc.) (Figure 9).
For communication with and control of the cobot, proprietary libraries based on the URScript language were developed in Python. For instance, functions were implemented to control the cobot's motion, enabling execution of the manipulator's movement with joint and linear interpolation (equivalents of the MoveJ and MoveL commands). For PC ↔ Mako G-125B communication, the Python programming interface (API) provided by Allied Vision as part of the Vimba software was applied (Vimba Python API). For the PC ↔ Savio CAK-01 connection, the mechanism for capturing system USB interfaces available in the OpenCV libraries was employed.
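The custom communication layer itself is not listed in the paper; the sketch below shows one common way such MoveJ/MoveL equivalents can be realized by sending URScript over the controller's secondary interface (TCP port 30002 on CB-series controllers). The IP address, pose, and motion parameters are placeholders:

```python
import socket

UR_IP = "192.168.1.10"        # placeholder controller address
UR_PORT = 30002                # secondary interface of the UR controller

def send_urscript(script: str) -> None:
    """Send a single URScript command to the robot controller."""
    with socket.create_connection((UR_IP, UR_PORT), timeout=2.0) as s:
        s.sendall((script + "\n").encode("ascii"))

def movej(joints_rad, a=1.0, v=0.5):
    # Equivalent of the MoveJ command: joint-interpolated motion.
    send_urscript(f"movej({list(joints_rad)}, a={a}, v={v})")

def movel(pose, a=0.5, v=0.1):
    # Equivalent of the MoveL command: linear TCP motion; pose = [x, y, z, rx, ry, rz].
    send_urscript(f"movel(p{list(pose)}, a={a}, v={v})")

movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0])
```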
Figure 9.
Schematic diagram showing the available communication interfaces of Universal Robots [59].
During the station’s operation, the cameras’ image acquisition processes and the cobot’s data exchange processes must work independently of the main program loop, which is why they are launched as independent threads.
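A minimal sketch of such an acquisition thread for the USB camera, using OpenCV's capture mechanism, is given below; the device index is an assumption:

```python
import threading
import cv2 as cv

class CameraWorker(threading.Thread):
    """Continuously grabs frames from a USB camera so that the main control
    loop never blocks on image acquisition."""

    def __init__(self, device_index=0):
        super().__init__(daemon=True)
        self.cap = cv.VideoCapture(device_index)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True

    def run(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def latest(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

worker = CameraWorker(0)
worker.start()
# The main program loop and the cobot data-exchange thread run alongside,
# periodically calling worker.latest() to obtain the most recent image.
```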
5. Experimental Setup and Results
This chapter presents an analysis of the results of a series of measurements carried out to determine the level of measurement noise, accuracy, and repeatability of the developed vision system.
An object with an irregular shape (Figures 10 and 11), made using 3D printing technology, was designed to carry out the series of measurements. The first stage of the measurement sequence involves taking 100 photos (in the optimal position of the camera's field of view, i.e., at the center of the optical axis) under identical lighting conditions and system configuration. In the next step, the locations and diameters of the internal holes (1-4) and the external diameter (5) of the prepared detail are computed (Figure 10). The series of measurements was carried out both without and with backlight lighting.
Figure 10.
Features of the detail analyzed during measurements: 1-4 – hole diameters, 5 – outer diameter of the detail.
Figure 11.
Developed measuring detail with marked dimensions.
The obtained results are presented below (Table 3, Figures 12-15), including a comparison of the following values: standard deviation, standard uncertainty, relative error, absolute error, and mean value.
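These statistics can be reproduced with a few lines of NumPy, assuming the standard uncertainty is taken as the standard deviation of the mean (s divided by the square root of n) and that the nominal dimensions are 10 mm for the holes and 100 mm for the outer diameter, which is consistent with the tabulated values; the sample readings below are simulated:

```python
import numpy as np

def series_stats(measurements_mm, nominal_mm):
    x = np.asarray(measurements_mm, dtype=float)
    n = x.size
    mean = x.mean()
    std = x.std(ddof=1)                      # sample standard deviation
    uncertainty = std / np.sqrt(n)           # standard uncertainty of the mean
    abs_err = abs(mean - nominal_mm)         # average absolute error vs. nominal size
    rel_err = 100.0 * abs_err / nominal_mm   # mean relative error in %
    return mean, std, uncertainty, abs_err, rel_err

# Example: 100 simulated diameter readings of a nominally 10 mm hole.
readings = np.random.normal(10.03, 0.011, size=100)
print(series_stats(readings, nominal_mm=10.0))
```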
Table 3.
Comparison of the results obtained for measurements without and with backlight.
Without backlight lighting
Feature | Mean value [mm] | Standard deviation [mm] | Standard uncertainty [mm] | Average absolute error [mm] | Mean relative error [%]
1 | 9.66331 | 0.01419 | 0.00142 | 0.33669 | 3.36700
2 | 9.72947 | 0.01649 | 0.00165 | 0.27053 | 2.70500
3 | 9.60860 | 0.01471 | 0.00147 | 0.39140 | 3.91400
4 | 9.68782 | 0.01498 | 0.00150 | 0.31218 | 3.12200
5 | 100.17344 | 0.01559 | 0.00156 | 0.17344 | 0.17300
With backlight lighting
Feature | Mean value [mm] | Standard deviation [mm] | Standard uncertainty [mm] | Average absolute error [mm] | Mean relative error [%]
1 | 10.03240 | 0.01141 | 0.00114 | 0.03240 | 0.32400
2 | 10.07091 | 0.01023 | 0.00102 | 0.07091 | 0.70900
3 | 9.97986 | 0.01084 | 0.00108 | 0.02014 | 0.20100
4 | 10.06637 | 0.00866 | 0.00087 | 0.06637 | 0.66400
5 | 99.90848 | 0.00992 | 0.00099 | 0.09152 | 0.09200
Figure 12.
Comparison of results for measurements without lighting and with backlight lighting - hole diameters 1-4.
Figure 13.
Comparison of results for measurements without lighting and with backlight lighting - hole diameter 5.
Figure 14.
Comparison of results for measurements without lighting and with backlight lighting - hole diameter 3.
Figure 15.
Comparison of results for measurements without lighting and with backlight lighting - outer diameter of the detail (5).
The next series of measurements concerns determining the accuracy and repeatability of the developed vision system. The collected data also make it possible to draw a map of measurement error variability, whose analysis allows identifying the region of the camera's image space where the measurement accuracy is highest.
The described measurement sequence involves taking a series of 50 photos at each of 40 positions in the camera's field of view (the photos were taken both without and with backlight lighting). The next step, as in the first measurement sequence, is determining the basic geometric dimensions of the detail (Figures 10 and 11). The results are presented below (Tables 4 and 5, Figures 16-25), including standard deviation maps.
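A standard deviation map of the kind shown in the following figures can be produced by grouping the repeated measurements taken at each field-of-view position and plotting the per-position standard deviation as a heat map; the 5×8 grid shape and the data below are assumptions made for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# measurements[i, j] = j-th of 50 repeated diameter readings at position i (40 positions).
measurements = np.random.normal(10.0, 0.04, size=(40, 50))   # illustrative data

std_per_position = measurements.std(axis=1, ddof=1)
std_map = std_per_position.reshape(5, 8)      # assumed 5x8 grid of camera FOV positions

plt.imshow(std_map, cmap="viridis", origin="lower")
plt.colorbar(label="standard deviation [mm]")
plt.title("Hole diameter standard deviation map")
plt.show()
```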
Table 4.
Outer diameter and inner diameter measurement results (without backlight lighting).
Feature | Mean value [mm] | Average absolute error [mm] | Mean relative error [%] | Standard deviation [mm] | Standard uncertainty [mm]
1 | 9.68237 | 0.31763 | 3.17600 | 0.04030 | 0.00637
2 | 9.67630 | 0.32370 | 3.23700 | 0.03887 | 0.00615
3 | 9.64648 | 0.35352 | 3.53500 | 0.04122 | 0.00652
4 | 9.69851 | 0.30149 | 3.01500 | 0.03845 | 0.00608
5 | 100.33039 | 0.33039 | 0.33000 | 0.34381 | 0.05436
Figure 16.
Hole diameter standard deviation map – 1 (without backlight lighting).
Figure 17.
Hole diameter standard deviation map – 2 (without backlight lighting).
Figure 18.
Hole diameter standard deviation map – 3 (without backlight lighting).
Figure 19.
Hole diameter standard deviation map – 4 (without backlight lighting).
Figure 20.
Map of the standard deviation of the outer diameter of the workpiece (without backlight lighting).
Table 5.
Results of measurements of the external diameter of the detail and the diameters of internal holes (with backlight lighting).
Feature | Mean value [mm] | Average absolute error [mm] | Mean relative error [%] | Standard deviation [mm] | Standard uncertainty [mm]
1 | 10.01346 | 0.01346 | 0.13500 | 0.04093 | 0.00647
2 | 10.00943 | 0.00943 | 0.09400 | 0.04007 | 0.00634
3 | 9.99865 | 0.00135 | 0.01400 | 0.04293 | 0.00679
4 | 10.02750 | 0.02750 | 0.27500 | 0.03953 | 0.00625
5 | 100.10132 | 0.10132 | 0.10100 | 0.40595 | 0.06419
Figure 21.
Hole diameter standard deviation map – 1 (backlight lighting).
Figure 22.
Hole diameter standard deviation map – 2 (backlight lighting).
Figure 23.
Hole diameter standard deviation map – 3 (backlight lighting).
Figure 24.
Hole diameter standard deviation map – 4 (backlight lighting).
Figure 25.
Standard deviation map of workpiece outer diameter (backlight lighting).
Analyzing the obtained results (Tables 4 and 5), it can be seen that additional (backlight) lighting significantly improved the average accuracy and repeatability of the vision system: without additional lighting, the average accuracy is estimated at 0.325 mm, whereas with backlight it is 0.031 mm.
6. Integration of the Developed System
This chapter briefly describes the components of the individual stations that make up the robotic line. The communication model between the stations adjacent to the integrated cobot station is described, and a view of the actual line with its crucial components indicated is presented.
The analyzed robotic line is built of four robotic stations:
➢ Station I - the stand is equipped with:
an OMRON LD-90 mobile robot,
a Scorpion 3D Stinger stereo vision system,
a Mitsubishi RV-2AJ stationary robot,
a conveyor belt constituting a transport route between stations I and II.
➢ Station II - the stand is equipped with:
a SCARA robot, OMRON i4-550L,
a Basler acA1600-60gm camera,
bright field lighting.
➢ Station III - the stand is equipped with:
a SCARA robot, OMRON i4-550L,
a Basler acA1300-60gm camera,
a conveyor belt.
➢ Station IV
A detailed description of this station is included in Chapter 3 (System Architecture).
As part of the integration of the described station with the neighboring stations, in the case of the SCARA robot (i4-550L, '3a' in Figure 27), a wired Ethernet connection between the robot controllers and a connection of individual inputs/outputs of the robot controllers (digital_in0, digital_out0 on the side of the UR5 robot controller) were provided. Over the Ethernet connection, the TCP/IP protocol was employed to exchange data regarding subsequent items from the unloading section, while the digital inputs/outputs manage access to the collision zone. In the case of the mobile robot, a wireless connection was deployed over WiFi through distributed WiFi I/O modules connecting the inputs/outputs of the robot controllers (digital_in2, digital_out1 on the side of the UR5 robot controller). This enabled synchronization of the part loading process.
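A simplified sketch of this data exchange is given below: a TCP/IP socket carries information about the next item from the unloading section, while access to the shared collision zone is signaled through the UR controller's digital output, set here by sending the standard URScript function set_digital_out(). The addresses, port numbers, and message format are placeholders:

```python
import socket

UR_IP, UR_PORT = "192.168.1.10", 30002          # placeholder UR controller address
SCARA_IP, SCARA_PORT = "192.168.1.20", 5000     # placeholder SCARA-side endpoint

def send_urscript(script: str) -> None:
    """Send a single URScript command to the UR controller."""
    with socket.create_connection((UR_IP, UR_PORT), timeout=2.0) as s:
        s.sendall((script + "\n").encode("ascii"))

def request_next_item() -> str:
    """Ask the neighbouring SCARA station over TCP/IP for the next item to unload."""
    with socket.create_connection((SCARA_IP, SCARA_PORT), timeout=2.0) as s:
        s.sendall(b"NEXT_ITEM?\n")               # assumed message format
        return s.recv(1024).decode().strip()

def set_collision_zone_busy(busy: bool) -> None:
    # Occupy/release the shared zone via digital_out0 of the UR5 controller.
    send_urscript(f"set_digital_out(0, {str(busy)})")
```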
A schematic diagram of the developed communication model (Figure 26) and a view of the actual robotic line, with descriptions of the individual components (Figure 27), are shown below.
Figure 26.
Communication diagram for the integration of station IV (UR5).
Figure 27.
View of the actual robotic line: 1a – OMRON LD-90, 1b – Scorpion 3D Stinger, 1c – Mitsubishi RV-2AJ, 1d – conveyor belt, 2a – OMRON i4-550L, 2b – Basler acA1600-60gm camera and bright field illuminator, 3a – OMRON i4-550L, 3b – Basler acA1300-60gm camera, 3c – conveyor belt, 4a – UR5 CB2, 4b – Mako G-125B, 4c – SAVIO CAK-01, 4d – backlight.
The robotic line operation algorithm assumes execution of the task in two iterations of a closed work cycle, during which specific groups of details are rejected at each station (each station is responsible for analyzing one of the geometric features of the detail). As part of cooperation with the neighboring stations, the designed cobot station performed the following tasks:
transporting elements from the storage area of the third station and, after completing the visual analysis, placing them in the local storage area,
transporting elements from the local storage area to the LD-90 mobile robot, from where the details return to the beginning of the line,
handling the collision zone in the area where elements are collected from the SCARA robot.