4.2. Experimental Results
To select the most suitable model for agricultural image segmentation, the TransUNet, CNNTransUNet, UNet, UNet++, and ATT-UNet models were compared and analyzed. The core metric for model performance evaluation was the intersection over union (IoU) on the test set. ATT-UNet outperformed the UNet and UNet++ models across the datasets, especially on the sugar beet dataset.
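The per-class IoU values reported in the tables below can be computed directly from predicted and ground-truth label maps. The following sketch (NumPy; the function name and toy data are illustrative, not the authors' code) shows one common way to do it, assuming each pixel carries an integer class label:

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Compute intersection over union for each class from two integer label maps."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        inter = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        # A class absent from both maps has an undefined IoU.
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 2x3 label maps with classes 0 = background, 1 = weed, 2 = crop.
pred = np.array([[0, 1, 2],
                 [0, 1, 2]])
target = np.array([[0, 1, 1],
                   [0, 2, 2]])
ious = per_class_iou(pred, target, num_classes=3)  # [1.0, 1/3, 1/3]
```

The mean of the per-class values gives the mean IoU used for the model-to-model comparisons in the text.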
On the sugar beet dataset, ATT-UNet achieved mean IoU improvements of 1.97% and 2.05% over the UNet and UNet++ models, respectively (Table 4). This result suggests that ATT-UNet can realize more accurate segmentation of images with complex backgrounds, even though UNet and UNet++ are already effective models.
On the rice dataset, where the data quantity was small and the rice seedlings and weeds had similar color characteristics, UNet was more suitable than UNet++: its simpler network structure and fewer parameters made learning and convergence easier (Table 5).
On the pea dataset, ATT-UNet achieved mean IoU improvements of 2.38% and 7.58% over the UNet and UNet++ models, respectively (Table 6). This significant performance advantage demonstrates ATT-UNet's ability to handle complex scenes effectively.
The TransUNet and CNNTransUNet models are more suitable when the dataset is relatively small, and CNNTransUNet outperforms TransUNet on larger datasets. Comparing all five models, the background IoU values were similar across models, but ATT-UNet demonstrated superior stability and accuracy in complex environments. This indicates that ATT-UNet performs well on both large and small datasets. ATT-UNet is therefore a highly suitable model for agricultural image segmentation, particularly in scenarios involving complex backgrounds and subtle inter-class differences.
Table 4. IoU Values for Five Training Models of Sugar Beet.
Model | Mean IoU | Background | Weed | Beet
UNet | 89.83% | 95.29% | 95.71% | 98.90%
UNet++ | 89.75% | 95.30% | 95.71% | 98.91%
ATT-UNet | 91.80% | 95.29% | 95.73% | 98.97%
TransUNet | 61.11% | 66.94% | 67.22% | 77.75%
CNNTransUNet | 68.18% | 66.88% | 56.08% | 80.27%
Table 5. IoU Values for Five Training Models of Rice.
Model | Mean IoU | Background | Weed | Rice
UNet | 73.88% | 82.90% | 86.25% | 89.68%
UNet++ | 74.90% | 82.89% | 86.26% | 89.37%
ATT-UNet | 76.38% | 82.91% | 86.21% | 89.50%
TransUNet | 77.09% | 76.32% | 76.44% | 77.75%
CNNTransUNet | 76.91% | 75.25% | 76.25% | 77.58%
Table 6. IoU Values for Five Training Models of Pea.
Model | Mean IoU | Background | Weed | Pea
UNet | 81.17% | 85.36% | 90.43% | 91.34%
UNet++ | 76.52% | 85.32% | 90.40% | 91.38%
ATT-UNet | 84.10% | 85.36% | 90.37% | 91.49%
TransUNet | 81.56% | 68.77% | 69.87% | 93.26%
CNNTransUNet | 81.20% | 68.69% | 69.56% | 92.85%
Figure 8.
Rice Experiment Results, (a) few weeds and many rice plants; (b) many weeds and few rice plants; (c) both weeds and rice plants are abundant.
Figure 9.
Sugar Beet Experiment Results, (a) only weeds present; (b) only sugar beets present; (c) both sugar beets and weeds present.
Figure 10.
Pea Experiment Results, (a) many peas and few weeds; (b) many weeds and few peas; (c) both peas and weeds are abundant.
The results of the ATT-UNet algorithm were compared with those of the TransUNet, CNNTransUNet, UNet, and UNet++ algorithms. In each figure, the images are presented from left to right as follows: the original image, the ground truth, and the segmentation results of UNet, UNet++, ATT-UNet, TransUNet, and CNNTransUNet. In the sugar beet dataset, the background is marked in black, sugar beets in green, and weeds in red. In the rice dataset, the background is black, weeds green, and rice red. In the pea dataset, the background is black, weeds red, and peas green. The segmentation results were converted to RGB format for visualization and output into three folders, categorized as background, crop, and weed. The experiments showed that the ATT-UNet algorithm produced fewer mislabeled pixels and segmented crop and weed images more accurately than the other algorithms.
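The RGB conversion described above amounts to mapping each class index to a fixed color. A minimal sketch, assuming integer class labels and using the color convention stated for the sugar beet dataset (the palette variable and function name are illustrative, not the authors' code):

```python
import numpy as np

# Palette following the sugar beet convention described in the text:
# class 0 = background (black), class 1 = crop (green), class 2 = weed (red).
PALETTE = np.array([
    [0, 0, 0],      # background
    [0, 255, 0],    # crop
    [255, 0, 0],    # weed
], dtype=np.uint8)

def labels_to_rgb(label_map):
    """Map a 2-D array of class indices to an RGB image via integer indexing."""
    return PALETTE[label_map]

label_map = np.array([[0, 1],
                      [2, 1]])
rgb = labels_to_rgb(label_map)  # shape (2, 2, 3)
```

NumPy's integer-array indexing makes the lookup a single vectorized operation, so no per-pixel loop is needed.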
A series of detailed experiments was designed to comprehensively evaluate the performance of the models in agricultural image segmentation, with a focus on their ability to distinguish crops such as rice, sugar beet, and pea from weeds. These experiments simulated a variety of real farm environments to assess how the models segment under different conditions.
For clarity and ease of analysis, each segmentation result was stored in a separate folder according to its category. In the sugar beet experiments, the ATT-UNet model showed a particularly strong segmentation effect: compared to the other models, it produced fewer pixel errors and higher segmentation accuracy, especially in complex scenes with dense weeds and crops.
These experimental results demonstrate that the ATT-UNet model not only excels at precise segmentation of crops and weeds but also robustly handles agricultural images of varying types and complexities. This high-precision segmentation technology is expected to benefit future agricultural robots and intelligent agricultural systems, enabling more accurate and efficient crop management and weed control and thereby enhancing the overall efficiency and sustainability of agricultural production.