Preprint
Article

Iris Recognition System using Advanced Segmentation Techniques and Fuzzy Clustering Methods for Robot Control

This version is not peer-reviewed. A peer-reviewed article of this preprint also exists.

Submitted: 22 September 2024
Posted: 24 September 2024
Abstract
The idea of developing a robot controlled by iris movement to assist physically disabled individuals is innovative and has the potential to significantly improve their quality of life. Movement disability has a huge impact on the lives of physically disabled people, and this technology can empower individuals with limited mobility and enhance their ability to interact with their environment; there is therefore a need to develop a robot that can be controlled using iris movement. The main idea of this work revolves around iris recognition from an eye image, specifically identifying the centroid of the iris. The centroid's position is then used to issue commands to control the robot, leveraging iris movement as a means of communication and control and offering a potential breakthrough in assisting individuals with physical disabilities. The proposed method aims to improve the precision and effectiveness of iris recognition by incorporating advanced segmentation techniques and fuzzy clustering methods. Fast gradient filters using a fuzzy inference system (FIS) are employed to separate the iris from its surroundings, the bald eagle search (BES) algorithm is then used to locate and isolate the iris region, and the Fuzzy KNN algorithm is applied for the matching process. This combined methodology improves the overall performance of iris recognition systems by leveraging advanced segmentation, search, and classification techniques. The results of the proposed model are validated using the True Success Rate (TSR) and compared to other existing models, highlighting the effectiveness of the proposed method on the 400 tested images representing 40 people.
Keywords:
Subject: Computer Science and Mathematics - Computer Vision and Graphics

1. Introduction

The concept of a robot controlled by iris movement to aid individuals with physical disabilities is innovative and has the potential to substantially improve their overall quality of life. Movement disabilities have a substantial impact on the lives of physically disabled individuals, and this technology can empower them by offering greater autonomy and more independent, effective interaction with their environment, addressing specific challenges faced by those with mobility limitations. Hence, there is a need to develop a robot that can be controlled using iris movement.
The iris stands out as one of the safest and most accurate biometric authentication methods. Unlike external features like hands and face, the iris is an internal organ, shielded and, as a result, less susceptible to damage [5,6,7]. This inherent protection contributes to the reliability and durability of iris-based authentication systems.
In their work, the authors of [8] proposed a new method for iris recognition using the Scale Invariant Feature Transform (SIFT). The initial step involves extracting the SIFT features, followed by a matching process between two images, executed by comparing the associated descriptors at each local extremum. Experimental results on the BioSec multimodal database show that integrating SIFT with this matching approach yields significantly better performance than various existing methods.
An alternative model discussed in [9] is the Speeded up Robust Features (SURF). The proposed model in [9] suggests the potential advantages of SURF in the context of iris recognition or other related applications. This model introduces a method that emphasizes efficient and robust feature extraction. SURF enhances the speed of feature detection and description, making it particularly suitable for applications with real-time constraints. This model extracts unique features from annular iris images, which results in satisfactory recognition rates.
In their work, the authors [10] have developed a system for iris recognition using a SURF key extracted from normalized and enhanced iris images. This meticulous process is designed to yield high accuracy in iris recognition, displaying the effectiveness of employing SURF-based techniques in enhancing the precision of biometric systems.
With the same objective, Masek [11] utilized the Canny edge detector and the circular Hough transform to effectively detect iris boundaries. The feature extraction process involves Log-Gabor wavelets, and recognition is achieved through the application of Hamming distance. This approach highlights the integration of various image-processing techniques to enhance iris recognition accuracy [11].
In [12], the authors introduced an iris recognition system that focuses on characterizing local variations within image structures. The approach involves constructing a one-dimensional (1D) intensity signal, capturing essential local variations from the original 2D iris image. Additionally, Gaussian–Hermite moments of the intensity signals are employed as distinctive features. The classification process utilizes the cosine similarity measure and a nearest-centre classifier. This work demonstrates a systematic approach to iris recognition by emphasizing local image variations and employing specific features for accurate classification.
Recent research papers have studied the integration of machine learning techniques into iris recognition. In [13], the authors present a model that uses artificial neural networks for personal identification. Another work, presented in [14], applied neural networks to iris recognition: eyes are extracted from images, normalized, and enhanced, and this process is applied across numerous images within a dataset, with a neural network performing iris classification. These approaches reflect the increasing use of machine learning to advance the accuracy and efficiency of iris recognition systems.
The control cycle of a robot based on iris recognition involves specific steps tailored to the task of identifying individuals using their iris patterns.
As shown in Figure 1, the process begins with capturing images of individuals’ faces using a camera system equipped with appropriate optics for iris imaging. This step is crucial to obtain clear and high-resolution images of the iris. Once the images are captured, the robot’s software analyzes them to locate the regions of interest corresponding to the irises within the images. This localization step involves detecting the circular shape of the iris and isolating it from the rest of the eye.
After localizing the iris region, the robot’s software segments the iris pattern from the surrounding structures such as eyelids and eyelashes. This segmentation process ensures that only the iris pattern is considered for recognition, improving accuracy. The robot’s software compares the extracted iris pattern with the templates in the database using similarity metrics or pattern-matching algorithms. Based on the degree of similarity or a predefined threshold, the system makes a decision regarding the identity of the individual. Once the closest-matched template is identified, the associated gaze direction is used to estimate the user’s current gaze direction.
The estimated gaze direction can be represented numerically (e.g., angles relative to the camera’s orientation) or categorically as shown in Figure 2, (e.g., “left,” “right,” “up,” “down,” “center”). Depending on the application, additional processing or calibration may be necessary to translate the estimated gaze direction into meaningful actions or commands for the robot or system being controlled. In addition, throughout the process, the robot provides feedback to the user, indicating whether the iris recognition was successful or if any errors occurred. In case of errors or unsuccessful recognition attempts, the system may prompt the user to retry or seek alternative authentication methods.
In our proposed model, the iris recognition system is structured around two principal stages: “Iris Segmentation and Localization” and “Feature Classification and Extraction” [16]. This systematic approach ensures a comprehensive process that involves accurately identifying and isolating the iris in the initial stage, followed by the extraction and classification of relevant features to facilitate robust and precise iris recognition.
The proposed approach in this work diverges from traditional methods by introducing a novel concept that combines advanced segmentation, search, and classification techniques for iris recognition. The proposed method aims to exhaustively explore various solutions by integrating these techniques for improved iris recognition performance. The iris localization is achieved through the application of fast gradient filters utilizing a fuzzy inference system (FIS). This process involves employing rapid gradient-based filtering techniques in conjunction with a fuzzy inference system to accurately identify and extract the iris region from its background. Then, the process of iris segmentation is performed utilizing the bald eagle search (BES) algorithm. This involves employing the BES algorithm to efficiently locate and delineate the boundaries of the iris region within an image, ensuring accurate segmentation for subsequent analysis and recognition tasks.
The initial step employs fast gradient filters using a fuzzy inference system (FIS) to segment the iris into two distinct classes. Then, the bald eagle search (BES) algorithm is applied to identify and extract the iris region from its background. Subsequently, feature extraction is performed through the integration of DWT and PCA. Finally, classification is carried out using the Fuzzy KNN classifier.
Section 2 discusses the proposed iris recognition model. The results are represented in Section 3. Section 4 concludes the paper.

2. The Proposed Method

Iris recognition is a biometric technique used to identify individuals based on the unique features of their iris. It offers an automated method for authentication and identification by analyzing patterns within the iris. This process involves capturing images or video recordings of one or both irises using cameras or specialized iris scanners. Mathematical pattern recognition techniques are then applied to extract and analyze the intricate patterns present in the iris.
In this work, we are interested in identifying people by their iris. The proposed system is conceptually different and explores new strategies. Specifically, it explores the potential of combining advanced segmentation techniques and fuzzy clustering methods. This unconventional method offers a fresh perspective on iris recognition, aiming to enhance accuracy and efficiency through innovative algorithmic integration rather than incremental design improvements.
The proposed iris recognition and centroid-localization method, shown in Figure 3, is developed in four fundamental steps: (1) localization, (2) segmentation, (3) iris matching/classification, and (4) finding the centroid location [16].
The iris localization step involves detecting the iris region within the human image. Following this, the images are segmented into two classes: iris and non-iris. The feature extraction phase is pivotal in the recognition cycle, where feature vectors are extracted for each identified iris from the segmentation phase. Accurate feature extraction is crucial for achieving precise results in subsequent steps.
Through the application of fast gradient filters utilizing a fuzzy inference system (FIS), iris localization algorithms can accurately identify the edges of the iris while effectively distinguishing them from surrounding structures such as eyelids. This capability enables precise segmentation of the iris, laying the foundation for subsequent processing steps in the iris recognition pipeline. By ensuring accurate segmentation, these algorithms enhance match accuracy and overall performance of iris recognition systems. Then, the process of iris segmentation is accomplished through the utilization of the Bald Eagle Search (BES) algorithm. This algorithm efficiently locates and delineates the boundaries of the iris within an image, ensuring accurate segmentation. By employing the BES algorithm, the iris region can be accurately isolated from its surrounding structures, enabling further analysis and recognition tasks in the iris recognition pipeline with enhanced precision and reliability.
In the iris recognition system, feature extraction is paramount for achieving high recognition rates and reducing classification time. The efficiency of the feature extraction technique significantly impacts the success of recognition and classification tasks on iris templates. This study investigates the integration of Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA) for feature extraction from iris images.
The proposed technique aims to generate iris templates with reduced resolution and runtime, optimizing the classification process. Initially, DWT is applied to the normalized iris image to extract features. DWT enables the capture and representation of essential image characteristics in a multi-resolution framework, facilitating efficient feature extraction.
By leveraging DWT and PCA, the proposed method enhances the effectiveness of feature extraction from iris images, resulting in improved recognition accuracy and reduced computational overhead. The integration of these techniques enables the generation of compact yet informative iris templates, contributing to the overall performance enhancement of the iris recognition system.
During the recognition phase, matching and classification of image features are conducted based on Fuzzy KNN algorithm to determine the identity of an iris in comparison to all template iris databases. This comprehensive process ensures that iris recognition is performed accurately and effectively, providing reliable results for identity verification or authentication purposes.
After the iris recognition step and the detection of iris boundaries, the localization of the center of the iris is a crucial step, especially for estimating gaze direction. Once the center of the iris is accurately localized, it can serve as a reference point for determining the direction in which a person is looking. This information is then used to control the robot in the desired direction. The flowchart shown in Figure 4 depicts the stages of the proposed iris recognition.

2.1. Iris Localization through Fast Gradient Filters Using a Fuzzy Inference System (FIS)

Iris localization is indeed a critical step in iris recognition systems as it significantly impacts match accuracy. This step primarily involves identifying the borders of the iris, including the inner and outer edges of the iris as well as the upper and lower eyelids.
Detecting edges in digital images through fast gradient filters using a fuzzy inference system (FIS) involves combining traditional gradient-based edge detection techniques with fuzzy logic to improve edge detection performance, particularly in noisy or complex image environments [17].
Fast gradient filters compute derivatives of the image intensity to identify edges. They are commonly used due to their simplicity and effectiveness. The second derivative $G$ of an image $I$ is typically calculated by convolving $I$ with a kernel $K$:

$$G = I * K$$
The Laplacian operator is utilized for edge detection and image enhancement tasks. The Laplacian edge detector uses only one kernel. It calculates second order derivatives in a single pass. Two commonly used small kernels are:
$$K = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad K = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$
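As an illustration, the following minimal Python sketch applies the 4-neighbour Laplacian kernel and thresholds the response; the fixed threshold here is an illustrative assumption (the adaptive FIS version is discussed next).

```python
import numpy as np
from scipy.ndimage import convolve

# 4-neighbour Laplacian kernel, as given above.
K4 = np.array([[0, 1, 0],
               [1, -4, 1],
               [0, 1, 0]], dtype=float)

def laplacian_edges(image, threshold):
    """Convolve with the Laplacian kernel and mark pixels where |G| exceeds a threshold."""
    g = convolve(image.astype(float), K4, mode='nearest')
    return np.abs(g) > threshold
```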
In this work, we focus on adjusting a single parameter, the threshold used in edge detection, based on the fuzzified gradient magnitude. A fuzzy inference system (FIS) adaptively adjusts the parameters or thresholds of the fast gradient filters according to fuzzy rules and input variables. The four main components of the FIS are depicted in Algorithm 1.
Algorithm 1. Edge detection through fast gradient filters using a fuzzy inference system (FIS).
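The following minimal sketch illustrates the idea: the local gradient magnitude (assumed normalized to [0, 1]) is fuzzified by two shoulder membership functions, two rules fire, and a weighted average defuzzifies the result into an adaptive edge threshold. The breakpoints and output levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def low_mf(g):
    """Membership in 'gradient is LOW' (shoulder function over [0, 0.4])."""
    return np.clip((0.4 - g) / 0.4, 0.0, 1.0)

def high_mf(g):
    """Membership in 'gradient is HIGH' (shoulder function over [0.2, 1.0])."""
    return np.clip((g - 0.2) / 0.8, 0.0, 1.0)

def adaptive_threshold(grad_mag):
    """Sugeno-style weighted average of two rule outputs.

    Rule 1: IF gradient is LOW  THEN threshold is HIGH (0.8, suppress noise)
    Rule 2: IF gradient is HIGH THEN threshold is LOW  (0.3, keep true edges)
    """
    w_low, w_high = low_mf(grad_mag), high_mf(grad_mag)
    return (w_low * 0.8 + w_high * 0.3) / (w_low + w_high + 1e-9)
```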
By applying fast gradient filters using a fuzzy inference system (FIS), iris localization algorithms can effectively identify the edges of the iris and differentiate them from surrounding structures such as eyelids. This facilitates accurate segmentation and subsequent processing steps in the iris recognition pipeline, ultimately improving match accuracy. Figure 5 presents the localization of the iris through the fast gradient filters using a fuzzy inference system (FIS) and the reference edge detection.

2.2. Iris Segmentation Using Bald Eagle Search (BES) Algorithm

Iris segmentation is performed using the bald eagle search (BES) algorithm [18]. This approach consists of three main stages: select, search, and swoop. During the select stage, the algorithm explores the entire search space to locate potential solutions; the search stage exploits the selected area; and the swoop stage targets the best solution.
In the select stage, bald eagles identify and select the best area within the selected search space where they can hunt for prey. This behavior is expressed mathematically through Equation (1).
$$P_{i,\mathrm{new}} = P_{best} + \alpha \, r \, (P_{mean} - P_i) \tag{1}$$
where $i$ indexes the search agents, $\alpha$ is the parameter controlling the changes in position, taking a value between 1.5 and 2, and $r$ is a random number between 0 and 1.
In the selection stage, an area is selected based on the information available from the previous stage, and another search area that differs from, but lies near, the previous one is chosen at random. $P_{best}$ denotes the search space currently selected by the bald eagles based on the best position identified during their previous search; the eagles randomly search all points near this previously selected space. $P_{mean}$ indicates that all information from the previous points has been used. The current movement of the bald eagles is determined by multiplying the randomly searched prior information by $\alpha$, which randomly perturbs all search points.
In the search stage, the eagles search for prey within the selected space, moving in different directions along a spiral path to accelerate the search. The best position for the swoop is expressed mathematically in Equation (2):
$$P_{i,\mathrm{new}} = P_i + y(i)\,(P_i - P_{i+1}) + x(i)\,(P_i - P_{mean}) \tag{2}$$
$$x(i) = \frac{x_r(i)}{\max(|x_r|)}, \qquad y(i) = \frac{y_r(i)}{\max(|y_r|)}$$
$$x_r(i) = r(i)\,\sin(\theta(i)), \qquad y_r(i) = r(i)\,\cos(\theta(i))$$
$$\theta(i) = a\,\pi\,\mathrm{rand}, \qquad r(i) = \theta(i) + R\,\mathrm{rand}$$
where $a$ is a parameter taking a value between 5 and 10 that determines the angle of the point search around the central point, $R$ takes a value between 0.5 and 2 and determines the number of search cycles, and $\mathrm{rand}$ takes values between 0 and 1.
During the swoop stage, bald eagles swoop from the optimal position within the search space toward their target prey, and all points within the space also converge towards the optimal point. Equation (3) provides a mathematical representation of this behavior.
$$P_{i,\mathrm{new}} = \mathrm{rand} \cdot P_{best} + x_1(i)\,(P_i - c_1 P_{mean}) + y_1(i)\,(P_i - c_2 P_{best}) \tag{3}$$
$$x_1(i) = \frac{x_r(i)}{\max(|x_r|)}, \qquad y_1(i) = \frac{y_r(i)}{\max(|y_r|)}$$
$$x_r(i) = r(i)\,\sinh(\theta(i)), \qquad y_r(i) = r(i)\,\cosh(\theta(i))$$
$$\theta(i) = a\,\pi\,\mathrm{rand}, \qquad r(i) = \theta(i)$$
where $c_1$ and $c_2$ are control parameters taking values in the range $[1, 2]$. Finally, the solutions in $P$ are reported as the final population, and the best solution in the population is taken as the solution to the problem.
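Under Equations (1) to (3), one BES iteration can be sketched as follows. The population layout, fitness function, and parameter values are assumptions for illustration; for iris segmentation, each agent could encode candidate threshold levels, with fitness measuring segmentation quality.

```python
import numpy as np

rng = np.random.default_rng(0)

def bes_step(P, fitness, alpha=2.0, a=10.0, R=1.5, c1=2.0, c2=2.0):
    """One select/search/swoop iteration over population P (n_agents x dim)."""
    n = P.shape[0]
    p_best = P[np.argmin([fitness(p) for p in P])]   # best agent so far
    p_mean = P.mean(axis=0)                          # mean position

    # Select stage, Eq. (1): move toward the mean around the best position.
    P = p_best + alpha * rng.random((n, 1)) * (p_mean - P)

    # Search stage, Eq. (2): spiral movement within the selected space.
    theta = a * np.pi * rng.random(n)
    r = theta + R * rng.random(n)
    x = (r * np.sin(theta)) / np.max(np.abs(r * np.sin(theta)))
    y = (r * np.cos(theta)) / np.max(np.abs(r * np.cos(theta)))
    P = P + y[:, None] * (P - np.roll(P, -1, axis=0)) + x[:, None] * (P - p_mean)

    # Swoop stage, Eq. (3): all agents converge on the best position.
    theta = a * np.pi * rng.random(n)
    xr, yr = theta * np.sinh(theta), theta * np.cosh(theta)
    x1, y1 = xr / np.max(np.abs(xr)), yr / np.max(np.abs(yr))
    P = (rng.random((n, 1)) * p_best
         + x1[:, None] * (P - c1 * p_mean)
         + y1[:, None] * (P - c2 * p_best))
    return P
```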

2.3. Feature Extraction and Matching

Feature extraction is the most critical part of the iris recognition system: the recognition rate and the classification time of two iris templates depend largely on an efficient feature-extraction technique. This work explores the integration of DWT and PCA for feature extraction from images [19].
In this section, the proposed technique produces an iris template with reduced resolution and runtime for classifying the iris templates. To produce the template, firstly DWT has been applied to the normalized iris image. Feature extraction using Discrete Wavelet Transform (DWT) is used in this work to capture and represent important characteristics of images in a multi-resolution framework.
The first step decomposes the image into four frequency sub-bands, namely LL (low-low), LH (low-high), HL (high-low), and HH (high-high), using the DWT. The DWT achieves this by passing the signal through a series of low-pass and high-pass filters, followed by downsampling.
The LL sub-band preserves the essential features and characteristics of the iris, so this sub-band is retained for further processing.
Figure 5(a) shows the original iris image (256 × 256). After applying the DWT to the normalized iris image, the LL sub-band has a resolution of (128 × 128). The LL sub-band is a lower-resolution approximation of the iris that retains the required features, so it is used instead of the original normalized iris data for further processing with PCA. Because the resolution of the iris template is reduced, the runtime of the classification is correspondingly reduced.
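As a sketch, this step can be reproduced with PyWavelets; the 'haar' wavelet is an assumption, since the paper does not state which wavelet family is used.

```python
import numpy as np
import pywt

def ll_subband(iris_img):
    """Single-level 2D DWT; keep only the LL approximation band.

    A 256x256 input yields a 128x128 LL band, which stands in for the
    normalized iris image in the subsequent PCA step.
    """
    LL, (LH, HL, HH) = pywt.dwt2(iris_img.astype(float), 'haar')
    return LL
```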
In the second step, PCA extracts the most discriminating information from the LL sub-band to form the feature matrix, which is then passed to the classifier for recognition. The mathematical analysis of PCA proceeds as follows.
The mean of each vector of the matrix LL of size $(N \times M)$ is given by:
$$x_m = \frac{1}{N}\sum_{k=1}^{N} x_k$$
The mean is subtracted from all of the vectors to produce a set of zero-mean vectors:
$$x_z = x_i - x_m$$
where $x_z$ is the zero-mean vector, $x_i$ is each element of the column vector, and $x_m$ is the mean of each column vector.
The covariance matrix is computed as:
$$C = x_z^T x_z$$
The eigenvectors and eigenvalues are computed from:
$$(C - \gamma_i I)\, e_i = 0$$
where the $\gamma_i$ are the eigenvalues and the $e_i$ are the eigenvectors.
Each eigenvector is multiplied by the zero-mean vectors $x_z$ to form the feature vector:
$$f_i = x_z\, e$$
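A minimal numpy sketch of these PCA steps follows, assuming each row of the input matrix is one flattened LL template; the number of retained components (128, matching the largest feature-vector subset used in Section 3) is an assumption.

```python
import numpy as np

def pca_features(LL_templates, n_components=128):
    """Zero-mean the data, form the covariance matrix, and project onto eigenvectors."""
    X = np.asarray(LL_templates, dtype=float)   # one flattened LL template per row
    Xz = X - X.mean(axis=0)                     # zero-mean vectors x_z
    C = Xz.T @ Xz                               # covariance matrix C = x_z^T x_z
    vals, vecs = np.linalg.eigh(C)              # eigenvalues/eigenvectors of C
    order = np.argsort(vals)[::-1][:n_components]
    E = vecs[:, order]                          # most discriminating directions
    return Xz @ E                               # feature vectors f = x_z e
```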
In iris recognition, a similarity measure is utilized to quantify the resemblance between two iris patterns based on the features extracted from them. Various techniques are employed for comparing irises, each with its own advantages and limitations. For classification, the fuzzy K-nearest-neighbour (Fuzzy KNN) algorithm is an attractive approach for iris recognition.
The Fuzzy KNN algorithm revolves around the principle of membership assignment [20]. Like the classic KNN algorithm, this variant finds the k nearest neighbours of a test sample in the training dataset; it then assigns "membership" values to each class found among those k nearest neighbours.
The membership values are calculated using a fuzzy weighting of each class:
$$\mu_i(P) = \frac{\sum_{j=1}^{K} \mu_{ij} \left( 1 / d(P, f_j)^{2/(m-1)} \right)}{\sum_{j=1}^{K} \left( 1 / d(P, f_j)^{2/(m-1)} \right)}$$
where $P$ is the test pattern, $f_j$ is the $j$-th nearest neighbour, $\mu_{ij}$ is the membership of neighbour $j$ in class $i$, $d(\cdot,\cdot)$ is the distance between feature vectors, and $m = 2$.
Finally, the class with the highest membership is then selected for the classification result.
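A sketch of this classification rule, assuming crisp neighbour memberships ($\mu_{ij} = 1$ for a neighbour's own class) and Euclidean distance; the value k = 5 is an illustrative choice.

```python
import numpy as np

def fuzzy_knn(query, train_feats, train_labels, k=5, m=2):
    """Return the class with the highest fuzzy membership for `query`."""
    d = np.linalg.norm(train_feats - query, axis=1)   # distances d(P, f_j)
    nn = np.argsort(d)[:k]                            # k nearest neighbours
    w = 1.0 / (d[nn] ** (2.0 / (m - 1)) + 1e-12)      # weights 1 / d^(2/(m-1))
    memberships = {}
    for idx, wi in zip(nn, w):
        label = train_labels[idx]
        memberships[label] = memberships.get(label, 0.0) + wi
    return max(memberships, key=memberships.get)      # highest accumulated membership
```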

2.4. Locating the Center of the Iris

Once the iris boundary is detected, locating the center involves finding the centroid of the segmented iris region. This can be achieved using the following equations:
$$C_x = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad C_y = \frac{1}{N}\sum_{i=1}^{N} y_i$$
where $(x_i, y_i)$ are the coordinates of the iris boundary points and $N$ is the total number of boundary points.
These equations provide a basic framework for iris segmentation and center localization. However, it’s important to note that implementation details may vary depending on factors such as image quality, noise levels, and specific requirements of the application. Adjustments and optimizations may be necessary to achieve optimal segmentation and center localization performance.
To select the desired direction of gaze from the iris center coordinates $(C_x, C_y)$, the Euclidean distance is calculated between the iris center and the coordinates representing each direction of gaze (Up, Left, Middle, Right, Down, and Closed); the direction with the lowest Euclidean distance is chosen as the desired direction. Samples of the coordinates representing the Middle, Right, and Left directions are presented in Figure 6. For example, the Euclidean distance ($ED$) between the iris center $(C_x, C_y)$ and the coordinates representing the "Right" direction is:
$$ED_{Right} = \sqrt{(C_x - x_{Right})^2 + (C_y - y_{Right})^2}$$
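Putting the centroid and distance computations together, a minimal sketch follows; the anchor coordinates for each gaze direction are hypothetical values that would come from the calibration/initialization step described earlier.

```python
import numpy as np

# Hypothetical calibrated anchor points (pixel coordinates) per direction.
DIRECTIONS = {"Up": (128, 60), "Down": (128, 196),
              "Left": (60, 128), "Right": (196, 128), "Middle": (128, 128)}

def iris_centroid(boundary_pts):
    """Centroid (Cx, Cy) of the N iris boundary points."""
    return np.asarray(boundary_pts, dtype=float).mean(axis=0)

def gaze_direction(cx, cy):
    """Pick the direction whose anchor has the lowest Euclidean distance."""
    dists = {name: np.hypot(cx - x, cy - y) for name, (x, y) in DIRECTIONS.items()}
    return min(dists, key=dists.get)
```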

3. Experimental Results and Discussion

To evaluate the efficiency and accuracy of the proposed iris recognition method, experiments are conducted comparing its performance against existing methods, as described earlier. These experiments are carried out using MATLAB software (version 10), which provides a comprehensive environment for image processing, feature extraction, classification, and evaluation.
For performance analysis, the iris images in the CASIA Iris database are initially stored in gray level format, utilizing 8 bits with integer values ranging from 0 to 255. This format allows for efficient representation of grayscale intensity levels, facilitating subsequent image processing and analysis.
The CASIA Iris database is a significant dataset comprising 756 images captured from 108 distinct individuals. As one of the largest publicly available iris databases, it offers a diverse and comprehensive collection of iris images for evaluation purposes. The database encompasses variations in factors such as illumination conditions, occlusions, and pose variations, making it suitable for assessing the robustness and accuracy of iris recognition algorithms under real-world scenarios.
After performing the image segmentation detailed in Section 2.2, the homogeneous areas within each image were acquired. The BES algorithm is employed to handle a predetermined number of regions in an image (specifically, the iris and pupil) for segmentation purposes.
Figure 7 displays the segmentation results for an example image from the database. In this figure, (a) depicts the original image, while (d) illustrates its region representation. The segmentation results demonstrate that the two regions were accurately segmented through the bald eagle search (BES) algorithm.
To evaluate accuracy, a segmentation sensitivity criterion is employed to ascertain the number of correctly classified pixels. This criterion measures the ability of the segmentation algorithm to accurately identify and delineate regions of interest within the image. By comparing the segmented regions to ground truth annotations or manually labeled regions, the segmentation sensitivity criterion quantifies the accuracy of the segmentation process. This evaluation metric provides valuable insights into the performance of the segmentation algorithm, enabling researchers to assess its effectiveness and reliability in accurately partitioning images into meaningful regions.
Figure 8 shows sample iris images from the evaluation dataset, comprised of 756 images sourced from the CASIA database. These images serve as representative examples utilized in the evaluation process. For the segmentation of the test database, the computational effort required was significant. The segmentation process encompassed a total duration of 5.5 hours to process all 756 images. On average, the segmentation algorithm took approximately 1.9 seconds to process each individual image. The computational time required for segmentation is an important consideration, as it directly impacts the efficiency and feasibility of the iris recognition system and the control of the robot. While the segmentation process may be time-consuming, achieving accurate and reliable segmentation results is crucial for subsequent stages of feature extraction, matching, and classification.
Table 1 reports the segmentation sensitivity of the existing methods FAMT [21], FSRA [22], and BWOA [23], and of the bald eagle search (BES) algorithm [18]. As Table 1 shows, the BES algorithm leaves the smallest fraction of incorrectly segmented pixels on every test image.
Indeed, the experimental results indicate that the BES algorithm surpasses existing methods in terms of segmentation accuracy [24]. The optimal segmentation of the two regions is achieved through the BES algorithm.
We calculated the segmentation sensitivity as follows:
$$Sen\% = \frac{N_{pcc}}{M \times N} \times 100$$
where $N_{pcc}$ is the number of correctly classified pixels and $M \times N$ is the size of the image.
The comprehensive analysis conducted in this study involved randomly dividing the 756 images into training and testing datasets. The dataset is partitioned into training and testing subsets at a 4:3 ratio: 432 images are randomly chosen for the training set, and 324 images form the test set. Within the training set, four iris images are selected per subject to facilitate feature extraction.
Depending on the number of images chosen per subject, the training set may contain 108, 216, 324, or 432 images. Importantly, for each individual, irises with matching indices are used in both the training and testing sets to ensure consistency in the evaluation process.
To reduce the dimensionality of the training set, a subset of feature vectors is randomly selected. Feature vectors corresponding to these selected features are then utilized to construct a smaller training set. This approach aims to minimize the computational complexity required by the FKNN classifier, as the reduced feature set results in fewer operations during classification.
Let $X$ be the original feature matrix with dimensions $(N \times N)$, where $N$ is the number of feature vectors, and let $X_{60} = \{x_1, \ldots, x_{60}\}$, $X_{100} = \{x_1, \ldots, x_{100}\}$, and $X_{128} = \{x_1, \ldots, x_{128}\}$ be randomly selected subsets containing 60, 100, and 128 feature vectors, respectively. As depicted in Figure 9, increasing the number of training images improves recognition accuracy. With 432 training images (4 per individual) and 128 feature vectors, the recognition performance peaks at 99.3827%.
We also evaluated the iris recognition rate (IRR) [25], calculated as follows:
$$IRR\% = \frac{TNF - TNFR}{TNF} \times 100$$
where $IRR\%$ is the iris recognition rate, $TNF$ is the total number of test irises, and $TNFR$ is the total number of false recognitions.
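Both evaluation metrics reduce to simple ratios; a sketch:

```python
def segmentation_sensitivity(n_correct_pixels, m, n):
    """Sen% = Npcc / (M x N) x 100."""
    return n_correct_pixels / (m * n) * 100.0

def iris_recognition_rate(total_irises, false_recognitions):
    """IRR% = (TNF - TNFR) / TNF x 100."""
    return (total_irises - false_recognitions) / total_irises * 100.0
```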
Furthermore, numerical comparisons of the Equal Error Rate (EER), Iris Recognition Rate (IRR) [25], True Positive (TP), False Positive (FP), and False Negative (FN) are provided as benchmarks against various contemporary techniques in the iris recognition literature.
Figure 10 presents a numerical comparison of iris recognition utilizing various methods including the Speeded Up Robust Features (SURF) method [10], the Log-Gabor wavelets and the Hamming distance method (LGWHD) [11] and the Local intensity variation method (LIV) [12], on the CASIA database. In this comparison, 57% of samples for each individual are allocated to the training set, while the remaining 43% are utilized in the test set.
Additionally, Figure 11 provides intuitive comparisons between the supervised learning based on matching features (SLMF) [26], the local invariant feature descriptor (LIFD) [27], and the Fourier-SIFT method (FSIFT) [28] in terms of Equal Error Rate (EER) and Iris Recognition Rate (IRR).
As shown in Figure 11, the proposed method achieves an Equal Error Rate (EER) of 0.1381% and an Iris Recognition Rate (IRR) of 99.3827%; in particular, it significantly outperforms the other approaches in this numerical comparison. Furthermore, from Table 2 it is evident that the false positive rate (FPR) is 0.6172%, indicating the higher accuracy achieved by the proposed method.
The computational time required for segmentation and classification is an important consideration, as it directly impacts the efficiency and feasibility of the iris recognition system and of the robot control. On average, the segmentation algorithm required around 1.9 seconds per image, and the classification algorithm took approximately 1.3 seconds for iris recognition and localization of the iris center. These times are crucial for assessing the feasibility of real-time operation in robotics; the relatively short processing times observed in the experiments indicate promising potential for real-time operation, although further optimization may be necessary to achieve faster processing, particularly for applications requiring rapid responses.
In Figure 12, the movement of the TurtleBot 3 robot is determined by the position of the center of the iris relative to the different directional points. In Figure 12(a), the robot moves forward because the distance between the center of the iris and the Middle direction is minimal, indicating that the iris is centered, so the robot moves straight ahead. Figure 12(b) shows the robot moving right: the distance between the center of the iris and the Right direction is minimal, indicating that the iris is off-center to the left from the robot's perspective, so the robot adjusts its trajectory to the right. Finally, in Figure 12(c), the robot moves left because the distance between the center of the iris and the Left direction is minimal, suggesting that the iris is off-center to the right from the robot's viewpoint. In each case, the robot's movement is guided by minimizing the distance between the center of the iris and predetermined directional points, allowing it to navigate in different directions based on the perceived position of the iris.
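As an illustration of the control step, a hypothetical ROS mapping from the estimated gaze direction to TurtleBot 3 velocity commands is sketched below; the topic name, speeds, and direction labels are assumptions for illustration, not details given in the paper.

```python
import rospy
from geometry_msgs.msg import Twist

# Hypothetical (linear m/s, angular rad/s) pairs per gaze direction.
CMD = {"Middle": (0.2, 0.0),   # iris centered -> move forward
       "Right":  (0.0, -0.5),  # gaze right    -> turn right
       "Left":   (0.0, 0.5),   # gaze left     -> turn left
       "Closed": (0.0, 0.0)}   # eyes closed   -> stop

def publish_command(direction, pub):
    msg = Twist()
    msg.linear.x, msg.angular.z = CMD.get(direction, (0.0, 0.0))
    pub.publish(msg)

# Usage (assumed topic name):
# rospy.init_node('iris_gaze_control')
# pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
# publish_command(gaze_direction(cx, cy), pub)
```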

4. Conclusions

This work introduces a novel approach to human iris recognition, integrating advanced segmentation techniques with a fuzzy classification algorithm. The method comprises two primary phases. In the initial phase, fast gradient filters using a fuzzy inference system (FIS) are applied to precisely localize the iris within the original image. This crucial step, fundamental for matching accuracy, primarily focuses on identifying the outer boundaries of the iris. Subsequently, efficient segmentation of these localized regions is achieved using the bald eagle search (BES) algorithm. This segmentation process enhances the delineation of iris regions, facilitating the subsequent extraction of the essential iris characteristics crucial for representation and identification.
In addition, the Fuzzy KNN algorithm is applied for the matching process. This algorithm is tailored to leverage the features extracted by the integrated DWT and PCA methods, enhancing its efficacy in classifying iris patterns and ultimately enabling accurate identification of individuals. By integrating these methodologies, the proposed approach achieves robust and precise iris recognition. The centroid position of the iris is then employed to issue commands for controlling a robot. This innovative approach harnesses iris movement as a form of communication and control, presenting a promising breakthrough in assisting individuals with physical disabilities.
The localization phase ensures accurate identification of iris boundaries, while the segmentation and feature-analysis phases enable the extraction of discriminative iris features. Finally, the Fuzzy KNN algorithm enhances classification efficiency, contributing to reliable identification outcomes. Evaluation and testing on the CASIA database validate the tool and confirm that it achieves its aim of recognizing the human iris. Furthermore, future work will include data fusion techniques to fuse and aggregate data from different information sources, such as iris and fingerprint biometrics.

References

  1. C. Otti, “Comparison of biometric identification methods,” in Int. Symp. on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, pp. 1–7, May 2016.
  2. Sumalatha and A. Bhujanga Rao, “Novel method of system identification,” in Int. Conf. on Electrical, Electronics, and Optimization Techniques (ICEEOT), Chennai, India, 2016.
  3. D. Byron, A. M. Kiefer, J. Thomas, S. Patel, A. Jenkins et al., “Correction to: The authentication and repatriation of a ceremonial tsantsa to its country of origin (Ecuador),” Heritage Science, vol. 9, no. 60, pp. 1–10, 2021. [CrossRef]
  4. M. Ortega, M. G. Penedo, J. Rouco, N. Barreira and M. J. Carreira, “Retinal verification using a feature points-based biometric pattern,” EURASIP Journal on Advances in Signal Processing, vol. 9, no. 23, pp. 1–11, 2009.
  5. J. R. Malgheet, N. B. Manshor and L. S. Affendey, “Iris recognition development techniques: A comprehensive review,” Complexity Journal, vol. 2021, pp. 1–32, 2021.
  6. J. G. Daugman, “How iris recognition works,” IEEE Trans. Circ. Syst. Video Technol., vol. 14, no. 1, pp. 21–30, 2004.
  7. F. M. Alkoot, “A review on advances in iris recognition methods,” Int. J. Comput. Eng. Res, vol. 3, no. 1, pp. 1–9, 2012. [CrossRef]
  8. F. Alonso-Fernandez, P. Tome-Gonzalez, V. Ruiz-Albacete and J. Ortega-Garcia, “Iris recognition based on SIFT features,” in Int. Conf. on Biometrics, Identity and Security (BIdS), Paris, France, 2009. [CrossRef]
  9. H. Mehrotra, P. K. Sa and B. Majhi, “Fast segmentation and adaptive SURF descriptor for iris recognition,” Math. Comput. Model, vol. 58, no. 1–2, pp. 132–146, 2013.
  10. I. Ismail, H. S. Ali and F. A. Farag, “Efficient enhancement and matching for iris recognition using SURF,” in National Symp. on Information Technology: Towards New Smart World (NSITNSW), Riyadh, Saudi Arabia, pp. 1–5, 2015.
  11. L. Masek, “Recognition of human iris patterns for biometric identification,” The University of Western Australia, School of Computer Science and Software Engineering, vol. 4, no. 1, pp. 1–56, 2003.
  12. L. Ma, T. Tan, Y. Wang and D. Zhang, “Local intensity variation analysis for iris recognition,” Pattern Recogn., vol. 37, no. 6, pp. 1287–1298, 2004. [CrossRef]
  13. K. Saminathan, D. Chithra and T. Chakravarthy, “Pair of iris recognition for personal identification using artificial neural networks,” International Journal of Computer Science Issues (IJCSI), vol. 9, no. 1, pp. 324–327, 2012.
  14. R. Abiyev and K. Altunkaya, “Personal iris recognition using neural networks,” International Journal of Security and its Applications (IJSIA), vol. 2, no. 2, pp. 41–50, 2008.
  15. V. Vytautas and A. Bulling, “Eye gesture recognition on portable devices,” in Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 711–714, 2012.
  16. S. V. Sheela and P. A. Vijaya, “Iris recognition methods-survey,” Int. J. Comput. Appl., vol. 3, no. 5, pp. 19–25, 2010. [CrossRef]
  17. R. Ranjan and V. Avasthi, ‘‘Edge Detection in Digital Images Through Fast Gradient Filters using Fuzzy Inference System’’, IEEE World Conference on Applied Intelligence and Computing (AIC), 2022.
  18. N. F. Nicaire, P. N. Steve, N. E. Salome, and A. O. Grégroire, ‘‘Parameter estimation of the photovoltaic system using bald eagle search (BES) algorithm’’, International Journal of Photoenergy, pp. 1-20, 2021. [CrossRef]
  19. K. Rana, M.S. Azam, M.R. Akhtar, J.M. Quinn and M. A. Moni, ‘‘A fast iris recognition system through optimum feature extraction’’, PeerJ Computer Science, 5, e184, 2019. [CrossRef]
  20. Q. Zhang, J. Sheng, Q. Zhang, L. Wang, Z. Yang, and Y. Xin, ‘‘Enhanced Harris hawks optimization-based fuzzy k-nearest neighbor algorithm for diagnosis of Alzheimer’s disease’’, Computers in Biology and Medicine, 165, 107392, 2023. [CrossRef]
  21. P. S. Liao, T. S. Chen and P. C. Chung, “A fast algorithm for multi-level thresholding,” Journal of Information Science and Engineering, vol. 17, no. 5, pp. 713–727, 2001.
  22. S. Arora, J. Acharya, A. Verma and P. K. Panigrahi, “Multilevel thresholding for image segmentation through a fast statistical recursive algorithm,” Pattern Recognition Letters, vol. 29, pp. 119–125, 2008. [CrossRef]
  23. Huang and C. Wang, “Optimal multi-level thresholding using a two-stage Otsu optimization approach,” Pattern Recognition Letters, vol. 30, no. 3, pp. 275–284, 2009.
  24. Z. Mbarki, H. Seddik and E. Ben Braiek, ‘‘A rapid hybrid algorithm for image restoration combining parametric Wiener filtering and wave atom transform’’, Journal of Visual Communication and Image Representation, 40, pp.694-707, 2016. [CrossRef]
  25. W. She, M. Surette and R. Khanna, “Evaluation of automated biometrics-based identification and verification systems,” Proceeding IEEE, vol. 85, no. 9, pp. 1464–1478, 1997. [CrossRef]
  26. Hernandez-Garcia, A. Martin-Gonzalez and R. Legarda-Saenz, “Iris recognition using supervised learning based on matching features,” in Int. Symp. on Intelligent Computing Systems: Intelligent Computing Systems, Universidad de Chile, Santiago, Chile, pp. 44–56, 2022.
  27. Q. Jin, X. Tong, P. Ma and S. Bo, “Iris recognition by new local invariant feature descriptor,” Journal of Computational Information Systems, vol. 9, no. 5, pp. 1943–1948, 2013.
  28. A. Kumar and B. Majhi, “Isometric efficient and accurate Fourier-SIFT method in iris recognition system,” in Int. Conf. on Communications and Signal Processing (ICCSP), Sharjah, United Arab Emirates, pp. 809–813, 2013.
Figure 1. Control cycle of a robot based on iris recognition.
Figure 2. Image templates for different gaze directions taken during initialization. During operation, the current eye image is matched against these templates to estimate gaze direction [15].
Figure 3. The proposed iris recognition method and finding the centroid localization, (a) Sample image of CASIA Iris-M1-S2, (b) Localization of the iris, (c) Iris segmentation, (d) Iris Detection, and (e) Finding centroid Location.
Figure 4. The diagram of the proposed method.
Figure 5. The localization of iris. (a) Original image (Human eye), (b) Edge detection through fast gradient filters using a fuzzy inference system (FIS), (c) The localization of iris (d) Segmented image (2 regions: Iris and background) using bald eagle search (BES) algorithm.
Figure 6. The coordinates representing each direction, (a) the iris center coordinates $(C_x, C_y)$, (b) the Middle direction, (c) the Right direction, and (d) the Left direction.
Figure 7. Image segmentation. (a) Original image. (b) Iris localization. (c) Segmented image (3 regions: Iris, Pupil, and background). (d) Segmented image (2 regions: Iris and background).
Figure 8. Example of irises of the human eye. Twelve were selected for a comparison study. The patterns are numbered from 1 through 12, starting at the upper left-hand corner. Image is from CASIA iris database (Fernando Alonso-Fernandez, 2009).
Figure 9. The proposed method rates of iris recognition based on the number of training images and feature vectors dimension.
Figure 10. Iris recognition evaluation results, (a) True positive, (b) False positive, (c) False negative, and (d) Iris recognition rate.
Figure 11. The recognition performance of the supervised learning based on matching features (SLMF) [26], the local invariant feature descriptor (LIFD) [27], the Fourier-SIFT method (FSIFT) [28] and the proposed method on the CASIA iris database, (a) EER (%), and (b) IRR (%).
Figure 12. The movement of the robot in different directions, (a) Moving forward, (b) Moving right, and (c) Moving left.
Table 1. Segmentation sensitivity from FAMT [21], FSRA [22], BWOA [23] and BES [18] for the data set shown in Figure 8.
Image    FAMT (%)    FSRA (%)    BWOA (%)    BES (%)
Image 1 97.6642 96.7067 97.7610 98.7395
Image 2 96.6461 95.6986 96.7419 97.7102
Image 3 97.7018 96.7440 97.7987 98.7775
Image 4 98.2627 97.2994 98.3601 99.5444
Image 5 95.7377 94.7991 95.8326 96.7917
Image 6 96.6490 95.7014 96.7447 97.7131
Image 7 96.5125 96.0371 96.6082 97.5751
Image 8 97.7358 97.2543 97.8327 98.8119
Image 9 98.6629 98.1769 97.7915 99.7492
Image 10 95.5424 95.0718 94.6986 96.5943
Image 11 95.6187 95.1476 94.7741 96.6714
Image 12 96.6222 96.1463 95.7688 97.6860
Table 2. The iris recognition performance of different approaches on CASIA database.
Methods EER (%) IRR (%)
SLMF method [26] 0.1782 98.9438
LIFD method [27] 0.4324 97.6227
FSIFT method [28] 0.2956 98.6356
Proposed iris recognition method 0.1381 99.3827
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.