1. Introduction
The constant increase in requirements for image processing and the growing complexity of the tasks to be solved have made pattern recognition one of the major research trends for decades. If at the dawn of computer processing the task was to highlight an image on a binary field, it is now necessary not only to perform simple semantic segmentation but also to solve 3D pattern recognition tasks [1], smart city tasks [2], or medical tasks [3], which remain only partially solved. This study examines the behavior of adaptive systems in order to assess the efficacy of diverse clustering methods in image segmentation for automated object extraction under varying lighting conditions. This is crucial because the visual data processed by computers is pivotal for tasks such as classification, identification, and verification. Segmentation is therefore an important stage of image processing; it consists of dividing the image into separate segments or regions corresponding to different objects in the image. Clustering methods can be used for the automatic identification of object groups in images based on their properties, such as color, texture, and shape [4].
In the prior studies on which this work builds [5,6], one of the authors examined the evaluation of line detection methods across varied lighting conditions. In [5], an external sensor was used to estimate the illuminance in lux. In this study, the authors introduce an integrated approach in which the data collected by the camera simultaneously serves as the source of the lux readings. This removes the dependence on an external sensor and allows the lighting-adaptation procedure to be integrated seamlessly into the objective assessment. While [5] employed MSE/PSNR for evaluation, this study adopts the Structural Similarity Index (SSIM) as a more precise metric. SSIM evaluates the similarity between images by considering brightness, color, and structural differences [7]. SSIM values range from -1 to 1, with 1 indicating identical images and -1 indicating complete dissimilarity; typically, an SSIM above 0.9 signifies high image similarity. SSIM computes three components: luminance, contrast, and structure. Luminance assesses the overall brightness of the image, contrast measures the variation across the image, and structure accounts for the interaction between different parts of the image. By comparing histograms of these components, SSIM identifies the image regions with the greatest disparities. In [5], an adaptive approach to line recognition was implemented, while this study focuses on evaluating common clustering methods. The authors used SSIM instead of artificial neural networks [8] because the main goal of the work is a comparative evaluation of clustering methods; using a neural network [9] as a classifier would necessitate training and stability considerations.
The following clustering methods were selected for the study:
K-Means
K-Medoids
Fuzzy C-Means
Possibilistic C-Means
Fuzzy Possibilistic C-Means
Possibilistic Fuzzy C-Means
Gustafson-Kessel
Entropy-based Fuzzy
Ridler-Calvard
Kohonen Self-Organizing Maps
MeanShift
Clustering is the process of grouping objects into subsets or clusters based on similarities between them. Each cluster contains objects that are more similar to each other than to objects from other clusters. Similarity can be determined using various metrics such as Euclidean distance, cosine similarity, correlation, etc.
2. Materials and Methods
The purpose of the work is to develop an adaptive system for choosing segmentation methods depending on external conditions (in particular, the level of illumination of the field of attention).
Experimental design:
1. Creating a database of benchmarks for 9 classes using the global threshold method. The benchmarks are intended to approximate human vision, since the threshold is chosen by a person based on a subjective assessment of the result.
2. Objective assessment of the effectiveness of the clustering methods under the given conditions. The clustering methods are used to extract objects from images, and their effectiveness is assessed by computing the difference between the extracted objects and the benchmarks using SSIM (a minimal sketch of this step follows the list). Note that the results are averaged taking into account the accuracy and the number of detected objects.
3. The results in the tables are averaged over three experiments (3 consecutive frames are evaluated independently and the result is the average value). There are no significant jumps in values between individual experiments, so no single experiment dominates the selection of the final winning method.
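A minimal sketch of the SSIM comparison in step 2 is shown below. It assumes that a benchmark and an extracted object are stored as same-sized grayscale images (the file names are hypothetical placeholders) and uses the SSIM implementation from scikit-image rather than any project-specific code.

```python
# Minimal sketch of the SSIM-based evaluation step, assuming the benchmark
# and the extracted object are available as same-sized grayscale images.
# File names are hypothetical placeholders.
import cv2
from skimage.metrics import structural_similarity as ssim

benchmark = cv2.imread("benchmark_class1.png", cv2.IMREAD_GRAYSCALE)
extracted = cv2.imread("extracted_class1_100lux.png", cv2.IMREAD_GRAYSCALE)

# SSIM close to 1.0 means the extracted object matches the benchmark closely.
score = ssim(benchmark, extracted)
print(f"SSIM = {score:.3f}")
```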
2.1. K-Means
The k-means clustering algorithm is one of the simplest unsupervised learning algorithms for solving the well-known clustering problem. The term "k-means" was first used by James MacQueen in 1967 [1], building on an idea proposed by Hugo Steinhaus in 1957 [2].
Let $X = \{x_1, x_2, \ldots, x_n\}$ be the set of data points and $V = \{v_1, v_2, \ldots, v_k\}$ be the set of cluster centers. The k-means clustering algorithm attempts to partition (or cluster) the n data points into k disjoint subsets $S_j$ so as to minimize the sum-of-squares criterion:

$$J = \sum_{j=1}^{k} \sum_{x_i \in S_j} \left\| x_i - v_j \right\|^2,$$

where $x_i$ is a sample in the data set and $v_j$ is the geometric centroid of the data points in cluster $S_j$. Clustering is performed by minimizing the sum of squared distances between the data points and the corresponding cluster center.
An implementation from the opencv-python library was used for the experiment [3].
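As an illustration of how such an experiment can be set up, the following sketch clusters the pixel colors of a frame with OpenCV's k-means; the file name, number of clusters, and termination criteria are illustrative assumptions, not the exact settings used in the experiments.

```python
# A minimal sketch of color-based image segmentation with OpenCV's k-means.
import cv2
import numpy as np

img = cv2.imread("scene.png")                       # hypothetical input frame
pixels = img.reshape(-1, 3).astype(np.float32)      # one BGR sample per pixel

k = 2                                               # e.g. object vs. background
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.2)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# Replace every pixel with its cluster center to obtain the segmented image.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("segmented.png", segmented)
```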
2.2. K-Medoids
The k-medoids algorithm, introduced by Leonard Kaufman and Peter Rousseeuw along with their PAM algorithm [4], is a clustering technique akin to the k-means method. Both algorithms partition the dataset into groups while aiming to minimize the distance between the data points assigned to a cluster and the designated center of that cluster. However, there are notable distinctions between them.
Unlike the k-means algorithm, which takes the average of the points within a cluster as its center, k-medoids selects actual data points (referred to as medoids) as the cluster centers. This makes the cluster centers more interpretable, since they directly correspond to existing data points. Furthermore, k-medoids can use arbitrary distance measures, whereas k-means relies on the Euclidean distance for an efficient solution.
One advantageous aspect of the k-medoids algorithm is its robustness to noise and outliers. By minimizing the sum of pairwise dissimilarities rather than the sum of squared Euclidean distances, k-medoids is more resilient to aberrant data points. This sets it apart from k-means and makes it a valuable tool in scenarios where noise and outliers are prevalent.
An implementation from the scikit-learn-extra library was used for the experiment [5].
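A comparable sketch with the KMedoids estimator from scikit-learn-extra is given below; the pixel preparation mirrors the k-means sketch and the parameters are again illustrative. Because k-medoids is considerably more expensive than k-means, the sketch fits the model on a pixel subsample and then assigns every pixel to its nearest medoid.

```python
# A minimal sketch of color segmentation with k-medoids (scikit-learn-extra).
import cv2
import numpy as np
from sklearn_extra.cluster import KMedoids

img = cv2.imread("scene.png")                              # hypothetical input frame
pixels = img.reshape(-1, 3).astype(np.float32)

# Fit on a random subsample of pixels, then assign all pixels to medoids.
rng = np.random.default_rng(0)
sample = pixels[rng.choice(len(pixels), size=5000, replace=False)]

kmedoids = KMedoids(n_clusters=2, metric="euclidean", random_state=0).fit(sample)
labels = kmedoids.predict(pixels)

# Medoids are actual pixels, which keeps the cluster centers interpretable.
segmented = kmedoids.cluster_centers_[labels].astype(np.uint8).reshape(img.shape)
```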
2.3. Fuzzy C-Means (FCM)
The FCM algorithm belongs to the family of fuzzy (soft) clustering methods, a form of clustering in which each data point can belong to more than one cluster.
Fuzzy c-means clustering was developed by James Dunn in 1973 [6] and improved by James Bezdek in 1981 [7].
Suppose that it is necessary to cluster n m-dimensional data points $x_i$ (i = 1, 2, ..., n). The algorithm returns a list of c cluster centers $C = \{c_1, \ldots, c_c\}$ and a partition matrix $U = [u_{ij}]$, i = 1, ..., n, j = 1, ..., c, where $u_{ij}$ indicates the degree of belonging of element $x_i$ to cluster $c_j$. Here $u_{ij} \in [0, 1]$ and $\sum_{j=1}^{c} u_{ij} = 1$. The FCM algorithm aims to minimize the following objective function:

$$J_m = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \left\| x_i - c_j \right\|^2,$$

where m is the fuzziness parameter of the partition matrix.
An implementation from the fuzzy-c-means library was used for the experiment [8].
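The following sketch shows how the fuzzy-c-means package (imported as fcmeans) can be applied to the same pixel data; the attribute and method names follow that package and may differ slightly between versions.

```python
# A minimal sketch of fuzzy c-means segmentation using the fcmeans package.
import cv2
import numpy as np
from fcmeans import FCM

img = cv2.imread("scene.png")                              # hypothetical input frame
pixels = img.reshape(-1, 3).astype(np.float64)

fcm = FCM(n_clusters=2, m=2.0)                             # m is the fuzziness exponent
fcm.fit(pixels)

labels = fcm.predict(pixels)                               # hardened labels from the fuzzy partition
segmented = fcm.centers[labels].astype(np.uint8).reshape(img.shape)
```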
2.4. Possibilistic C-Means (PCM)
To reduce the influence of outliers, another clustering technique, called PCM, was proposed by Krishnapuram and Keller (1993) [9]. In contrast to the FCM algorithm, the membership value generated by the PCM algorithm can be interpreted as "the degree of membership or compatibility or typicality" (Krishnapuram and Keller, 1993). Degrees of typicality are determined to construct prototypes that characterize subcategories of the data, taking into account both the common features of category members and their distinctive features compared to other categories. Typicality values with respect to one cluster do not depend on the prototypes of the other clusters. The degree of typicality helps distinguish between a very atypical and a partially atypical member of a cluster [10].
The PCM algorithm relaxes the row-sum constraint of the FCM algorithm. The only remaining constraint is that each membership value in U lies between 0 and 1 inclusive, i.e., $0 \le u_{ij} \le 1$; these values are called the typicalities of the data points with respect to each cluster. The objective function of the PCM algorithm can be formulated as follows:

$$J_m(U, V) = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \, d^2(x_i, v_j) + \sum_{j=1}^{c} \eta_j \sum_{i=1}^{n} \left( 1 - u_{ij} \right)^{m},$$

where n is the total number of samples in the given data set; c is the number of clusters; m is a parameter that determines the degree of blurring of the partition; $d(x_i, v_j)$ is the distance between sample $x_i$ and prototype $v_j$; and $U = [u_{ij}]$ is the fuzzy partition matrix.

The quantity $\eta_j$ is called the scale or typicality parameter and is calculated from the data with the following formula:

$$\eta_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} \, d^2(x_i, v_j)}{\sum_{i=1}^{n} u_{ij}^{m}},$$

where n is the total number of samples in the given data set; m ∈ [1, ∞) is a parameter that determines the degree of blurring of the partition; $x_i$ and $v_j$ are the data samples and cluster centroids; and $U = [u_{ij}]$ is the fuzzy partition matrix consisting of the degrees of membership of sample $x_i$ to each cluster j.

In the PCM algorithm, the membership value $u_{ij}$ is calculated from the following formula:

$$u_{ij} = \frac{1}{1 + \left( \dfrac{d^2(x_i, v_j)}{\eta_j} \right)^{1/(m-1)}},$$

where $d(x_i, v_j)$ is the distance and $\eta_j$ is the scale parameter.
An implementation from the scikit-c-means library was used for the experiment [11].
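Since the update equations above fully specify the iteration, a compact from-scratch NumPy sketch of PCM is given below for illustration; the initialization and stopping rule are simplified, and the experiments themselves relied on the scikit-c-means implementation.

```python
# A from-scratch NumPy sketch of the PCM update equations given above.
import numpy as np

def pcm(X, c, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, c, replace=False)]          # initial cluster prototypes
    U = rng.random((n, c))                          # initial typicality values

    # Squared distances and scale parameters eta_j from the initial partition.
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    eta = (U ** m * d2).sum(axis=0) / (U ** m).sum(axis=0)

    for _ in range(n_iter):
        # Typicality update: u_ij = 1 / (1 + (d_ij^2 / eta_j)^(1/(m-1)))
        U = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
        # Prototype update: weighted mean of the samples.
        V = (U ** m).T @ X / (U ** m).sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return U, V
```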
2.5. Possibilistic Fuzzy C-Means (PFCM)
To obtain a stronger candidate for fuzzy clustering, Pal, Pal, Keller, and Bezdek proposed the PFCM algorithm in 2005 [12]. The PFCM algorithm can avoid coincident clusters and at the same time is less sensitive to outliers (Pal et al., 2005). The PFCM algorithm uses a combination of the objective functions of the PCM and FCM algorithms. The objective function of the PFCM algorithm is:

$$J_{m,\eta}(U, T, V; X) = \sum_{i=1}^{n} \sum_{j=1}^{c} \left( a\, u_{ij}^{m} + b\, t_{ij}^{\eta} \right) \left\| x_i - v_j \right\|^2 + \sum_{j=1}^{c} \gamma_j \sum_{i=1}^{n} \left( 1 - t_{ij} \right)^{\eta}.$$

The relative significance of the membership values and the typicality values is determined by the parameters a and b (Timm et al., 2004) [13]. The objective function is minimized subject to $\sum_{j=1}^{c} u_{ij} = 1$ for all i and $0 \le u_{ij}, t_{ij} \le 1$, with a, b > 0, m, η > 1, and X containing at least c distinct data points.

The degree of membership is updated according to the following formula:

$$u_{ij} = \left( \sum_{k=1}^{c} \left( \frac{\left\| x_i - v_j \right\|}{\left\| x_i - v_k \right\|} \right)^{2/(m-1)} \right)^{-1}.$$

The typicality value is updated according to the following formula:

$$t_{ij} = \frac{1}{1 + \left( \dfrac{b}{\gamma_j} \left\| x_i - v_j \right\|^2 \right)^{1/(\eta - 1)}}.$$

The prototypes are updated according to the following formula:

$$v_j = \frac{\sum_{i=1}^{n} \left( a\, u_{ij}^{m} + b\, t_{ij}^{\eta} \right) x_i}{\sum_{i=1}^{n} \left( a\, u_{ij}^{m} + b\, t_{ij}^{\eta} \right)}.$$
The implementation presented in [14] was used for the experiment.
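For illustration, a from-scratch NumPy sketch of a single PFCM iteration, following the update rules reconstructed above, is shown below; the values of a, b, m, η and γ_j are illustrative, and the experiments used the implementation referenced in [14].

```python
# A NumPy sketch of one PFCM iteration (illustrative parameters and inputs).
import numpy as np

def pfcm_step(X, V, gamma, a=1.0, b=1.0, m=2.0, eta=2.0, eps=1e-12):
    # Squared distances between every sample and every prototype.
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps

    # Membership update (FCM-like, based on relative distances).
    ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
    U = 1.0 / ratio.sum(axis=2)

    # Typicality update (PCM-like, independent of the other clusters).
    T = 1.0 / (1.0 + (b * d2 / gamma) ** (1.0 / (eta - 1.0)))

    # Prototype update: combination of memberships and typicalities.
    W = a * U ** m + b * T ** eta
    V_new = W.T @ X / W.sum(axis=0)[:, None]
    return U, T, V_new
```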
2.6. Fuzzy Possibilistic C-Means (FPCM)
Fuzzy Possibilistic c-Means (FPCM) is an extension of the classic Fuzzy c-Means (FCM) clustering algorithm. Like FCM, FPCM is a soft clustering algorithm that assigns each data point to several clusters with different degrees of membership. However, unlike FCM, FPCM takes additional uncertainty in the clustering process into account by introducing a possibilistic term into the objective function.
In FPCM, each data point is represented by a vector of membership values, where each value reflects the degree to which the point belongs to a certain cluster. The possibilistic term of the objective function allows a data point to belong to a cluster not with absolute certainty but with some degree of typicality. This allows FPCM to handle noise and outliers in the data better than FCM.
The objective function of the FPCM algorithm includes both degrees of membership and typicality, as shown in the following equation:

$$J_{m,\eta}(U, T, V; X) = \sum_{i=1}^{n} \sum_{j=1}^{c} \left( u_{ij}^{m} + t_{ij}^{\eta} \right) \left\| x_i - v_j \right\|^2,$$

provided that

$$\sum_{j=1}^{c} u_{ij} = 1 \;\; \forall i, \qquad \sum_{i=1}^{n} t_{ij} = 1 \;\; \forall j,$$

where m and η are the exponents of fuzziness and typicality. Taking into account the given constraints and the optimization conditions of c-means, the theorem of Lagrange multipliers yields the following necessary conditions (extrema) of the objective function:

$$u_{ij} = \left( \sum_{k=1}^{c} \left( \frac{\left\| x_i - v_j \right\|}{\left\| x_i - v_k \right\|} \right)^{2/(m-1)} \right)^{-1}, \qquad
t_{ij} = \left( \sum_{l=1}^{n} \left( \frac{\left\| x_i - v_j \right\|}{\left\| x_l - v_j \right\|} \right)^{2/(\eta-1)} \right)^{-1}, \qquad
v_j = \frac{\sum_{i=1}^{n} \left( u_{ij}^{m} + t_{ij}^{\eta} \right) x_i}{\sum_{i=1}^{n} \left( u_{ij}^{m} + t_{ij}^{\eta} \right)}.$$
2.7. Gustafson-Kessel (GK)
The Gustafson-Kessel (GK) algorithm is a clustering algorithm that extends the well-known fuzzy c-means (FCM) algorithm to handle data with different cluster shapes and sizes. It was proposed by Gustafson and Kessel in 1979 [15].
The algorithm returns a list of k clusters with centers $V = \{v_1, \ldots, v_k\}$. The main feature of the GK algorithm is the local adaptation of the distance metric to the cluster shape by estimating the cluster covariance matrix and adapting the distance norm accordingly. The objective function of the GK algorithm is defined as

$$J(X; U, V, \{A_j\}) = \sum_{i=1}^{n} \sum_{j=1}^{k} u_{ij}^{m} \, d^2_{A_j}(x_i, v_j).$$

In this algorithm, each cluster is associated with its own norm-inducing matrix $A_j$. The matrices $A_j$ are used as optimization variables in the c-means functional, thus allowing each cluster to adapt the distance norm to the local topological structure of the data. The distance between data point $x_i$ and cluster center $v_j$ is

$$d^2_{A_j}(x_i, v_j) = (x_i - v_j)^{T} A_j (x_i - v_j).$$

This objective function cannot be directly minimized with respect to $A_j$, because it is linear in $A_j$. To obtain a feasible solution, $A_j$ must be bounded in some way. A common way to achieve this is to constrain the determinant of $A_j$:

$$\det(A_j) = \rho_j, \qquad \rho_j > 0.$$

The coefficient $\rho_j$ determines the volume of the individual cluster (if nothing is known about the problem, one can assume $\rho_j = 1$). Using the method of Lagrange multipliers, the following expression for $A_j$ is obtained:

$$A_j = \left[ \rho_j \det(F_j) \right]^{1/d} F_j^{-1},$$

where d is the dimensionality of the data and $F_j$, the so-called fuzzy covariance matrix of the j-th cluster, is obtained from the formula:

$$F_j = \frac{\sum_{i=1}^{n} u_{ij}^{m} (x_i - v_j)(x_i - v_j)^{T}}{\sum_{i=1}^{n} u_{ij}^{m}}.$$
The initialization of the algorithm requires the definition of the same parameters as in the FCM algorithm. The GK algorithm finds clusters of any shape but requires more calculations than the FCM algorithm due to the need to calculate the determinant and the inverse matrix at each iteration.
The implementation presented in [16] was used for the experiment.
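The cluster-specific distance is the distinctive part of the GK algorithm, so the following NumPy sketch computes the fuzzy covariance matrices F_j, the induced norm matrices A_j (with ρ_j = 1, as suggested above), and the resulting distances. It is an illustrative sketch, not the implementation from [16] used in the experiments.

```python
# A NumPy sketch of the Gustafson-Kessel cluster-adaptive distance computation.
import numpy as np

def gk_distances(X, V, U, m=2.0, rho=1.0):
    n, dim = X.shape
    c = V.shape[0]
    d2 = np.empty((n, c))
    for j in range(c):
        diff = X - V[j]                               # deviations from prototype j
        w = U[:, j] ** m                              # fuzzy weights of cluster j
        F = (w[:, None] * diff).T @ diff / w.sum()    # fuzzy covariance matrix F_j
        A = (rho * np.linalg.det(F)) ** (1.0 / dim) * np.linalg.inv(F)
        # (x_i - v_j)^T A_j (x_i - v_j) for every sample i
        d2[:, j] = np.einsum("nd,de,ne->n", diff, A, diff)
    return d2
```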
2.8. Entropy-Based Fuzzy (EBF)
Yao et al. presented an entropy-based fuzzy clustering algorithm in 2000 [17]. In this algorithm, the entropy values of the data points are first calculated. Then the data point with the minimum entropy value is selected as the center of the cluster. Data points that are not chosen for any of the clusters are called outliers. Consider a set X of N data points in an M-dimensional hyperspace, where each data point $x_i$ is represented by a set of M values (i.e., $x_i = \{x_{i1}, x_{i2}, \ldots, x_{iM}\}$). Thus, the data set can be represented by an N × M matrix. The values of each dimension are normalized to the range [0.0, 1.0]. The Euclidean distance between any two data points (for example, i and j) is defined as follows:

$$d_{ij} = \sqrt{ \sum_{k=1}^{M} \left( x_{ik} - x_{jk} \right)^2 }.$$
The entropy value between two data points is in the range [0.0 – 1.0]. It is very small (close to 0.0) for very close or very distant pairs of data points and very high (close to 1.0) for those data points separated by a distance close to the average distance of all pairs of data points.
The total entropy value of data point $x_i$ relative to all other data points is calculated as

$$E_i = - \sum_{\substack{j=1 \\ j \neq i}}^{N} \left( S_{ij} \log_2 S_{ij} + \left( 1 - S_{ij} \right) \log_2 \left( 1 - S_{ij} \right) \right),$$

where $S_{ij}$ is the similarity between $x_i$ and $x_j$, normalized to the interval [0.0, 1.0]. During clustering, the data point with the minimum entropy value is selected as the center of the cluster. The similarity between any two points (i.e., i and j) is calculated as follows:

$$S_{ij} = e^{-\alpha d_{ij}},$$

where α is a numerical constant. Experiments with different values of α show that it should be robust for all types of data sets, not just for particular data sets. The value of α is calculated based on the assumption that the similarity $S_{ij}$ equals 0.5 when the distance $d_{ij}$ between two data points is equal to the mean distance $\bar{d}$ over all pairs of data points, which is calculated as

$$\bar{d} = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} d_{ij}.$$

From this assumption, α can be calculated as

$$\alpha = \frac{-\ln 0.5}{\bar{d}}.$$

Thus, α is determined by the data and can be calculated automatically.
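A NumPy sketch of the entropy computation described above is given below: it derives α automatically from the mean pairwise distance, builds the similarity matrix S, and returns the per-point entropy values E_i (the point with the smallest value would be taken as the next cluster center).

```python
# A NumPy sketch of the entropy-based fuzzy clustering entropy computation.
import numpy as np

def entropy_values(X):
    # Pairwise Euclidean distances (features assumed normalized to [0, 1]).
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))

    n = X.shape[0]
    mean_dist = dist[np.triu_indices(n, k=1)].mean()  # average distance over all pairs
    alpha = -np.log(0.5) / mean_dist                  # so that S = 0.5 at the mean distance

    S = np.exp(-alpha * dist)
    np.fill_diagonal(S, np.nan)                       # exclude each point from its own sum
    S = np.clip(S, 1e-12, 1 - 1e-12)                  # avoid log(0)

    # E_i = -sum_j [ S_ij log2 S_ij + (1 - S_ij) log2 (1 - S_ij) ]
    E = -np.nansum(S * np.log2(S) + (1 - S) * np.log2(1 - S), axis=1)
    return E
```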
2.9. Ridler-Calvard (RC)
The Ridler-Calvard method [18] is a method for determining the threshold value of an image, i.e., for converting a grayscale image into a binary image by dividing its pixels into two groups: those that exceed a certain threshold value and those that fall below it.
The method is based on the idea of finding a threshold that separates the two groups of pixels as well as possible: the optimal threshold is the one that lies exactly midway between the mean gray levels of the two groups.
The Ridler-Calvard method begins by assuming an initial threshold value and computing the mean values of the pixels above and below the threshold. It then iteratively updates the threshold from these means until the change between consecutive thresholds becomes negligible.
The foreground and background cluster means, $m_f$ and $m_b$, are defined mathematically as:

$$m_f(t) = \frac{\sum_{g=t+1}^{G} g \, p(g)}{\sum_{g=t+1}^{G} p(g)}, \qquad m_b(t) = \frac{\sum_{g=0}^{t} g \, p(g)}{\sum_{g=0}^{t} p(g)},$$

where g denotes the gray-level values, t is the current threshold, G is the maximum gray level, and p(g) is the gray-level probability mass function (PMF) of g. The PMF is calculated from the image histogram by normalizing it by the total number of samples. The new threshold $t_{k+1}$ is calculated by averaging $m_f$ and $m_b$:

$$t_{k+1} = \frac{m_f(t_k) + m_b(t_k)}{2}.$$

These operations are repeated until the change in the threshold, $|t_{k+1} - t_k|$, is less than a given value ε.
An implementation from the Mahotas library was used for the experiment [19].
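A minimal sketch using Mahotas is shown below; it assumes the rc entry point of the Mahotas thresholding module and a hypothetical grayscale input frame.

```python
# A minimal sketch of Ridler-Calvard thresholding with Mahotas.
import cv2
import mahotas

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input frame

threshold = mahotas.rc(img)                           # iteratively refined threshold
binary = img > threshold                              # boolean foreground mask
```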
2.10. Kohonen Self-Organizing Maps (SOM)
The Self-Organizing Map (SOM) is a specific type of artificial neural network, proposed by Teuvo Kohonen [20], that differs from other neural networks in its training approach. Instead of employing error-correcting learning methods such as backpropagation with gradient descent, SOM utilizes competitive learning.
Similar to most artificial neural networks, self-organizing maps operate in two distinct modes: learning and mapping. During the learning phase, a set of input data, known as the “input space,” is utilized to construct a reduced-dimensional representation called the “map space.” This mapping process enables the classification of additional input data using the generated map.
The map space is composed of components referred to as “nodes” or “neurons,” arranged in a two-dimensional hexagonal or rectangular grid. The number and specific locations of these nodes are predetermined based on the desired objectives of the data analysis and research.
Each node in the map space is associated with a “weight” vector, representing its position in the input space. While the nodes in the map space remain fixed, the learning process entails adjusting the weight vectors towards the input data, typically by reducing a distance metric like Euclidean distance. Importantly, this adjustment must not disrupt the topology established by the map space.
Following the training phase, the map can be employed to classify additional observations from the input space. This is achieved by identifying the node with the closest weight vector (i.e., the smallest distance metric) to the input space vector.
The primary objective of self-organizing map learning is to induce similar responses to specific input patterns across different parts of the network. This phenomenon partly mirrors the processing of visual, auditory, or sensory information in specific regions of the human cerebral cortex.
The weights of the neurons are initialized either with small random values or by sampling uniformly from the subspace spanned by the two principal-component eigenvectors with the largest eigenvalues. The latter alternative leads to faster learning because the initial weights already provide a reasonable approximation of the final SOM weights.
To train the network effectively, a considerable number of example vectors, ideally representing the expected vector types during mapping, are fed into the network. These examples are often introduced multiple times through iterations.
During training, when an example is presented to the network, its Euclidean distance to all weight vectors is computed. The neuron with the weight vector most similar to the input is designated as the "best-matching unit" (BMU). The weights of the BMU and of the neurons in its proximity in the SOM grid are adjusted towards the input vector. The magnitude of this adjustment decreases over time and with increasing grid distance from the BMU. The update formula for a neuron v with weight vector $W_v(s)$ is

$$W_v(s+1) = W_v(s) + \theta(u, v, s) \, \alpha(s) \, \bigl( D(t) - W_v(s) \bigr),$$

where s is the step index, t is the index into the training sample, u is the BMU index for the input vector D(t), α(s) is a monotonically decreasing learning rate, and θ(u, v, s) is a neighborhood function that depends on the grid distance between neuron u and neuron v at step s.
The neighborhood function, denoted as θ(u, v, s) or the lateral interaction function, plays a vital role in the self-organizing map. It depends on the distance between the best matching unit (BMU) neuron u and neuron v within the grid. The simplest form of the neighborhood function assigns a value of 1 to neurons that are close enough to the BMU and 0 to others. However, Gaussian functions and Ricker wavelets are also commonly used alternatives. Regardless of the specific form chosen, the neighborhood function gradually decreases over time.
During the initial stages when the neighborhood is broad, self-organization occurs on a global scale. As the neighborhoods shrink to pairs of neurons, the weights start to converge toward local estimates. In some implementations, both the learning coefficient α and the neighborhood function θ decrease gradually as the parameter s increases. In other cases, particularly when the training data set is traversed by the parameter t, the decrease occurs stepwise, once every T steps. This iterative process is repeated for each input vector over a typically large number of λ cycles. Ultimately, the network associates the output nodes with groups or patterns present in the input data set. If these patterns are identifiable, their names can be linked to the corresponding nodes in the trained network.
During the mapping phase, a single winning neuron is determined—the neuron whose weight vector is closest to the input vector. This determination can be made by simply calculating the Euclidean distance between the input vector and the weight vector.
An implementation from the sklearn-som library was used for the experiment [21].
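A minimal sketch using the sklearn-som package is shown below; the constructor arguments (grid size m × n and input dimension dim) follow that package's documented API, which may vary between versions, and the grid size is an illustrative choice.

```python
# A minimal sketch of SOM-based color clustering with the sklearn-som package.
import cv2
import numpy as np
from sklearn_som.som import SOM

img = cv2.imread("scene.png")                        # hypothetical input frame
pixels = img.reshape(-1, 3).astype(np.float32)

som = SOM(m=1, n=2, dim=3)                           # 1 x 2 grid: two color clusters
som.fit(pixels)
labels = som.predict(pixels)                         # index of the best-matching unit

segmented = labels.reshape(img.shape[:2])            # label map of the frame
```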
2.11. MeanShift
MeanShift is a clustering algorithm that assigns data points to clusters iteratively by shifting the points towards the mode (in the context of MeanShift, the mode is the region of highest density of data points). For this reason, it is also known as a mode-seeking algorithm [22].
We start with an initial estimate $x$. Let a kernel function $K(x_i - x)$ be given; it determines the weight of nearby points used to re-estimate the mean. A Gaussian kernel of the distance to the current estimate is usually used:

$$K(x_i - x) = e^{-c \left\| x_i - x \right\|^2}.$$

The weighted mean of the density in the window defined by K is calculated as:

$$m(x) = \frac{\sum_{x_i \in N(x)} K(x_i - x) \, x_i}{\sum_{x_i \in N(x)} K(x_i - x)},$$

where N(x) is the neighborhood of x, i.e., the set of points for which $K(x_i - x) \neq 0$.

The difference $m(x) - x$ is called the mean shift, following Fukunaga and Hostetler [23]. The mean-shift algorithm sets $x \leftarrow m(x)$ and repeats the estimation until $m(x)$ converges.
An implementation from the scikit-learn library was used for the experiment [24].
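A minimal sketch with scikit-learn's MeanShift is shown below; the bandwidth is estimated from a pixel subsample with estimate_bandwidth, and the input frame and quantile are illustrative.

```python
# A minimal sketch of MeanShift segmentation with scikit-learn.
import cv2
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

img = cv2.imread("scene.png")                        # hypothetical input frame
pixels = img.reshape(-1, 3).astype(np.float64)

bandwidth = estimate_bandwidth(pixels, quantile=0.1, n_samples=1000)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
labels = ms.fit_predict(pixels)

# Color every pixel with the center of the mode it converged to.
segmented = ms.cluster_centers_[labels].astype(np.uint8).reshape(img.shape)
```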
3. Results
The following tables display the experimental results: the SSIM values obtained by comparing the benchmark of each class with the object extracted under the specified conditions.
Table 1. Evaluation of the K-means method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.377 | 0.635 | 0.824 | 0.774 | 0.740 | 0.688 | 0.785 | 0.633 | 0.528 |
| 150 | 0.441 | 0.707 | 0.711 | 0.660 | 0.618 | 0.735 | 0.875 | 0.805 | 0.640 |
| 200 | 0.437 | 0.590 | 0.753 | 0.827 | 0.889 | 0.871 | 0.967 | 0.966 | 0.919 |
| 250 | 0.484 | 0.602 | 0.703 | 0.552 | 0.996 | 0.857 | 0.641 | 0.845 | 0.670 |
Table 2. Evaluation of the k-medoids method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.361 | 0.622 | 0.824 | 0.774 | 0.737 | 0.536 | 0.785 | 0.615 | 0.600 |
| 150 | 0.802 | 0.702 | 0.692 | 0.657 | 0.661 | 0.705 | 0.743 | 0.796 | 0.677 |
| 200 | 0.468 | 0.605 | 0.753 | 0.851 | 0.879 | 0.844 | 0.839 | 0.874 | 0.679 |
| 250 | 0.432 | 0.584 | 0.723 | 0.598 | 0.842 | 0.507 | 0.641 | 0.874 | 0.692 |
Table 3. Evaluation of the Fuzzy C-Means method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.361 | 0.622 | 0.824 | 0.774 | 0.737 | 0.536 | 0.785 | 0.615 | 0.600 |
| 150 | 0.760 | 0.709 | 0.698 | 0.688 | 0.742 | 0.776 | 0.902 | 0.805 | 0.615 |
| 200 | 0.511 | 0.591 | 0.807 | 0.871 | 0.904 | 0.844 | 0.920 | 0.909 | 0.703 |
| 250 | 0.613 | 0.578 | 0.703 | 0.574 | 0.999 | 0.857 | 0.658 | 0.885 | 0.707 |
Table 4. Evaluation of the Possibilistic C-Means method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | - | - | - | - | - | - | - | - | - |
| 150 | - | - | - | - | - | - | - | - | - |
| 200 | - | - | - | - | - | - | - | - | - |
| 250 | - | - | - | - | - | - | - | - | - |
Table 5. Evaluation of the Possibilistic Fuzzy C-Means method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.361 | 0.622 | 0.824 | 0.774 | 0.737 | 0.536 | 0.785 | 0.615 | 0.600 |
| 150 | 0.841 | 0.699 | 0.711 | 0.662 | 0.661 | 0.717 | 0.790 | 0.785 | 0.679 |
| 200 | 0.511 | 0.591 | 0.807 | 0.871 | 0.904 | 0.844 | 0.920 | 0.909 | 0.703 |
| 250 | 0.613 | 0.578 | 0.703 | 0.574 | 0.999 | 0.857 | 0.658 | 0.885 | 0.707 |
Table 6. Evaluation of the Fuzzy Possibilistic C-Means method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.956 | 0.634 | 0.757 | 0.683 | 0.744 | 0.664 | 0.844 | 0.605 | 0.499 |
| 150 | 0.390 | 0.687 | 0.719 | 0.797 | 0.669 | 0.822 | 0.935 | 0.815 | 0.615 |
| 200 | - | - | - | - | - | - | - | - | - |
| 250 | 0.493 | 0.790 | 0.816 | 0.999 | 0.980 | 0.880 | 0.973 | 0.987 | 0.993 |
Table 7. Evaluation of the Gustafson-Kessel method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | - | - | - | - | - | - | - | - | - |
| 150 | 0.364 | 0.706 | 0.757 | 0.727 | 0.759 | 0.815 | 0.953 | 0.798 | 0.594 |
| 200 | - | - | - | - | - | - | - | - | - |
| 250 | - | - | - | - | - | - | - | - | - |
Table 8. Evaluation of the Entropy-based Fuzzy method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.361 | 0.622 | 0.824 | 0.774 | 0.737 | 0.536 | 0.785 | 0.615 | 0.600 |
| 150 | 0.760 | 0.709 | 0.698 | 0.688 | 0.742 | 0.776 | 0.902 | 0.805 | 0.615 |
| 200 | 0.511 | 0.591 | 0.807 | 0.871 | 0.904 | 0.844 | 0.920 | 0.909 | 0.703 |
| 250 | 0.613 | 0.578 | 0.703 | 0.574 | 0.999 | 0.857 | 0.658 | 0.885 | 0.707 |
Table 9. Evaluation of the Ridler-Calvard method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.358 | 0.623 | 0.824 | 0.774 | 0.738 | 0.684 | 0.785 | 0.634 | 0.621 |
| 150 | 0.441 | 0.707 | 0.711 | 0.660 | 0.618 | 0.735 | 0.875 | 0.805 | 0.640 |
| 200 | 0.544 | 0.591 | 0.821 | 0.881 | 0.918 | 0.844 | 0.922 | 0.909 | 0.898 |
| 250 | 0.484 | 0.602 | 0.703 | 0.552 | 0.996 | 0.857 | 0.641 | 0.845 | 0.670 |
Table 10. Evaluation of the Kohonen Self-Organizing Maps method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | - | - | - | - | - | - | - | - | - |
| 150 | 0.572 | 0.711 | 0.647 | 0.702 | 0.736 | 0.787 | 0.902 | 0.630 | 0.536 |
| 200 | - | - | - | - | - | - | - | - | - |
| 250 | - | - | - | - | - | - | - | - | - |
Table 11. Evaluation of the MeanShift method using SSIM.

| lux \ class number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 100 | 0.956 | 0.628 | 0.750 | 0.679 | 0.744 | 0.633 | 0.828 | 0.605 | 0.483 |
| 150 | 0.322 | 0.686 | 0.803 | 0.728 | 0.759 | 0.780 | 0.953 | 0.780 | 0.587 |
| 200 | 0.459 | 0.797 | 0.967 | 0.780 | 0.941 | 0.948 | 0.939 | 0.899 | 0.749 |
| 250 | 0.916 | 0.999 | 0.918 | 0.734 | 0.941 | 0.999 | 0.953 | 0.925 | 0.749 |