
Spectral Reflectance Estimation from Camera Response Using Local Optimal Dataset and Neural Networks

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 09 August 2024
Posted: 12 August 2024

Abstract
In this study, a novel method that combines model-based and training-based approaches is proposed to estimate surface-spectral reflectance from camera responses. An imaging system is modeled using the spectral sensitivity functions of an RGB camera, the spectral power distributions of multiple light sources, the unknown surface-spectral reflectance, additive noise, and a gain parameter. The estimation procedure comprises two main stages: (1) selecting the local optimal reflectance dataset from a reflectance database and (2) determining the best estimate by applying a neural network to the local optimal dataset only. In stage (1), the camera responses are predicted for the respective reflectances in the database, and the optimal candidates are selected in order of lowest prediction error. In stage (2), most of the reflectance training data are obtained by convex linear combinations of the local optimal data, using weighting coefficients based on random numbers. A feed-forward neural network with one hidden layer is used to map the observation space onto the spectral reflectance space. In addition, the reflectance estimation is repeated by generating multiple sets of random numbers, and the median of the set of estimated reflectances is taken as the final estimate of the reflectance. Experimental results show that the estimation accuracy of the proposed method exceeds that of other methods.
Keywords: 
Subject: Computer Science and Mathematics - Computer Science

1. Introduction

Knowledge of the surface-spectral reflectances of objects is essential in fields such as color science, image science and technology, computer vision, and computer graphics. The problem of estimating surface-spectral reflectance from camera responses has therefore been studied alongside the development of cameras and imaging systems, and numerous methods have been proposed. These methods can be classified into two primary approaches: the model-based approach [1,2,3,4,5,6,7,8,9,10,11,12,13,14] and the training (or learning)-based approach [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30].
In the model-based approach, the camera responses are described using the camera spectral sensitivities, the surface-spectral reflectance, and the illuminant spectral power distributions. This is the traditional and more commonly used approach and includes finite-dimensional modeling methods [1,3] and Wiener estimation methods [4,5,6,7,8,9,10,11,12]. Wiener estimation methods are based on a statistical approach in which noise in the imaging system and certain spectral reflectance statistics are considered. Linear minimum mean square error (LMMSE) estimation [13] is an improved Wiener estimation method. Recently, a method [14] was proposed to estimate surface-spectral reflectances from camera responses using a local optimal reflectance dataset, in which a spectral reflectance database is used to locally determine the optimal candidates for estimating the spectral reflectance. The best spectral reflectance was effectively estimated using only the local optimal dataset, without using the entire spectral reflectance database.
The training-based approach is typically constructed without knowledge of the camera spectral sensitivities and illuminant spectral distributions. Instead, it uses a large training dataset, which is a table of pairs of camera responses and the corresponding spectral reflectances. Regression methods directly establish the relationship between RGB responses and spectral reflectances and include support vector regression [19,20], kernel regression [21,22], and linear regression [23,24].
Neural networks have also been used to construct a mapping between the low-dimensional color-signal space and the higher-dimensional spectral space [28,29]. For instance, a neural network method for estimating spectral reflectance was applied to a dual imaging system with a color projector and a color camera, where the mapping was constructed between six-dimensional color signals and the spectral space [30]. The neural network was trained using numerous samples with known spectral reflectance.
In this study, we propose a novel method that combines the model- and training-based approaches to improve the accuracy of spectral reflectance estimation from image data. The proposed method comprises two stages. The first stage is model-based: the local optimal reflectance dataset is selected from a standard reflectance database as the set of the most reliable candidates for reflectance estimation. The second stage is training-based: the best estimate is determined by applying a neural network method to the selected local optimal dataset only. Our imaging system is a multispectral image acquisition system extended from a simple RGB system, in which an RGB camera captures multiple images of an object scene under multiple light sources with different illuminant spectra in the visible range.
In the following, Section 2 describes the observation model for an image-acquisition system that uses an RGB camera and multiple light sources. We adopt a general model in which the camera responses are described by combining camera spectral sensitivities, illuminant spectral power distributions, unknown surface-spectral reflectance, additive noise terms, and a gain parameter.
Section 3 describes the development of the proposed spectral estimation method. First, we describe the selection of the local optimal reflectance dataset. The actual camera responses for the target object are compared with the observations predicted from the respective spectral reflectances in the reflectance database. Prediction errors are calculated for all reflectances in the database, and the local optimal candidates for reflectance estimation are selected in the order of the lowest prediction error. Second, we determine the best reflectance using a neural network based only on a locally optimal dataset. A random convex linear combination of the local optimal dataset becomes the training data of reflectance for the neural network, and the network is trained to minimize the mean square error (MSE). An additional procedure is presented to obtain reliable reflectance estimates.
Section 4 presents the experiments performed to validate the proposed methods for estimating the surface spectral reflectances. Various mobile phone cameras, LED light sources, a standard spectral reflectance database, and standard test samples are used in these experiments. The performance of the proposed method is examined in detail and compared with that of other methods.
Section 5 discusses the relationship between the statistics of the random numbers used and estimation accuracy.

2. Observation Model

The observation model of our image-acquisition system is shown in Figure 1 (see [14]). It consists of an RGB camera with three color channels (c = 1, 2, 3) and multiple light sources with L different illuminant spectra (l = 1, 2, ..., L). Hence, we obtain m = 3L observations for a single target object. The camera output y_i is expressed as follows:
y_i = g \int_{400}^{700} x(\lambda)\, e_l(\lambda)\, r_c(\lambda)\, d\lambda + n_i, \quad (i = 1, 2, \ldots, m),    (1)
where x(λ) is the surface-spectral reflectance of the target object, e_l(λ) (l = 1, 2, ..., L) are the spectral power distributions of the light sources, and r_c(λ) (c = 1, 2, 3) are the spectral sensitivity functions of the camera. The wavelength λ is in the visible range of 400–700 nm. The additive noise n_i in the imaging system is assumed to be white noise with zero mean and variance a, uncorrelated with x(λ). Here, the y_i represent the digital camera outputs, while x(λ), e_l(λ), and r_c(λ) are physical quantities. The coefficient g in Eq. (1) is a gain parameter that converts the model outputs into practical digital outputs. The parameter g is unique to the imaging system and depends on the imaging conditions, such as the locations of the camera and light sources and the illumination intensities. The determination of the noise variance a and the gain parameter g is described in [13].
The spectral functions of reflectance, illuminants, and sensitivities are sampled at n wavelength points with equal intervals in the range of 400–700 nm and described using n-dimensional column vectors as follows:
\mathbf{x} = \begin{bmatrix} x(\lambda_1) \\ x(\lambda_2) \\ \vdots \\ x(\lambda_n) \end{bmatrix}, \quad \mathbf{e}_l = \begin{bmatrix} e_l(\lambda_1) \\ e_l(\lambda_2) \\ \vdots \\ e_l(\lambda_n) \end{bmatrix}, \quad \mathbf{r}_c = \begin{bmatrix} r_c(\lambda_1) \\ r_c(\lambda_2) \\ \vdots \\ r_c(\lambda_n) \end{bmatrix},    (2)
where l = 1, 2, ..., L and c = 1, 2, 3. The discrete representation of the observation model is expressed as
\mathbf{y} = g A \mathbf{x} + \mathbf{n},    (3)
where
\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}, \quad A = \begin{bmatrix} (\mathbf{e}_1 .\!*\, \mathbf{r}_1)^t \, \Delta\lambda \\ \vdots \\ (\mathbf{e}_L .\!*\, \mathbf{r}_3)^t \, \Delta\lambda \end{bmatrix}, \quad \mathbf{n} = \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_m \end{bmatrix}.    (4)
The symbol (.*), the superscript t, and Δλ represent element-wise multiplication, transposition, and the wavelength sampling interval, respectively. Therefore, A is an (m × n) matrix defined by the illuminant spectra and spectral sensitivities, and n is an m-dimensional noise vector.
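To make the discrete model concrete, the following MATLAB sketch builds the matrix A from placeholder illuminant and sensitivity data and simulates an observation according to Eq. (3). The variable names, placeholder spectra, and parameter values are illustrative assumptions, not the authors' code or calibration data.

% Minimal sketch of the observation model y = g*A*x + n (illustrative values).
n  = 61;                 % number of wavelength samples (400:5:700 nm)
L  = 7;                  % number of light sources
dL = 5;                  % wavelength sampling interval [nm]
E  = rand(n, L);         % placeholder illuminant spectra e_l(lambda)
R  = rand(n, 3);         % placeholder camera sensitivities r_c(lambda)

% Stack the m = 3L rows (e_l .* r_c)' * dL to form the system matrix A.
m = 3 * L;
A = zeros(m, n);
i = 0;
for l = 1:L
    for c = 1:3
        i = i + 1;
        A(i, :) = (E(:, l) .* R(:, c))' * dL;   % element-wise product
    end
end

% Simulate a camera observation for a reflectance x with gain g and
% additive white noise of variance a.
x = rand(n, 1);          % placeholder surface-spectral reflectance
g = 1.0;  a = 1e-4;      % assumed gain and noise variance
y = g * A * x + sqrt(a) * randn(m, 1);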

3. Reflectance Estimation Method

3.1. Selection of Local Optimal Reflectance Dataset

Figure 2 shows the standard database of surface-spectral reflectance used in this study, which comprises Dupont spectral data, Munsell spectral data, and various object spectral data, including man-made objects such as papers, paints, and plastics, as well as natural objects such as rocks, leaves, skin, oranges, and apples. The database, available at http://ohlab.kic.ac.jp/, contains 1776 spectral reflectances. Let N_D (= 1776) be the number of spectral reflectances in the database. All spectral curves are sampled at 61 (= n) points at 5-nm intervals in the visible range of 400–700 nm and represented by 61-dimensional column vectors x_i (i = 1, 2, ..., N_D).
First, the observation is predicted using Eq. (3) as gA x_i for each spectral reflectance x_i in the database. The prediction error with respect to the actual observation y is then calculated as follows:
L_i = \| \mathbf{y} - g A \mathbf{x}_i \|_2^2, \quad (i = 1, 2, \ldots, N_D),    (5)
where the squared norm is defined as \| \mathbf{z} \|_2^2 = z_1^2 + z_2^2 + \cdots + z_m^2. Second, the prediction errors are arranged in ascending order as L_(1) ≤ L_(2) ≤ ... ≤ L_(N_D), with corresponding spectral reflectances x_(1), x_(2), ..., x_(N_D). Finally, the first K spectral reflectances x_(1), x_(2), ..., x_(K) are selected as the local optimal candidates for estimating the spectral reflectance.
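A minimal MATLAB sketch of this selection step is given below. It reuses A, g, and y from the observation-model sketch above and substitutes a placeholder database for the actual 1776 reflectances; all names are illustrative assumptions.

% Select the local optimal reflectance dataset (Eq. (5) and sorting).
N_D  = 1776;
X_db = rand(61, N_D);                 % placeholder reflectance database
K    = 25;                            % number of local optimal candidates

Ypred = g * A * X_db;                 % predicted observations g*A*x_i
Li    = sum((Ypred - y).^2, 1);       % squared prediction errors L_i
[~, idx] = sort(Li, 'ascend');        % ascending order of prediction error
X_opt = X_db(:, idx(1:K));            % local optimal dataset x_(1),...,x_(K)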

3.2. Determination of Reflectance Estimate Using Neural Network

The best estimate is determined using a neural network based only on the local optimal dataset x_(1), x_(2), ..., x_(K).

3.2.1. Making the Training Data

The training data form a large table comprising pairs of spectral reflectances and the corresponding observations. The training data for the spectral reflectance are composed of the original local optimal dataset obtained in Section 3.1 and augmented data generated by convex linear combinations of the local optimal dataset. Let N_T be the number of training data and x̂_i (i = 1, 2, ..., N_T) be the spectral reflectances used for training. The spectral reflectances are described as
\hat{\mathbf{x}}_i = \mathbf{x}_{(i)}, \quad (i = 1, 2, \ldots, K); \qquad \hat{\mathbf{x}}_i = \alpha_1 \mathbf{x}_{(1)} + \alpha_2 \mathbf{x}_{(2)} + \cdots + \alpha_K \mathbf{x}_{(K)}, \quad (i = K+1, K+2, \ldots, N_T),    (6)
where the scalar weighting coefficients are normalized as
\sum_{i=1}^{K} \alpha_i = 1, \quad \alpha_i \ge 0, \quad (i = 1, 2, \ldots, K).    (7)
Specifically, we generate the coefficients from random numbers as
\alpha_i = u_i / (u_1 + u_2 + \cdots + u_K),    (8)
where the u_i are random numbers drawn from a uniform distribution over [0, 1]. Each component of the generated x̂_i in Eq. (6) lies between 0 and 1. The corresponding training data for the observations are as follows:
\hat{\mathbf{y}}_i = g A \hat{\mathbf{x}}_i + \mathbf{n}_i, \quad (i = 1, 2, \ldots, N_T),    (9)
where n_i is an m-dimensional noise vector whose j-th element n_ij (j = 1, 2, ..., m) is assumed to be Gaussian white noise with zero mean and variance a. Therefore, we generate the additive noise using the random number function randn, which returns values from a standard normal distribution, as follows:
n_{ij} = \sqrt{a} \cdot \mathrm{randn}.    (10)
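The following MATLAB sketch illustrates one way to generate the training pairs of Eqs. (6)-(10). X_opt, K, g, A, and a are carried over from the earlier sketches, and the remaining names and values are illustrative assumptions.

% Build the reflectance training data: K local optimal reflectances plus
% convex linear combinations with random weights (Eqs. (6)-(8)).
N_T = 625;                                 % total number of training pairs
n   = size(X_opt, 1);
Xtr = zeros(n, N_T);
Xtr(:, 1:K) = X_opt;                       % original local optimal data
for i = K+1:N_T
    u = rand(K, 1);                        % uniform random numbers on [0, 1]
    alpha = u / sum(u);                    % convex weights, Eq. (8)
    Xtr(:, i) = X_opt * alpha;             % convex combination, Eq. (6)
end

% Corresponding observation training data with additive noise (Eqs. (9)-(10)).
Ytr = g * A * Xtr + sqrt(a) * randn(size(A, 1), N_T);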

3.2.2. Network Architecture and Learning Procedure

We use a simple feed-forward neural network with one hidden layer to construct a mapping from the observation space to the spectral reflectance space. Because the observation and reflectance spaces have m- and n-dimensions, respectively, the network is constructed with a structure of m-N-n as shown in Figure 3, where N indicates the number of units in the hidden layer.
The MATLAB machine learning functions are used to construct the network [31]. The network is created and trained using the following function calls:
net = feedforwardnet(N),
net = train(net, xdata, tdata),
where xdata and tdata indicate the network input dataset (the camera observations) and the corresponding target (output) dataset (the spectral reflectances), respectively, which have the following forms:
xdata = [xdata_train, xdata_val, xdata_test], \quad tdata = [tdata_train, tdata_val, tdata_test].    (11)
The entire set of training observations ŷ_i (i = 1, 2, ..., N_T) is segmented randomly into xdata_train for network training and xdata_val for validation. In the same way, the corresponding training reflectances x̂_i (i = 1, 2, ..., N_T) are segmented into tdata_train and tdata_val. The training algorithm is based on the Levenberg–Marquardt method, and the training is iterated to reduce the MSE to an acceptable level.
The spectral reflectance corresponding to the test observation data xdata_test (the actual camera responses of the test target) is predicted using the trained network as follows:
x_est = sim(net, xdata_test).    (12)
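A consolidated MATLAB sketch of this stage (requiring the Deep Learning Toolbox) is shown below. Here Ytr and Xtr are the training observations and reflectances from the previous sketch and play the roles of xdata and tdata, y is the actual camera observation of the test sample, and the division ratios follow the 550/75 split reported in Section 4.2; all settings are illustrative rather than the authors' exact configuration.

% Train the m-N-n feed-forward network and estimate the test reflectance.
N   = 80;                              % number of hidden units
net = feedforwardnet(N);
net.divideParam.trainRatio = 550/625;  % training subset
net.divideParam.valRatio   = 75/625;   % validation subset
net.divideParam.testRatio  = 0;        % the test sample is handled separately
net = train(net, Ytr, Xtr);            % Levenberg-Marquardt ('trainlm') by default
x_est = sim(net, y);                   % estimated spectral reflectance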

3.2.3. Determining the Optimal Reflectance Estimate

Because the augmented training data are generated using random numbers, the reflectance estimates obtained above may include outliers that differ significantly from the other estimates. To improve the reliability of the reflectance estimation, the estimation is repeated, and the median of the set of estimated reflectances is determined as the final spectral reflectance:
\hat{\mathbf{x}}_{\mathrm{fin}} = \mathrm{median}(\mathbf{x}_{\mathrm{est}}).    (13)
Suppose we repeat the estimation process R times. Let x_{j,k} be the estimate of the j-th element of the n-dimensional vector x_est in the k-th trial, and, for each j, arrange the R estimates in ascending order as x_{j,1} ≤ x_{j,2} ≤ ... ≤ x_{j,R}. The final estimate after taking the median is then described for odd and even numbers of iterations R as follows:
\hat{x}_{j,\mathrm{fin}} = \begin{cases} x_{j,(R+1)/2}, & R\ \text{odd}, \\ \frac{1}{2} \left( x_{j,R/2} + x_{j,R/2+1} \right), & R\ \text{even}, \end{cases} \quad (j = 1, 2, \ldots, n).    (14)
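The repetition and median steps can be sketched in MATLAB as follows. Each trial regenerates the augmented training data with fresh random numbers, retrains the network, and stores the estimate, and the final estimate is the element-wise median of Eq. (14). R = 10 follows the experiments in Section 4.2; the remaining variables are carried over from the earlier sketches and are illustrative.

% Repeat the full estimation R times and take the element-wise median.
R    = 10;
n    = size(X_opt, 1);
Xest = zeros(n, R);
for k = 1:R
    % Fresh augmented training data (Eqs. (6)-(10)).
    Xtr = [X_opt, zeros(n, N_T - K)];
    for i = K+1:N_T
        u = rand(K, 1);
        Xtr(:, i) = X_opt * (u / sum(u));
    end
    Ytr = g * A * Xtr + sqrt(a) * randn(size(A, 1), N_T);
    % Retrain the network and estimate the reflectance for this trial.
    net = feedforwardnet(80);
    net.trainParam.showWindow = false;      % suppress the training GUI
    net = train(net, Ytr, Xtr);
    Xest(:, k) = sim(net, y);
end
x_fin = median(Xest, 2);                    % element-wise median, Eq. (14)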
Figure 4 depicts the overall flow of the proposed method for estimating spectral reflectance in three steps.

4. Experimental Results

4.1. Experimental Setup

We performed experiments to validate the superiority of the proposed method for estimating surface-spectral reflectance from image data. We used a mobile phone camera, LED light sources, a standard spectral reflectance database, and standard test samples. The main mobile phone camera was an Apple iPhone 6s running iOS; to further confirm validity for different cameras, we additionally used an Apple iPhone 8 running iOS and a Huawei P10 lite running Android OS. Figure 5 shows the relative RGB spectral sensitivity functions of the Apple iPhone 6s. The numerical data for the spectral sensitivities are available at http://ohlab.kic.ac.jp/. Camera images were captured in a lossless raw format (Adobe digital negative). The dark response was measured under dark conditions and subtracted from the camera output. The camera bit depth was 12 bits.
The illumination light sources were seven (L = 7) LED light sources, the spectral power distributions of which are shown in Figure 6. The standard spectral reflectance database used in the experiments is shown in Figure 2. An X-Rite Color Checker Passport Photo was used as the standard test target to validate the reflectance estimation. This target comprised 24 color checkers, whose spectral reflectance values were measured using a spectral colorimeter.
Spectralon was used as a white reference standard to investigate the statistical properties of this imaging system; it was placed near the target samples, at positions similar to those in the previous paper [13]. The gain parameter g and the noise variance a in the observation model were determined from the Spectralon data using the calibration method in [13].
Because neural network processing is time-consuming, we used a PC equipped with an NVIDIA GeForce RTX graphics processing unit (GPU).

4.2. Basic Performance of the Proposed Method

In a previous study [14], we investigated the number K of local optimal reflectance candidates using different reflectance estimation methods and found that the appropriate K value was in the range of 5–50. Therefore, we set K = 25 in the current experiments and generated the training data based on the local optimal dataset x_(1), x_(2), ..., x_(25). The 600 augmented reflectance data were obtained by linear combinations of the 25 local optimal reflectances according to Eqs. (6)–(8). Overall, we had 625 (= N_T) spectral reflectances as training data. Matrix A was created using the spectral sensitivity functions shown in Figure 5 and the spectral distributions of the light sources shown in Figure 6. The corresponding training data for the observations ŷ_i were generated for the respective reflectances x̂_i (i = 1, 2, ..., 625). The local optimal reflectance dataset and the training data were determined separately for each test target sample.
Our feedforward network had the 21-80-61 structure shown in Figure 3, with 21 inputs, one hidden layer of 80 units, and 61 outputs. The total number of reflectance and observation pairs in our dataset for each sample was 625. Of these, 550 were randomly selected for training the network, and the remaining 75 were used as validation data. Each pair of training data constituted the network input and output. One pass in which the entire training dataset is presented was defined as an epoch, and training was iterated for as many epochs as necessary to reduce the MSE to an acceptable level. After eight epochs, the error in the validation data was sufficiently small.
The observation data for each test sample were input into the trained neural network to obtain an estimate of the spectral reflectance x_est. Furthermore, to improve the estimation accuracy, we repeated the learning and testing process 10 times and adopted the median of the estimated reflectance set x_est as the final spectral reflectance estimate x̂_fin.
Figure 7 shows the estimation results of the above procedure for the 24 spectral reflectances of the X-Rite Color Checker. In the figure, two types of curves are compared: bold curves indicate the estimated spectral reflectances for the 24 color checkers, and broken curves indicate the directly measured spectral reflectances. The average root-mean-square error (RMSE) was calculated as the root of the average of the squared norm of the estimation error per wavelength over the 24 color checkers:
\hat{E}[\mathrm{RMSE}] = \left\{ \frac{1}{24} \sum_{i=1}^{24} \frac{\| \mathbf{x}_i - \hat{\mathbf{x}}_{\mathrm{fin},i} \|^2}{61} \right\}^{1/2}.    (15)
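A MATLAB sketch of this metric is given below; X_meas and X_fin are assumed 61 x 24 matrices holding the measured and finally estimated reflectances of the 24 color checkers (placeholder data are used here for illustration).

% Average RMSE over the 24 color checkers, Eq. (15).
X_meas = rand(61, 24);                       % placeholder measured reflectances
X_fin  = rand(61, 24);                       % placeholder estimated reflectances
err2   = sum((X_meas - X_fin).^2, 1) / 61;   % per-sample mean squared error
avgRMSE = sqrt(mean(err2));                  % average RMSE over the 24 samples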
The average RMSE was 0.0173. The estimated spectral curves in Figure 7 were smoothed using moving-average processing. However, this process hardly changed the errors.
Furthermore, the performance of the proposed method was compared with those of other well-known state-of-the-art methods for estimating spectral reflectance. The estimation accuracies of the six methods were investigated using the same reflectance database, camera data, and test samples described above. Figure 8 compares the average RMSEs between the proposed method and the other methods, where the symbols of Wiener, LMMSE, L_Wiener, L_LMMSE, Lp, and Qp represent the six estimation methods of (1) original Wiener, (2) original LMMSE [13], (3) local Wiener, (4) local LMMSE, (5) linear programming, and (6) quadratic programming [14], respectively. The local optimal dataset was used in Methods (3)–(6). The estimation accuracy of the proposed method is significantly superior, although the RMSEs of (3)–(6) vary slightly depending on the number of local optimal reflectance candidates K.

4.3. Effectiveness of Local Optimal Reflectance Dataset

To confirm the effectiveness of the local optimal reflectance dataset in estimating spectral reflectance, we examined several reflectance estimation methods that use only a neural network, without the local optimal dataset. All the data in the standard spectral reflectance database were used without selection.
First, by making the network structure multilayered and large-scale, more complex mappings can be represented, so the estimation accuracy might be expected to improve even if learning takes a long time. Based on this idea, we constructed networks with three hidden layers and two structures: (1) 21-30-30-30-61 and (2) 21-30-40-50-61. The total number of training data was 1776; of these, 1576 reflectances were used for network training and 200 for validation. The average RMSEs after 10 epochs were 0.033003 and 0.0300666 for (1) and (2), respectively. Figure 9 shows the results estimated by network method (2) for the 24 spectral reflectances of the color checker. The estimation accuracy is significantly worse than the results in Section 4.2.
Next, we considered improving the estimation accuracy by increasing the amount of training data. Additional spectral reflectances were obtained by augmentation with a convex linear combination of the original reflectance data. We randomly selected 10 reflectances from the original dataset and augmented the data using a convex linear combination of these, where the weighting coefficients were normalized to satisfy Eq. (7). Among the data, 4000 reflectances were used for network training, and 200 reflectances were used for validation. The network structure was 21-30-30-30-61. In this third case (3), the average RMSE after 10 epochs was 0.031144.
Thus, we see that the local optimal reflectance dataset is crucial for estimating the spectral reflectance.

4.4. Validity for Different Cameras

The performance evaluation described above was based on a single mobile phone camera, the iPhone 6s. To further confirm validity for different cameras, we used an Apple iPhone 8 with iOS and a Huawei P10 lite with Android OS. The spectral sensitivity function data for these cameras are available at http://ohlab.kic.ac.jp/. The LED light sources, spectral reflectance database, and test samples of the 24 color checkers were the same as those described in Section 4.1. The parameters g and a in the observation model were determined using the same calibration method.
The 24 spectral reflectances of the X-Rite Color Checker were estimated with the proposed method using the observations from each camera, and the estimation accuracy was validated through comparison with other methods. The average RMSEs were 0.0144 and 0.0281 for the iPhone 8 and the Huawei P10 lite, respectively; the estimation error increased when the Huawei camera was used. Figure 10 and Figure 11 compare the average RMSEs between the proposed method and the six other methods for the iPhone 8 and the Huawei P10 lite, respectively. The estimation accuracies of the proposed method are overwhelmingly superior for both cameras, as was the case for the iPhone 6s.

5. Discussion

Most of the training data for spectral reflectance are augmented data generated by linear combinations of the local optimal dataset, with weighting coefficients calculated using random numbers. Therefore, the final spectral reflectance estimates are affected by the statistics of the random numbers used.
The augmented spectral reflectances defined in Eqs. (6)-(8) are rewritten as follows:
\hat{\mathbf{x}} = \sum_{i=1}^{K} \frac{u_i}{u_1 + u_2 + \cdots + u_K} \, \mathbf{x}_{(i)},    (16)
where the x_(i) are the local optimal reflectances and the u_i are independent and identically distributed random numbers with u_i > 0. Let σ and ū be the standard deviation and mean of u_i, respectively. The coefficient of variation Cv is then defined as the standard deviation divided by the mean:
C_v = \sigma / \bar{u}.    (17)
This measure is a statistical index showing the relative variation of the random numbers.
Let us calculate Cv for specific distributions of random numbers.
(a) When u_i follows a uniform distribution over [0, 1], C_v = 1/\sqrt{3}.
(b) When u_i follows a chi-square (χ²) distribution with one degree of freedom, C_v = \sqrt{2}.
In case (b), the coefficient of variation is \sqrt{6} (approximately 2.4) times larger than that in case (a); equivalently, the squared coefficient of variation is six times larger.
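These values follow from the standard moments of the two distributions. For the uniform distribution over [0, 1], the mean is 1/2 and the variance is 1/12, so
C_v = \sqrt{1/12} \,/\, (1/2) = 1/\sqrt{3} \approx 0.58.
For the chi-square distribution with one degree of freedom, the mean is 1 and the variance is 2, so
C_v = \sqrt{2}/1 = \sqrt{2} \approx 1.41, \qquad \text{and} \qquad \sqrt{2} \,/\, (1/\sqrt{3}) = \sqrt{6} \approx 2.4.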
Because the coefficient of variation differs depending on the distribution of the random numbers, it is likely to affect the final estimation accuracy of the spectral reflectance. Therefore, in addition to the above experiments using uniform random numbers, we conducted experiments on spectral reflectance estimation using random numbers with a chi-square distribution and compared the estimation accuracies in both cases. Table 1 compares the average RMSEs of the reflectance estimation using random numbers with two different distributions and three different cameras. The training data for spectral reflectance based on uniform random numbers with smaller Cv are superior in terms of estimation accuracy.

6. Conclusions

In this study, we have proposed a novel method that combines model-based and training-based approaches to improve the estimation accuracy of surface-spectral reflectance from the camera response to an object surface. A multispectral image acquisition system was modeled in the visible wavelength range using three spectral functions: the spectral sensitivities of an RGB camera, spectral power distributions of multiple LED light sources, and unknown surface-spectral reflectance. Camera response was described as a generalized linear model that includes additive noise and a gain parameter.
The proposed method comprised two main stages. The first stage was model-based: the local optimal reflectance dataset was selected as the most reliable candidate set from a standard reflectance database. The second stage was training-based: the best estimate was determined by applying a neural network method to the selected local optimal dataset only.
In the first stage, the camera response observations were predicted for the respective reflectances in the database, and the local optimal candidates were selected in the order of the lowest error between the real observation and prediction. In the second stage, the training data for spectral reflectances consisted of the original locally optimal dataset and augmented data generated by a convex linear combination of this dataset, with weighting coefficients derived from random numbers. A simple feedforward neural network with one hidden layer was used to construct the mapping from the low-dimensional observation space using the camera response to the high-dimensional spectral reflectance space.
The neural network was trained to minimize MSE. To further improve the reliability of the estimate, the reflectance estimation was repeated, and the median in a set of estimated reflectances was determined as the final spectral reflectance.
Experiments were conducted using three mobile phone cameras, seven LED light sources, a standard spectral reflectance database with 1776 reflectances, and 24 color checkers. The performance of the proposed method was examined in detail. We investigated estimation methods based on a neural network using the entire database without selection and confirmed the effectiveness of the local optimal reflectance dataset. We demonstrated that the estimation accuracies of the proposed method exceed those of the other methods. Furthermore, we discussed the statistics of the random numbers used, which affects the estimation accuracy.
The strength of the proposed method is its outstanding estimation accuracy. However, a key challenge for the future is reducing the computation time required for obtaining estimation results, which is heavily dependent on the processing speed of the computer used.

Author Contributions

Conceptualization, S. T. and H. S.; methodology, S. T. and H. S.; software, S. T.; validation, S. T.; formal analysis, S. T. and H. S.; investigation, S. T. and H. S.; resources, S. T.; data curation, S. T. and H. S.; writing—original draft preparation, S. T.; writing—review and editing, S. T. and H. S.; visualization, S. T. and H. S.; supervision, S. T. and H. S.; project administration, S. T.; funding acquisition, S. T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a Grant-in-Aid for Scientific Research (C) Grant Number 24K15014.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are available at http://ohlab.kic.ac.jp/ (accessed August 1, 2024).

Acknowledgments

The authors would like to thank Shogo Nishi at Osaka Electro-Communication University and Ryo Ohtera at Kobe Institute of Computing for their assistance with the multiband image data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. S. Tominaga, Multichannel vision system for estimating surface and illuminant functions. J. Opt. Soc. Am. A 1996, 13, 2163–2173.
  2. F. H. Imai and R. S. Berns, Spectral estimation using trichromatic digital cameras, International Symposium on Multispectral Imaging. and Color Reproduction for Digital Archives 1999, 42–49, Chiba, Japan.
3. A. Mansouri, T. Sliwa, J. Y. Hardeberg, and Y. Voisin, Representation and estimation of spectral reflectances using projection on PCA and wavelet bases, Color Res. Appl. 2008, 33, 485–493.
  4. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, System design for accurately estimating the spectral reflectance of art paintings, Appl. Opt. 2000, 39, 6621–6632.
  5. N. Shimano, Recovery of spectral reflectances of objects being imaged without prior knowledge, IEEE Trans. Image Process 2006, 15, 1848–1856. [CrossRef]
  6. P. Stigell, K. Miyata, and M. Hauta-Kasari, Wiener estimation method in estimating of spectral reflectance from RGB image, Pattern Recogn. Image Anal. 2007, 17, 233–242.
  7. H. L. Shen, P.-Q. Cai, S.-J. Shao, and J. H. Xin, Reflectance reconstruction for multispectral imaging by adaptive Wiener estimation, Opt. Express 2007, 15, 15545–15554. [CrossRef]
  8. Y. Murakami, K. Fukura, M. Yamaguchi, and N. Ohyama, Color reproduction from low-SNR multispectral images using spatiospectral Wiener estimation, Opt. Express 2008, 16, 4106–4120. [CrossRef]
  9. P. Urban, M. R. Rosen, and R. S. Berns, Spectral image reconstruction using an edge preserving spatio-spectral Wiener estimation, J. Opt. Soc. Am. A 2009, 26, 1865–1875. [CrossRef]
  10. S. Peyvandi, S. H. Amirshahi, J. Hernández-Andrés, J. L. Nieves, and J. Romero, Generalized inverse-approach model for spectral-signal recovery, IEEE Trans. Image Process 2013, 22, 501–510. [CrossRef]
  11. J. H. Yoo, D. C. Kim, H. G. Ha, and Y. H. Ha, Adaptive spectral reflectance reconstruction method based on Wiener estimation using a similar training set, J. Imaging Sci. Technol. 2016, 60, 020503.
  12. M. Nahavandi, Noise segmentation for improving performance of Wiener filter method in spectral reflectance estimation, Color Res. Appl. 2018, 43, 341–348. [CrossRef]
  13. S. Tominaga, S. Nishi, R. Ohtera, and H. Sakai, Improved method for spectral reflectance estimation and application to mobile phone cameras, J. Opt. Soc. Am. A 2022, 39, 494–508. [CrossRef]
  14. S. Tominaga, H. Sakai, Spectral reflectance estimation from camera responses using local optimal dataset, Journal of Imaging, 2023, 9, 1–18. [CrossRef]
  15. W. F. Zhang, P. Yang, D. Q. Dai, and A. Nehorai, Reflectance estimation using local regression methods, Lecture Notes in Computer Science (LNCS), 2012, 7367, 116–122.
16. E. M. Valero, J. L. Nieves, Sérgio M. C. Nascimento, K. Amano, D. H. Foster, Recovering spectral data from natural scenes with an RGB digital camera and colored filters, Color Res. Appl. 2007, 32, 352–360.
  17. R. M. H. Nguyen, D. K. Prasad, and M. S. Brown, Training-based spectral reconstruction from a single RGB image, European Conference on Computer Vision 2014, 186–201. [CrossRef]
  18. J. Liang and X. Wan, Optimized method for spectral reflectance reconstruction from camera responses, Opt. Express 2017, 25, 28273–28287. [CrossRef]
  19. W. F. Zhang and D. Q. Dai, Spectral reflectance estimation from camera responses by support vector regression and a composite model, J. Opt. Soc. Am. A 2008, 25, 2286–2296. [CrossRef]
  20. F. Deger, A. Mansouri, M. Pedersen, J. Y. Hardeberg, and Y. Voisin, Multi- and single-output support vector regression for spectral reflectance recovery, International Conference on Signal Image Technology and Internet Based Systems 2012, 805-809.
  21. V. Heikkinen, C. Camara, T. Hirvonen, and N. Penttinen, Spectral imaging using consumer-level devices and kernel-based regression, J. Opt. Soc. Am. A 2016, 33, 1095–1110. [CrossRef]
  22. V. Heikkinen, Spectral reflectance estimation using Gaussian processes and combination kernels, IEEE Trans. Image Process. 2018, 27, 3358–3373. [CrossRef]
  23. K. Cuan, D. Lu, and W. Zhang, Spectral reflectance reconstruction with the locally weighted linear model, Opt. Quantum Electron. 2019, 51, 1–12. [CrossRef]
  24. J. Liang, K. Xiao, M. R. Pointer, X.Wan, and C. Li, Spectra estimation from raw camera responses based on adaptive local-weighted linear regression, Opt. Express 2019, 27, 5165–5180. [CrossRef]
  25. L. Wang, X. Wan, G. Xia, and J. Liang, Sequential adaptive estimation for spectral reflectance based on camera responses, Opt. Express 2020, 28, 25830–25842. [CrossRef]
26. B. Arad and O. Ben-Shahar, Sparse recovery of hyperspectral signal from natural RGB images, European Conference on Computer Vision 2016, 19–34.
  27. Y. Fu, Y. Zheng, L. Zhang, and H. Huang, Spectral reflectance recovery from a single RGB image, IEEE Trans. Comput. Imaging 2018, 4, 382–394. [CrossRef]
28. Akanuma and D. Stamate, Neural network approach to estimating color reflectance with product independent models, Lecture Notes in Computer Science (LNCS), 2022, 13531, 803–806, Springer Nature Switzerland. [CrossRef]
29. Q. Pan, P. Katemake, and S. Westland, Neural networks for transformation to spectral spaces, 3rd Conference of the Asia Color Association, 2016, 125–128.
  30. J. Zhang, Y. Meuret, X. Wang, and K. A. G. Smet, Improved and robust spectral reflectance estimation, LEUKOS, 2020, 17, 359–379.
31. MathWorks, MATLAB feedforwardnet documentation. Available online: https://jp.mathworks.com/help/deeplearning/ref/feedforwardnet.
Figure 1. Conceptual diagram of our image acquisition system.
Figure 2. Database of surface-spectral reflectance.
Figure 3. Architecture of the feedforward neural network with a structure of m-N-n.
Figure 4. Overall flow of the proposed method for estimating spectral reflectance in three steps.
Figure 5. Relative RGB spectral sensitivity functions of the Apple iPhone 6s.
Figure 6. Spectral power distributions of seven LED light sources used in current experiments.
Figure 7. Estimation results of the spectral reflectances for the 24 color checkers when applying the proposed method to the observations using the iPhone 6s. The bold and broken curves indicate, respectively, the estimated and directly measured spectral reflectances for the 24 color checkers.
Figure 8. Comparison of the average RMSEs between the proposed method and the other methods. The symbols of Wiener, LMMSE, L_Wiener, L_LMMSE, Lp, and Qp represent the six estimation methods of (1) original Wiener, (2) original LMMSE [13], (3) local Wiener, (4) local LMMSE, (5) linear programming, and (6) quadratic programming [14], respectively.
Figure 9. Estimation results of the spectral reflectances for the 24 color checkers when applying the network method (2) to the observations using the iPhone 6s, without using the local optimal dataset. The bold and broken curves indicate, respectively, the estimated and directly measured spectral reflectances for the 24-color checkers.
Figure 10. Comparison of the average RMSEs between the proposed method and the other methods when using iPhone 8.
Figure 11. Comparison of the average RMSEs between the proposed method and the other methods when using Huawei P10 lite.
Table 1. Comparison of the average RMSEs in reflectance estimation when using random numbers with two different distributions and using three different cameras.
Random-number distribution    iPhone 6s    iPhone 8    Huawei P10 lite
Uniform distribution          0.0173       0.0144      0.0281
Chi-square distribution       0.0195       0.0158      0.0296
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.