Preprint
Article

Prediction of Hedonic Ratings of Different Drinks Based on Facial Expressions


A peer-reviewed article of this preprint also exists.

Submitted: 11 August 2023
Posted: 14 August 2023

Abstract
Previous studies have indicated that facial expressions can serve as an objective evaluation method for hedonics (overall pleasure) of food and beverages. In this study, we aimed to validate the findings of our previous research, which demonstrated that facial expressions induced by tastants can predict the perceived hedonic ratings of these tastants. Facial expressions of 29 female participants (aged 18-55 years) were recorded using a digital camera while they consumed 12 different concentrations of solutions representing five basic tastes. The facial expressions were then analyzed using the widely-used facial expression analysis application, FaceReader, to identify seven emotions (surprise, happiness, scare, neutral, disgust, sadness, and anger) with scores ranging from 0 to 1. Participants also rated the hedonics of each solution on a scale from -5 (extremely unpleasant) to +5 (extremely pleasant). A multiple linear regression analysis was conducted to develop a formula to predict perceived hedonic ratings. The formula's applicability was tested by analyzing emotion scores for 11 additional taste solutions consumed by 20 other participants. The predicted hedonic ratings demonstrated good correlation and concordance with the perceived ratings, supporting the validity of our previous findings using different software and taste stimuli among diverse participants.
Keywords: 
Subject: Biology and Life Sciences - Food Science and Technology

Introduction

Taste evokes strong hedonic responses, encompassing pleasure or displeasure, as well as distinct qualities such as sweetness, sourness, saltiness, bitterness, and umami. Previous research suggests that facial expressions are implicit indicators of the hedonics of tastes [1,2]. To objectively evaluate taste perception, several studies have analyzed facial expressions induced by food consumption [3,4,5,6,7,8,9,10,11]. In our 2021 paper [12], we demonstrated the potential of predicting the deliciousness of food and beverages through facial expression analysis. Our approach quantified expressions of neutral, happiness, sadness, surprise, scare, disgust, and anger for each of the five basic tastes (sweetness, saltiness, sourness, bitterness, and umami) in solution, using these emotion scores as the independent variables. The dependent variable was the participants' perceived hedonic rating (subjective sensory evaluation) for each taste. Through multiple regression analysis, we derived a regression equation for predicting hedonic ratings. For validation, facial expression analysis results from different subjects consuming various taste solutions, including commercially available beverages, were substituted into the equation. A strong correlation and consistency were observed between the calculated values and the subjects' actual sensory evaluation values, confirming facial expression analysis as a valuable method for objectively evaluating taste perception.
However, the study had three limitations: 1) the small sample size, 2) the limited generalizability of the AI application used for analysis, which was available locally but not worldwide, and 3) reliance on a single facial expression image selected by the experimenter (one-shot single image). This study therefore aimed to address these issues by 1) at least doubling the number of subjects relative to the previous study, 2) using the widely used facial expression analysis software FaceReader, and 3) examining the usefulness of several analysis methods, including the use of average facial expression values over a certain period (in addition to one-shot analysis). The objective was to reconfirm, with these improvements, that the deliciousness or unpleasantness of food and beverages can be predicted from facial expressions. This study thus offers a practical method for evaluating the hedonics of different edibles using facial expressions.

Materials and methods

Participants

A total of 49 nonsmoking participants were recruited from among the students and staff of Kio University, Nara, Japan. Based on responses to a questionnaire administered before the experiment, we judged that all participants were free of sensory, eating, neurological, and psychiatric disorders, and none were taking medications that would interfere with taste. All participants were instructed to refrain from eating or drinking for 1 hour before the start of the experiment. After an explanation of the purpose and safety of the experimental protocol, informed consent was obtained from all participants. This study was approved by the Kio University ethics committee (No. R2-31), and all experiments were conducted in accordance with the principles set forth in the Declaration of Helsinki.

Experiment 1

Since the experimental procedure was essentially the same as that of our previous study [12], it is explained only briefly. A group of 29 healthy female volunteers (age range, 18–55 years; mean ± SD, 23.1 ± 7.9) participated in an experiment to examine the utility of AI for the analysis of facial expressions and to establish a formula for predicting hedonic ratings. The experiment was conducted by two researchers in our lab for each participant. For 16 of the 29 participants, the taste stimuli were 10 solutions representing the five conventional basic tastes at different concentrations: 2.5%, 5%, 10% and 20% sucrose; 0.5% and 2% monosodium glutamate (MSG); 1% citric acid; 1% and 5% sodium chloride (NaCl); and 0.01% quinine hydrochloride (QHCl). For the other 13 participants, the taste stimuli were 5% glucose, 0.3% sodium guanylate and 3% NaCl. All solutions were made up with distilled water (DW). An aliquot of 10 mL of taste solution in a small paper cup was placed on a table just in front of the seated participant. The participant poured the 10 mL of liquid into the mouth, held it for about 1 sec, and then swallowed. The participant was asked to show facial expressions freely but not intentionally, and to make a brief remark about the quality and/or palatability of the stimulus soon after recognizing it. After drinking each solution, the mouth was rinsed with DW. The task was repeated with an inter-stimulus interval of at least 2 minutes. The stimuli were delivered in random order. Before the start of the next tasting, the participant was also asked to evaluate the overall hedonic rating of the stimulus on a scale from ‒5 (extremely unpleasant) to +5 (extremely pleasant), with 0 being neutral.
Another experimenter (the “recorder”) sat near the participant, delivered a signal to start drinking, and recorded a video focusing on the face of the participant using a digital camera (Cyber-shot DSC-WX350; Sony Corp. Tokyo, Japan), which was set 2 m in front of the participant, who was asked to look directly at the camera.
After the experiment, the video recordings were analyzed using the AI application FaceReader (ver. 8.1; Noldus Information Technology, Wageningen, The Netherlands). FaceReader processes facial expressions frame-by-frame at 30 Hz and classifies them into seven emotions (neutral, happy, sad, angry, surprised, scared, and disgusted) with scores ranging from 0 (no visible emotion) to 1 (emotion fully present). We analyzed the scores using four different methods: 1) a single facial expression image selected by the experimenter (one-shot image) as the most representative facial expression for the taste stimulation; 2) the average emotion scores over 2 seconds centered on the one-shot image (one-shot ± 1 sec, or 2-sec image); 3) the average emotion scores over 4 seconds (one-shot ± 2 sec, or 4-sec image); and 4) the average emotion scores over 6 seconds (one-shot ± 3 sec, or 6-sec image).
Any part overlapping with a subject's brief remark was excluded from the analysis, and the mean value was calculated from the remaining analysis time.
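The windowed averaging described above, with remark-overlapping frames excluded, can be sketched as follows. This is a minimal illustration assuming the per-frame FaceReader scores have already been exported to a NumPy array; the array layout and function name are our own, not part of FaceReader's API.

```python
import numpy as np

FPS = 30  # FaceReader classifies frames at 30 Hz

def window_mean_scores(scores, one_shot_idx, half_width_sec, exclude=None):
    """Mean emotion scores over a window centred on the one-shot frame.

    scores         : (n_frames, 7) array of emotion scores in [0, 1]
    one_shot_idx   : frame index of the experimenter-selected one-shot image
    half_width_sec : 1, 2 or 3 for the 2-, 4- and 6-sec analyses
    exclude        : optional boolean mask (n_frames,) of frames overlapping
                     the participant's remark, dropped before averaging
    """
    half = half_width_sec * FPS
    lo = max(one_shot_idx - half, 0)
    hi = min(one_shot_idx + half + 1, len(scores))
    keep = np.ones(hi - lo, dtype=bool)
    if exclude is not None:
        keep &= ~exclude[lo:hi]
    return scores[lo:hi][keep].mean(axis=0)
```
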
In the next step, we performed multiple linear regression analysis to predict hedonic ratings from the seven emotions. The independent variables were the scores of the seven emotions obtained for 13 stimuli in 29 participants, and the dependent variable was the participants' perceived (self-reported) hedonic rating for each stimulus. Through this multiple regression analysis, we derived a regression equation for predicting hedonic ratings.
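As a sketch of this step, an ordinary least-squares fit of the perceived ratings on the seven emotion scores can be written as follows. This is a hypothetical helper for illustration, not the SPSS procedure actually used in the study.

```python
import numpy as np

def fit_hedonic_formula(emotion_scores, hedonic_ratings):
    """Least-squares regression of hedonic ratings on seven emotion scores.

    emotion_scores  : (n_obs, 7) array, one column per emotion
    hedonic_ratings : (n_obs,) self-reported ratings on the -5..+5 scale
    Returns (coefficients, intercept) of the fitted linear formula.
    """
    # Append a column of ones so the intercept is estimated jointly
    X = np.column_stack([emotion_scores, np.ones(len(emotion_scores))])
    beta, *_ = np.linalg.lstsq(X, hedonic_ratings, rcond=None)
    return beta[:-1], beta[-1]
```
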

Experiment 2

Another group of 20 healthy volunteers (19 females and 1 male; age range, 20–50 years; mean ± SD, 22.9 ± 6.3) participated in a second experiment to examine and confirm the applicability of the formulae obtained in Experiment 1 for predicting hedonic ratings. None of the participants in Experiment 2 had participated in Experiment 1. The stimuli were the following 11 liquids: natural mineral water (ILOHAS, Coca-Cola Bottlers Japan Inc., Tokyo, Japan), 1% malic acid, 2% monopotassium glutamate (MPG), 0.003% sucrose octaacetate (SOA), 7% calorie-free sweetener (Palsweet, Ajinomoto Co. Inc., Tokyo, Japan), peach juice (Peach Mix 100%, Dole Japan, Inc., Tokyo, Japan), noodle broth (Mentsuyu, Daitoku Food Co., Ltd., Nara, Japan), vegetable juice (Thick Vegetable Juice, Kagome Co. Ltd., Tokyo, Japan), 2.5% salt (Hakata-no-Shio, Hakata Salt Co., Ltd., Ehime, Japan), flat lemon juice (Shikwasa juice, Okinawa Aloe Co. Ltd., Okinawa, Japan) and catechin green tea (Healthya Green Tea, Kao Corp., Tokyo, Japan).
Liquid intake, video recording, FaceReader analysis, and the rating of perceived hedonics were the same as in Experiment 1. The FaceReader emotion scores for facial expressions in response to these stimuli were substituted into the corresponding terms of the formulae derived in Experiment 1 to obtain predicted (calculated) hedonic ratings. The relationships between the predicted and perceived hedonic ratings were then examined and compared.

Data analysis

In Experiment 1, a boxplot analysis of the scores for the seven emotions associated with each of the 10 stimuli in 16 participants was conducted, and the median and interquartile range with minimum and maximum scores were obtained. To examine the similarity of hedonics among taste stimuli, Spearman’s correlation coefficients were calculated between pairs of stimuli based on the scores of the seven emotions. A multiple linear regression analysis was then conducted with the scores for the seven emotions (the independent variables) obtained for 13 stimuli in 29 participants and the participants’ perceived hedonic ratings (the dependent variable) for each stimulus. The possible existence of multicollinearity, which occurs when predictors provide redundant information because of high correlations with each other, was examined by calculating the correlation coefficients among pairs of the seven emotions. In Experiment 2, relationships between the predicted and perceived hedonic ratings were examined and compared using Pearson’s and Spearman’s correlation coefficients, a one-way ANOVA, and the Wilcoxon signed-rank test. Before the correlation analyses, the data were tested for normality with the Shapiro–Wilk test. All statistical analyses were performed using IBM SPSS Statistics (ver. 25) and Excel Statistics 2012. P values < 0.05 were considered statistically significant.
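The normality-dependent choice between Pearson's and Spearman's coefficients can be illustrated with SciPy. This is a hypothetical helper for illustration only; the study itself performed these tests in SPSS and Excel Statistics.

```python
import numpy as np
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Pearson's r if both samples pass the Shapiro-Wilk normality test,
    otherwise Spearman's rho. Returns (method, coefficient, p_value)."""
    normal = all(stats.shapiro(s).pvalue > alpha for s in (x, y))
    if normal:
        r, p = stats.pearsonr(x, y)
        return "pearson", r, p
    rho, p = stats.spearmanr(x, y)
    return "spearman", rho, p
```
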

Results

Experiment 1

The scores for the seven emotions (neutral, happy, sad, angry, surprised, scared and disgusted) from the FaceReader outputs for one-shot images taken after the presentation of taste stimuli are shown in Figure 1. The FaceReader output also contains ‘contempt’, but this category was omitted in the present study because contempt is not related to taste evaluation, and no participant showed this emotion in response to any of the taste stimuli tested. Figure 1 shows the hedonic pattern profiles across the seven emotions in response to 10 taste stimuli in 16 participants. The panels are arranged according to the mean perceived hedonic rating for each stimulus in the 16 participants.
Figure 2 shows the boxplot analysis of the scores for the seven emotions shown in Figure 1. The median and interquartile range with minimum and maximum scores are illustrated for the emotions evoked by the 10 stimuli. The profiles of scores for the seven emotions suggested that higher concentrations (10% and 20%) of sucrose showed a strong happiness component, whereas the sadness component was the largest for 1% citric acid, 5% NaCl, and 0.01% QHCl; the neutral component was the largest for the remaining stimuli.
Figure 3 shows the correlation coefficients calculated among the profiles of emotional scores for the 10 stimuli. We used Spearman’s analysis because the scores did not show a normal distribution for some of the stimuli. Statistically highly significant correlations were detected within three groups. The first group consisted of 20%, 10% and 5% sucrose (Figure 2-A to C, respectively), with highly positive perceived hedonic ratings. The second group consisted of 2% MSG, 0.5% MSG and 1% NaCl (Figure 2-E to G, respectively), with nearly neutral to slightly negative perceived ratings. The third group consisted of 1% citric acid, 5% NaCl and 0.01% QHCl (Figure 2-H to J, respectively), with highly negative hedonic ratings. Note that 1% NaCl also correlated well with 1% citric acid and 5% NaCl in its emotional scores (Figure 2-G to I, respectively).
A multiple linear regression analysis was performed to predict hedonic ratings based on the scores of the seven emotions for one-shot images obtained in Experiment 1. In addition to the data from the 16 participants shown in Figure 1 and Figure 2, the data from the 13 participants tested with 5% glucose, 0.3% sodium guanylate and 3% NaCl were included. Before the analysis, the absence of multicollinearity was confirmed by examining the correlation coefficients among the seven emotions: no statistically significant (P < 0.05) correlation was found for any pair of the seven emotions. As a result, we obtained the following regression formula for one-shot images [F (7, 191) = 3.513, P < 0.001, with an adjusted R2 of 0.614]:
Hedonic rating = 4.914 × surprise + 7.568 × happiness + 0.617 × scare + 4.393 × neutral – 1.402 × disgust – 2.414 × sadness – 3.917 × angry – 3.234
, where happiness (P < 0.001), neutral (P < 0.01), sadness (P < 0.05) and surprise (P < 0.05) were significant predictors of hedonic ratings.
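For illustration, the one-shot formula above can be applied directly in code. The coefficients below are copied from the regression equation; the function name and tuple layout are our own.

```python
# Coefficients from the one-shot regression formula, in the order
# surprise, happiness, scare, neutral, disgust, sadness, anger
ONE_SHOT_COEF = (4.914, 7.568, 0.617, 4.393, -1.402, -2.414, -3.917)
ONE_SHOT_INTERCEPT = -3.234

def predict_hedonic_rating(scores, coef=ONE_SHOT_COEF,
                           intercept=ONE_SHOT_INTERCEPT):
    """Predicted hedonic rating (-5..+5 scale) for one set of seven
    emotion scores, each in [0, 1]."""
    return sum(c * s for c, s in zip(coef, scores)) + intercept
```

For example, a fully neutral face (neutral = 1, all other emotions 0) yields 4.393 − 3.234 ≈ 1.16, a mildly positive predicted rating.
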
For 2-sec images, a regression formula [F (7, 191) = 4.781, P < 0.001 with an adjusted R2 of 0.474] was obtained for the mean emotion scores:
Hedonic rating = 6.701 × surprise + 7.461 × happiness + 1.657 × scare + 3.701 × neutral – 1.776 × disgust – 2.563 × sadness – 3.488 × angry – 3.007
, where happiness (P < 0.001), neutral (P < 0.05) and surprise (P < 0.05) were significant predictors of hedonic ratings, and sadness was marginally significant (P = 0.061).
For 4-sec images, a regression formula [F (7, 191) = 5.788, P < 0.001 with an adjusted R2 of 0.365] was obtained for the mean emotion scores:
Hedonic rating = 8.276 × surprise + 7.401 × happiness + 0.452 × scare + 3.343 × neutral – 2.022 × disgust – 2.126 × sadness – 1.155 × angry – 3.008
, where happiness (P < 0.001) and surprise (P < 0.05) were significant predictors of hedonic ratings, and neutral was marginally significant (P = 0.075).
For 6-sec images, a regression formula [F (7, 191) = 6.236, P < 0.001 with an adjusted R2 of 0.316] was obtained for the mean emotion scores:
Hedonic rating = 9.714 × surprise + 7.202 × happiness – 1.935 × scare + 3.284 × neutral – 2.201 × disgust – 1.558 × sadness – 1.268 × angry – 3.030
, where happiness (P < 0.001) and surprise (P < 0.01) were significant predictors of hedonic ratings.

Experiment 2

The validity of these formulae was examined by applying the emotion scores obtained for another 11 taste stimuli in a different group of 20 participants. We investigated the correlation between the estimated and perceived ratings, as well as how well the estimated ratings matched the perceived ratings for each taste stimulus. In four participants, however, there was an apparent discrepancy between the estimated and perceived ratings, mainly due to the lack of happiness expressed for hedonically positive stimuli such as peach juice and noodle broth (Figure 4-A). Therefore, the subsequent analyses focused on the remaining 16 participants, who exhibited a strong correlation and good agreement between the estimated and perceived ratings (Figure 4-B). However, a significant difference was observed between the two sets of ratings for the highly palatable peach juice and the highly aversive SOA (Wilcoxon signed-rank test, P < 0.01). This difference may be attributed to a limitation of the predicted ratings, which could not reach the maximum hedonic ratings of either –5 or +5. To address this issue, the calculated ratings for peach juice and SOA were multiplied by 1.6. This coefficient was determined from the ratio between the perceived and calculated ratings for peach juice (4.1/2.6 = 1.58) and SOA (-4.5/-2.7 = 1.67). After this adjustment, the estimated ratings aligned much better with the perceived ratings, with the slope of the regression line improving from 0.697 to 0.891 (Figure 4-C).
Figure 5 depicts the correlation analysis between the mean perceived ratings and the mean predicted ratings calculated using the regression formulae based on the mean emotional scores for different analysis periods: one-shot ± 1 sec (2-sec image) (Figure 5-A), one-shot ± 2 sec (4-sec image) (Figure 5-B), and one-shot ± 3 sec (6-sec image) (Figure 5-C). We used Pearson’s correlation analysis because the data for both the calculated and perceived ratings exhibited a normal distribution (Kolmogorov–Smirnov test). Although the correlation coefficient decreased only slightly, from 0.977 to 0.970 to 0.962, as the analysis period increased from 2-sec to 4-sec to 6-sec images, the slope of the regression line changed from 0.618 to 0.534 to 0.495, indicating decreasing concordance between the two sets of ratings with longer analysis time.
In addition to these correlation analyses, we examined the concordance between the predicted and perceived ratings for each taste stimulus. The difference between the predicted and perceived ratings for each stimulus serves as an indicator of the level of agreement between the two. The mean difference in ratings across all 11 stimuli was 0.606, 0.722, 0.953, and 1.027 for one-shot, 2-sec, 4-sec, and 6-sec images, respectively. One-way ANOVA showed a significant main effect [F (3,30) = 13.702, P < 0.001], and post hoc analysis using the Bonferroni test revealed that the difference for one-shot images was not significantly different from that for 2-sec images but was significantly smaller (P < 0.001) than those for 4-sec and 6-sec images.
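The concordance indicator used here can be sketched as a mean absolute difference across stimuli; smaller values indicate better agreement between the two sets of ratings. The function name is our own.

```python
import numpy as np

def mean_rating_difference(predicted, perceived):
    """Mean absolute difference between predicted and perceived hedonic
    ratings across stimuli; smaller values indicate better concordance."""
    predicted = np.asarray(predicted, dtype=float)
    perceived = np.asarray(perceived, dtype=float)
    return float(np.mean(np.abs(predicted - perceived)))
```
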
Finally, we summarized the time points at which the one-shot images were taken. As depicted in Figure 6, the time point varied with the perceived hedonic value of the stimulus, falling either before or after the participant's brief remark. One-shot images were more frequently taken after remarks for hedonically positive stimuli such as peach juice and noodle broth, whereas they were more frequently taken before remarks for hedonically negative stimuli such as 2.5% salt and 1% malic acid. This tendency was statistically significant when Pearson’s correlation coefficient was calculated between the number of one-shot images taken before the remarks and the perceived hedonic values (r = 0.830, and r = 0.973 when SOA was omitted). The same tendency was observed in the data from Experiment 1.

Discussion

The present study was designed to confirm the validity of our previous findings, which indicated that the analysis of facial expressions in response to tastants can predict the hedonic ratings of those tastants. Using one-shot images captured through different AI applications and presenting different taste stimuli to various participants, we obtained consistent results, revealing the following: 1) The five basic tastes could be classified into three hedonic categories: positive, neutral, or negative, based on AI analysis of facial expressions. 2) We established a formula for predicting hedonic ratings using multiple linear regression analysis, considering emotional facial expressions in response to basic taste stimuli. 3) By inputting emotional scores of facial expressions in response to different tastants from different participants into this formula, we found a strong correlation and concordance between predicted (or calculated) and perceived (or subjective) hedonic ratings. These results suggest that a single image of a person's face can quantitatively predict the extent to which that person enjoys food and beverages.
For this study, we utilized FaceReader, a widely-used, convenient, and accurate automated facial expression recognition system. FaceReader classifies facial expressions into the basic universal human emotions suggested by Ekman and Friesen [13], including happiness, sadness, anger, surprise, scare, disgust, and neutrality. The intensity of these emotions ranges from 0 to 1. The analyses of these emotions have been effectively employed in various experimental situations in food research [4,5,6,8,9,10,11]. In addition to analyzing one-shot (single) images, FaceReader can also provide time course data showing changes in each emotion after tasting. As a result, we analyzed the mean scores of emotions during one-shot image ± 1 sec, one-shot ± 2 sec, and one-shot ± 3 sec. Statistical analysis demonstrated that good prediction of hedonic ratings was achieved for one-shot and one-shot ± 1 sec images, while one-shot images ± 2 sec and ± 3 sec exhibited lower predictability. The one-shot image is captured at the time point when a third person judges it to represent the most dominant facial expression elicited by the intake of the tastant. These findings indicate that a time difference within 2 seconds (one-shot ± 1 sec) is acceptable for accurately estimating hedonic ratings.
We asked each participant to make a short remark after the oral intake of each tastant. An interesting finding was that one-shot images for aversive tastants tended to appear before the remarks, whereas those for palatable tastants tended to appear after the remarks. These characteristic differences were statistically significant, consistent with the idea that aversive tastes convey warning messages of discomfort, harm, and urgency, whereas good tastes are pleasant, palatable, and nutritive, and we enjoy them slowly, comfortably, and with emotional fulfillment.
However, bitter-tasting SOA elicited one-shot images about evenly before and after remarks, despite being the most aversive stimulus. This may be explained by the fact that bitter stimuli activate taste cells in the foliate and circumvallate papillae at the back of the tongue more effectively than those in the anterior tongue [14,15]. About half of the participants felt a stronger bitter taste after their remarks, at the time the bitter substance reached the posterior tongue.
In our previous study, the AI application used displayed sadness exclusively rather than disgust emotions for facial expressions induced by aversive taste stimuli, such as 5% NaCl, 0.01% citric acid, and 0.01% QHCl. Only "happiness" was a significant predictor of hedonically positive ratings. We discussed in the previous study [12] that these results might be dependent on the AI application used. Facial emotions and scores would be classified differently with different accuracies by a different algorithm [16]. However, as shown in the present study, essentially the same results were obtained by the analysis using FaceReader. The dominant appearance of happiness and sadness in these results may be related to a recent study on an emotion recognition test by Wang et al. [17], who reported that happiness and sadness are unique and independent among the emotions.
There are at least two limitations in predicting hedonic ratings from facial expressions: 1) The prediction fails for individuals who show no or very small expressions of happiness in response to very palatable tastants, e.g., peach juice and noodle broth, even though these individuals can express aversive emotions in response to unpalatable tastants, e.g., malic acid and SOA. In the present study, 4 of the 20 participants belonged to this category (see Figure 4-A). 2) A significant difference was detected between the predicted and perceived ratings for the very aversive SOA and the very palatable peach juice. This difference may be due to the inability of the predicted ratings to reach the maximum hedonic ratings of –5 or +5, a phenomenon also detected in our previous study. To address this issue, the following procedure may be effective: if the estimated rating for a tastant whose perceived rating is larger than 4.0 or smaller than -4.0 is multiplied by 1.6, the compensated ratings become very close to the perceived ratings, as shown in the present study (see Figure 4-C).
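The compensation procedure for extreme ratings can be sketched as follows; the factor 1.6 and the ±4.0 threshold come from the text above, while the function name is our own.

```python
def compensate(calculated, perceived, factor=1.6, threshold=4.0):
    """Scale the calculated hedonic rating when the perceived rating is
    extreme (|perceived| > threshold), as done for peach juice and SOA."""
    if abs(perceived) > threshold:
        return calculated * factor
    return calculated
```

For example, the calculated rating of 2.6 for peach juice (perceived 4.1) becomes 2.6 × 1.6 = 4.16, much closer to the perceived value.
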
Regardless of these limitations, the present study has confirmed the validity of our previous findings, showing that hedonic ratings can be well predicted by a formula derived from multiple regression analysis of facial expressions obtained using AI software. Another important point is that this procedure is straightforward, and the outcomes from the AI application are generated rapidly using a single image. This technique could be expected to be utilized as an implicit evaluation method in conjunction with explicit sensory tests in various situations, such as consumer surveys and new product evaluations.

Author Contributions

Conceptualization, T.Y. and Y.M.; Data curation, Y.M. and K.U.; Methodology, Y.M., K.U. and T.Y.; Writing – original draft, Y.M.; Writing – review & editing, T.Y.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of Kio University (No. R2-31, 25 April, 2020).

Data Availability

Data are available on reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Steiner, J.E. The gustofacial response: Observation on normal and anencephalic newborn infants. In Symposium on Oral Sensation and Perception-IV; Bosma, J.F., Ed.; NIH-DHEW: Bethesda, 1973; pp. 254–278.
  2. Steiner, J.E.; Glaser, D.; Hawilo, M.E.; Berridge, K.C. Comparative expression of hedonic impact: affective reactions to taste by human infants and other primates. Neurosci Biobehav Rev. 2001, 25, 53–74.
  3. Torrico, D.D.; Fuentes, S.; Gonzalez, V.C.; Ashman, H.; Dunshea, F.R. Cross-cultural effects of food product familiarity on sensory acceptability and non-invasive physiological responses of consumers. Food Res Int. 2019, 115, 439–450.
  4. Bartkiene, E.; Steibliene, V.; Adomaitiene, V.; Juodeikiene, G.; Cernauskas, D.; Lele, V.; et al. Factors affecting consumer food preferences: food taste and depression-based evoked emotional expressions with the use of face reading technology. Biomed Res Int. 2019, 2097415.
  5. de Wijk, R.A.; He, W.; Mensink, M.G.; Verhoeven, R.H.; de Graaf, C. ANS responses and facial expressions differentiate between the taste of commercial breakfast drinks. PLOS ONE. 2014, 9, e93823.
  6. Danner, L.; Haindl, S.; Joechl, M.; Duerrschmid, K. Facial expressions and autonomous nervous system responses elicited by tasting different juices. Food Res Int. 2014, 64, 81–90.
  7. He, W.; Boesveldt, S.; Delplanque, S.; de Graaf, C.; de Wijk, R.A. Sensory-specific satiety: Added insights from autonomic nervous system responses and facial expressions. Physiol Behav. 2017, 170, 12–18.
  8. Samant, S.S.; Chapko, M.J.; Seo, H.-S. Predicting consumer liking and preference based on emotional responses and sensory perception: A study with basic taste solutions. Food Res Int. 2017, 100, 325–334.
  9. Zhi, R.; Cao, L.; Cao, G. Asians' facial responsiveness to basic tastes by automated facial expression analysis system. J Food Sci. 2017, 82, 794–806.
  10. Zhi, R.; Wan, J.; Zhang, D.; Li, W. Correlation between hedonic liking and facial expression measurement using dynamic affective response representation. Food Res Int. 2018, 108, 237–245.
  11. Kaneko, D.; Hogervorst, M.; Toet, A.; van Erp, J.B.F.; Kallen, V.; Brouwer, A.M. Explicit and implicit responses to tasting drinks associated with different tasting experiences. Sensors (Basel) 2019, 19, 4397.
  12. Yamamoto, T.; Mizuta, H.; Ueji, K. Analysis of facial expressions in response to basic taste stimuli using artificial intelligence to predict perceived hedonic ratings. PLOS ONE. 2021, 16, e0250928.
  13. Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J Pers Soc Psychol. 1971, 17, 124–129.
  14. Pizarek, A.; Vickers, Z. Effects of swallowing and spitting on flavor intensity. J Sensory Stud. 2017, 32, e12277.
  15. Running, C.A.; Hayes, J.E. Sip and spit or sip and swallow: Choice of method differentially alters taste intensity estimates across stimuli. Physiol Behav. 2017, 181, 95–99.
  16. Dupré, D.; Krumhuber, E.G.; Küster, D.; McKeown, G.J. A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLOS ONE. 2020, 15, e0231968.
  17. Wang, Y.; Zhu, Z.; Chen, B.; Fang, F. Perceptual learning and recognition confusion reveal the underlying relationships among the six basic emotions. Cogn Emot. 2019, 33, 754–767.
Figure 1. Profiles of hedonic patterns across seven different emotions based on FaceReader analysis of facial expressions in response to 10 stimuli in 16 participants. The analysis was based on one-shot images. Profiles are depicted in different colors for the 16 participants. The emotions are arbitrarily arranged from left to right in the order of surprise (Su), happiness (H), scare (Sc), neutral (N), disgust (D), sadness (Sa) and anger (A). Panels are arranged from the most pleasant to the most unpleasant stimulus from A to J, respectively, as shown by the mean ± standard deviation perceived hedonic rating for each stimulus in 16 participants.
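The prediction formula described in the Abstract, a multiple linear regression from the seven FaceReader emotion scores to the perceived hedonic rating, can be sketched as follows. All numbers below are synthetic stand-ins, not the study's data or fitted coefficients:

```python
import numpy as np

# Hypothetical stand-in data: seven emotion scores per observation
# (surprise, happiness, scare, neutral, disgust, sadness, anger),
# each in [0, 1], and perceived hedonic ratings on the -5..+5 scale.
rng = np.random.default_rng(1)
X = rng.random((29, 7))
true_w = np.array([0.5, 4.0, -1.0, 0.2, -4.5, -1.5, -2.0])
y = X @ true_w + 0.3  # synthetic ratings with a known intercept

# Multiple linear regression: append an intercept column and solve the
# least-squares problem for the prediction formula's coefficients.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_hedonic(emotion_scores):
    """Apply the fitted formula to a 7-element emotion-score vector."""
    return float(np.append(emotion_scores, 1.0) @ coef)
```

In the validation step, emotion scores from new participants and stimuli would be passed through `predict_hedonic` and compared against their perceived ratings.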
Figure 2. Box and whisker plot analysis of scores for seven different emotions evoked by 10 stimuli. The median and interquartile range with minimum and maximum scores are based on the emotional scores of the 16 participants shown in Figure 1. Small circles in each panel indicate outliers. Other descriptions are the same as those in Figure 1.
Figure 3. Spearman’s correlation coefficient matrix of the hedonic profiles for 10 stimuli. Correlations were calculated between pairs of stimuli based on the mean scores of the seven emotions shown in Figure 1. The taste stimuli are arranged from the most positive to the most negative perceived hedonic rating, from left to right and from top to bottom. Shaded coefficients indicate highly significant correlations (P < 0.01).
Figure 4. Scatterplots depicting the relationship between mean predicted (calculated) and perceived hedonic ratings for 11 stimuli among 20 participants. The mean calculated hedonic ratings are shown on the y-axis, and the perceived hedonic ratings are shown on the x-axis. Calculation was based on the emotion scores for one-shot images. Each graph includes the regression formula, Pearson’s correlation coefficient (r), coefficient of determination (r2), and the number of participants. A: Scatterplots for 4 participants who exhibited very low happiness scores. B: Scatterplots for the remaining 16 participants. C: Scatterplots for the 16 participants after the calculated ratings were multiplied by 1.6 for peach juice and SOA, as indicated in red. Following this adjustment, the slope of the regression line is close to 1.0, indicating a significantly improved agreement between the calculated and perceived ratings.
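The per-panel statistics in Figure 4 (regression line, Pearson’s r, and r2) can be computed as below. The eleven values here are synthetic stand-ins on the -5..+5 scale, not the study’s ratings:

```python
import numpy as np

# Stand-in hedonic ratings for 11 stimuli; the predicted ratings are
# generated as a known linear function of the perceived ones, so the
# recovered slope, intercept, and r are known in advance.
perceived = np.array([4.0, 3.1, 2.5, 1.8, 0.9, 0.2,
                      -0.8, -1.5, -2.4, -3.2, -4.1])
predicted = 0.9 * perceived + 0.3

# Least-squares regression line and Pearson correlation, as reported
# in each panel of Figure 4 (predicted on y, perceived on x).
slope, intercept = np.polyfit(perceived, predicted, 1)
r = np.corrcoef(perceived, predicted)[0, 1]
r2 = r ** 2
```

A slope near 1.0 with a small intercept indicates agreement (concordance) between calculated and perceived ratings, not just correlation, which is the rationale for the adjustment shown in panel C.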
Figure 5. Scatterplots depicting the relationship between mean predicted (calculated) and perceived hedonic ratings for 11 stimuli among 16 participants. The calculated hedonic ratings are displayed on the y-axis, and the perceived hedonic ratings are on the x-axis. Each graph includes the regression formula, Pearson’s correlation coefficient (r), and coefficient of determination (r2). A: Calculation based on the mean emotion scores for 2-sec images. B: Calculation based on 4-sec images. C: Calculation based on 6-sec images.
Figure 6. The distribution of one-shot images judged to represent the most relevant facial expression for each taste stimulus. The middle part displays the period of short remarks by participants regarding the different tastants, arranged from the most palatable to the most aversive from top to bottom. Solid circles indicate the time points at which the one-shot images were captured: red circles represent images captured before the start of the remark, while black circles represent those captured after the end of the remark. Circle size increases with the number of occurrences at the same time point. Notably, these points tend to occur after the remark for palatable stimuli, but before the remark for aversive stimuli.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.