Preprint
Article

Self-Disclosure to a Robot: Only for Those Who Suffer the Most

This version is not peer-reviewed.

Submitted:

12 May 2021

Posted:

13 May 2021


A peer-reviewed article of this preprint also exists.

Abstract
Social robots may become an innovative means to improve the well-being of individuals. Earlier research showed that people easily self-disclose to a social robot even in cases where that was unintended by the designers. We report on an experiment of self-disclosing in a diary journal or to a social robot after negative mood induction. The off-the-shelf robot was complemented with our inhouse developed AI chatbot and could talk about ‘hot topics’ after having it trained with thousands of entries on a complaint website. We found that people who felt strong negativity after being exposed to shocking video footage benefited the most from talking to our robot rather than writing down their feelings. For people less affected by the treatment, a confidential robot chat or writing a journal page did not differ significantly. We discuss emotion theory in relation to robotics and possibilities for an application in design (the emoji-enriched ‘talking stress ball’). We also underline the importance of - otherwise disregarded - outliers in a data set that is of a therapeutic nature.
Keywords: 
Subject: 
Engineering  -   Automotive Engineering
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.