Article
Version 1
Preserved in Portico. This version is not peer-reviewed.
Exploratory Dividend Optimization with Entropy Regularization
Received: 28 November 2023 / Approved: 28 November 2023 / Online: 28 November 2023 (10:23:46 CET)
A peer-reviewed article of this Preprint also exists.
Hu, S.; Zhou, Z. Exploratory Dividend Optimization with Entropy Regularization. Journal of Risk and Financial Management 2024, 17, 25, doi:10.3390/jrfm17010025.
Abstract
This paper studies the dividend optimization problem in the entropy regularization framework, following the same continuous-time reinforcement learning setting as in Wang et al. (2020). The exploratory HJB equation is established, and the optimal exploratory dividend policy is shown to be a truncated exponential distribution. We show that, for suitable choices of the maximal dividend paying rate and the temperature parameter, the value function of the exploratory dividend optimization problem can differ significantly from the value function of the classical dividend optimization problem. In particular, the value function of the exploratory problem can be classified into three cases based on its monotonicity. Numerical examples are presented to show the impact of the temperature parameter on the solution.
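The abstract states that the optimal exploratory policy is a truncated exponential distribution over the admissible dividend rates [0, M]. As an illustration only (not code from the paper), the sketch below samples from a density proportional to exp(beta·a) on [0, M] via the inverse CDF; here `beta` is a placeholder for the (value-gradient-dependent) exponent divided by the temperature, and `M` stands for the maximal dividend paying rate. As beta → 0 (high temperature), the policy approaches the uniform, maximum-entropy distribution.

```python
import math
import random


def sample_truncated_exp_policy(beta, M, n, seed=0):
    """Draw n dividend rates a in [0, M] from pi(a) proportional to
    exp(beta * a), a truncated exponential distribution.

    beta and M are illustrative placeholders: beta plays the role of the
    exponent slope over the temperature parameter, M the maximal rate.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.random()
        if abs(beta) < 1e-12:
            # beta -> 0 limit: uniform distribution on [0, M]
            samples.append(u * M)
        else:
            # Inverse CDF: F(a) = (exp(beta*a) - 1) / (exp(beta*M) - 1)
            samples.append(math.log1p(u * math.expm1(beta * M)) / beta)
    return samples


# Positive beta tilts the policy toward paying dividends near the cap M.
draws = sample_truncated_exp_policy(beta=2.0, M=1.0, n=100_000)
print(min(draws), max(draws), sum(draws) / len(draws))
```

With beta = 2 and M = 1, the exact mean of this density is (e^2 + 1) / (2(e^2 - 1)) ≈ 0.657, so the empirical mean of a large sample sits well above M/2, reflecting the tilt toward higher dividend rates.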
Keywords
Dividend optimization; entropy regularization; distributional control; exploratory HJB
Subject
Business, Economics and Management, Finance
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.