Abstract.
In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds, and we derive an optimal stationary policy.
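To make the risk-sensitive average cost criterion concrete, the following is a minimal numerical sketch, not taken from the paper: it uses a small hypothetical finite-state chain under a fixed policy (the paper treats denumerable state spaces and unbounded costs). For a finite irreducible chain with transition matrix P, stage cost c, and risk parameter θ > 0, the risk-sensitive average cost equals (1/θ) log ρ(P_θ), where P_θ(x, y) = e^{θ c(x)} P(x, y) and ρ denotes the spectral radius.

```python
import numpy as np

# Hypothetical 3-state chain under a fixed stationary policy
# (illustrative data only; not from the paper).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
c = np.array([1.0, 2.0, 0.5])   # one-stage costs c(x)
theta = 0.1                     # risk-sensitivity parameter

# Twisted (multiplicative) kernel: P_theta(x, y) = exp(theta * c(x)) * P(x, y).
P_theta = np.exp(theta * c)[:, None] * P

# Risk-sensitive average cost: (1/theta) * log of the spectral radius,
# i.e. the Perron root of the nonnegative irreducible matrix P_theta.
rho = max(abs(np.linalg.eigvals(P_theta)))
J = np.log(rho) / theta
print(J)
```

Since every row of P_θ is a row of P scaled by a factor between e^{θ min c} and e^{θ max c}, the resulting cost J always lies between the minimum and maximum stage cost, and as θ → 0 it recovers the ordinary (risk-neutral) average cost.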
Accepted 1 October 1997
Hernández-Hernández, D., Marcus, S. Existence of Risk-Sensitive Optimal Stationary Policies for Controlled Markov Processes. Appl Math Optim 40, 273–285 (1999). https://doi.org/10.1007/s002459900126