This paper considers Markov decision processes (MDPs) with the discounted cost as the objective function, whose state and decision spaces are subsets of the real line but are not necessarily finite or denumerable. The cost function may be unbounded, the dynamics do not depend on the current state, and the decision sets may be non-compact. In this context, conditions that yield either an increasing or a decreasing optimal stationary policy are provided; these conditions do not require convexity assumptions. Versions of the policy iteration algorithm (PIA) that approximate increasing or decreasing optimal stationary policies are detailed, and an illustrative example is presented. Finally, comments are given on the monotonicity conditions and on the monotone versions of the PIA as applied to discounted MDPs with rewards.
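For illustration only, the following Python sketch shows the general structure of a monotone version of the PIA on a discretized MDP. The grids, the cost function, the transition kernel, and the discount factor below are hypothetical stand-ins, not the paper's model; the sketch keeps only the structural features named in the abstract, namely a discounted cost criterion, dynamics that depend only on the chosen action, and an improvement step restricted to nondecreasing policies.
\begin{verbatim}
# Hedged sketch of a monotone policy iteration algorithm (PIA) on a
# discretized MDP.  All model data below (grids, cost, kernel, beta)
# are hypothetical, chosen only to make the sketch runnable.
import numpy as np

n_states, n_actions = 50, 40
S = np.linspace(0.0, 1.0, n_states)    # discretized state space
A = np.linspace(0.0, 1.0, n_actions)   # discretized action space
beta = 0.9                              # discount factor

# c(s, a): illustrative cost; P(s' | a): dynamics independent of the state
cost = S[:, None] ** 2 + (A[None, :] - S[:, None]) ** 2
P = np.random.dirichlet(np.ones(n_states), size=n_actions)

def evaluate(policy):
    """Policy evaluation: solve V = c(s, pi(s)) + beta * E[V | pi(s)]."""
    c_pi = cost[np.arange(n_states), policy]
    P_pi = P[policy]                    # transition matrix under the policy
    return np.linalg.solve(np.eye(n_states) - beta * P_pi, c_pi)

def improve_monotone(V):
    """Improvement restricted to nondecreasing policies: at state s_i the
    minimization starts at the action chosen for s_{i-1}."""
    policy, lo = np.empty(n_states, dtype=int), 0
    for i in range(n_states):
        q = cost[i, lo:] + beta * P[lo:] @ V   # Q-values on the restricted set
        lo = lo + int(np.argmin(q))
        policy[i] = lo
    return policy

policy = np.zeros(n_states, dtype=int)  # initial (trivially nondecreasing) policy
for _ in range(100):
    V = evaluate(policy)
    new_policy = improve_monotone(V)
    if np.array_equal(new_policy, policy):
        break                           # policy is stable: stop
    policy = new_policy
\end{verbatim}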
In this paper, conditions proposed by Flores-Hernández and Montes-de-Oca \cite{Flores}, which yield monotone minimizers of unbounded optimization problems on Euclidean spaces, are adapted to study noncooperative games on Euclidean spaces with noncompact sets of feasible joint strategies, in order to obtain increasing optimal best responses for each player. Moreover, in this noncompact framework, an algorithm to approximate the equilibrium points of such noncooperative games is supplied.
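As a rough illustration of the approximation idea, the following Python sketch iterates best responses for a two-player game on discretized strategy grids until they stabilize. The strategy grids and the cost functions of the two players are hypothetical examples, not the paper's game; the sketch only exhibits the fixed-point structure of a best-response scheme.
\begin{verbatim}
# Hedged sketch of best-response iteration for approximating an
# equilibrium point of a two-player noncooperative game.  The grids
# and the cost functions c1, c2 are hypothetical examples.
import numpy as np

X = np.linspace(0.0, 2.0, 200)   # player 1's discretized strategy set
Y = np.linspace(0.0, 2.0, 200)   # player 2's discretized strategy set

def c1(x, y):                    # cost of player 1 at joint strategy (x, y)
    return (x - 0.5 * y - 0.3) ** 2

def c2(x, y):                    # cost of player 2 at joint strategy (x, y)
    return (y - 0.5 * x - 0.3) ** 2

def br1(y):                      # best response of player 1 to y
    return X[np.argmin(c1(X, y))]

def br2(x):                      # best response of player 2 to x
    return Y[np.argmin(c2(x, Y))]

x, y = X[0], Y[0]
for _ in range(1000):
    x_new, y_new = br1(y), br2(x)
    if abs(x_new - x) < 1e-9 and abs(y_new - y) < 1e-9:
        break                    # best responses stabilized
    x, y = x_new, y_new
# (x, y) is an approximate equilibrium on the discretized grids:
# each strategy is a best response to the other.
\end{verbatim}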