In this paper, we study the problem of finding deterministic (also known as feedback or closed-loop) Markov Nash equilibria for a class of discrete-time stochastic games. To establish our results, we develop a potential game approach based on the dynamic programming technique. The potential stochastic games identified have Borel state and action spaces and possibly unbounded, nondifferentiable cost-per-stage functions. In particular, team (or coordination) stochastic games and stochastic games with an action-independent transition law are covered.
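For reference, one standard way to express the potential property in this cost setting is the following sketch (the symbols V_i and Φ are our own illustrative notation, not the paper's exact definitions):

\[
V_i(x,\pi_i,\pi_{-i}) - V_i(x,\pi_i',\pi_{-i}) \;=\; \Phi(x,\pi_i,\pi_{-i}) - \Phi(x,\pi_i',\pi_{-i}),
\]

for every player i, every state x, every pair of strategies \pi_i, \pi_i' of player i, and every strategy profile \pi_{-i} of the remaining players, where V_i denotes player i's total cost and Φ is the potential function. Under such a condition, a strategy profile minimizing Φ, which dynamic programming can in principle compute, yields a Nash equilibrium of the original game.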
The main objective of this paper is to find structural conditions under which a two-player stochastic game with total reward functions has an ϵ-equilibrium. To reach this goal, results from Markov decision processes are used to find ϵ-optimal strategies for each player; the best-response correspondence, together with a more general version of Kakutani's Fixed Point Theorem, is then used to obtain the ϵ-equilibrium. Moreover, two examples illustrating the theory developed are presented.
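As a point of reference (the reward notation R_1, R_2 below is ours, introduced only to sketch the idea), a pair of strategies (π₁*, π₂*) is an ϵ-equilibrium for total rewards if, for all admissible strategies π₁ and π₂,

\[
R_1(\pi_1,\pi_2^{*}) \le R_1(\pi_1^{*},\pi_2^{*}) + \epsilon
\quad\text{and}\quad
R_2(\pi_1^{*},\pi_2) \le R_2(\pi_1^{*},\pi_2^{*}) + \epsilon,
\]

that is, neither player can improve their total reward by more than ϵ through a unilateral deviation.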