We consider the optimal stopping problem for a discrete-time Markov process on a Borel state space $X$. It is supposed that an unknown transition probability $p(\cdot|x)$, $x \in X$, is approximated by a known transition probability $\tilde{p}(\cdot|x)$, $x \in X$, and that the stopping rule $\tilde{\tau}^*$, optimal for $\tilde{p}$, is applied to the process governed by $p$. We derive an upper bound for the difference between the total expected cost resulting from applying $\tilde{\tau}^*$ and the minimal total expected cost. The bound given is a constant times $\sup_{x \in X} \|p(\cdot|x) - \tilde{p}(\cdot|x)\|$, where $\|\cdot\|$ is the total variation norm.
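The stability estimate described above can be stated schematically as follows; here $V(x)$ denotes the minimal total expected cost starting from $x$, $V_{\tilde{\tau}^*}(x)$ the total expected cost when the rule $\tilde{\tau}^*$ is applied to the process governed by $p$, and $C$ the constant mentioned in the abstract (this notation is ours, introduced for illustration, not taken from the paper):
\[
0 \;\le\; \sup_{x \in X} \bigl( V_{\tilde{\tau}^*}(x) - V(x) \bigr)
\;\le\; C \, \sup_{x \in X} \bigl\| p(\cdot|x) - \tilde{p}(\cdot|x) \bigr\|,
\]
where $\|\mu - \nu\| = 2 \sup_{B} |\mu(B) - \nu(B)|$ is the total variation norm (the supremum taken over Borel subsets $B \subseteq X$). The left inequality holds because $V$ is the minimal cost, so any feasible stopping rule, in particular $\tilde{\tau}^*$, can only do worse.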