In this article we propose a method of parameter estimation for the class of discrete stable laws. Discrete stable distributions form a discrete analogue of the classical stable distributions and share many interesting properties with them, such as heavy tails and skewness. Like stable laws, discrete stable distributions are defined through their characteristic function and do not possess a probability mass function in closed form. This rules out classical estimation methods such as maximum likelihood, and a different approach has to be applied. Our starting point is the H-method of maximum likelihood suggested by Kagan (1976), in which the likelihood function is replaced by a function called the informant, an approximation of the likelihood function in some Hilbert space. The method requires only certain functionals of the distribution, such as the probability generating function or the characteristic function. We adapt this method to the case of discrete stable distributions and demonstrate its performance in a simulation study.
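As a minimal illustration of estimating from the probability generating function (not the H-method of the paper), the sketch below assumes the standard Steutel-van Harn parametrization with pgf G(z) = exp(-lambda*(1-z)^gamma), gamma in (0,1). It simulates discrete stable variates through the known Poisson mixture over a positive stable variable (drawn by Kanter's formula) and recovers (gamma, lambda) by matching the empirical pgf at two points; all function names and the two-point matching scheme are our own illustrative choices.

```python
import numpy as np


def rdiscrete_stable(gamma, lam, size, rng):
    """Sample from the discrete stable law with pgf exp(-lam*(1-z)**gamma).

    Uses the mixture representation N | S ~ Poisson(lam**(1/gamma) * S),
    where S is positive strictly stable with Laplace transform exp(-s**gamma),
    simulated by Kanter's formula (valid for 0 < gamma < 1).
    """
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    s = (np.sin(gamma * u) / np.sin(u) ** (1.0 / gamma)
         * (np.sin((1.0 - gamma) * u) / e) ** ((1.0 - gamma) / gamma))
    return rng.poisson(lam ** (1.0 / gamma) * s)


def estimate_from_pgf(sample, z1=0.3, z2=0.7):
    """Moment-type estimate of (gamma, lam) from the empirical pgf.

    Since -log G(z) = lam*(1-z)**gamma, the map z -> log(-log G(z)) is
    linear in log(1-z); matching the empirical pgf at two points z1, z2
    in (0,1) therefore identifies both parameters.
    """
    y1 = np.log(-np.log(np.mean(z1 ** sample)))
    y2 = np.log(-np.log(np.mean(z2 ** sample)))
    gamma_hat = (y1 - y2) / (np.log(1 - z1) - np.log(1 - z2))
    lam_hat = np.exp(y1 - gamma_hat * np.log(1 - z1))
    return gamma_hat, lam_hat


rng = np.random.default_rng(0)
sample = rdiscrete_stable(gamma=0.7, lam=2.0, size=50_000, rng=rng)
print(estimate_from_pgf(sample))  # should be close to (0.7, 2.0)
```

The empirical pgf is a smooth, bounded functional of the data, which is why pgf-based estimators remain usable despite the heavy tails and the missing closed-form probability mass function.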
This paper deals with continuous-time Markov decision processes with unbounded transition rates under the strong average cost criterion. The state and action spaces are Borel spaces, and the costs are allowed to be unbounded from above and from below. Under mild conditions, we first prove that the finite-horizon optimal value function is a solution to the optimality equation in the case of uncountable state spaces and unbounded transition rates, and that there exists an optimal deterministic Markov policy. Then, using two average optimality inequalities, we show that the set of all strong average optimal policies coincides with the set of all average optimal policies, and thus obtain the existence of strong average optimal policies. Furthermore, employing the technique of skeleton chains of controlled continuous-time Markov chains and the Chapman-Kolmogorov equation, we give a new set of sufficient conditions on the primitive data of the model for verifying the uniform exponential ergodicity of continuous-time Markov chains governed by stationary policies. Finally, we illustrate our main results with an example.
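As a minimal finite-state sketch of the average cost criterion (the paper itself treats Borel state spaces and unbounded transition rates, which a finite toy cannot capture), the snippet below evaluates the long-run average cost of a stationary policy by solving mu Q = 0 for the invariant distribution of the induced generator, and compares two deterministic policies in a small controlled queue. The generator, cost rates, and queueing example are our own illustrative assumptions, not the example of the paper.

```python
import numpy as np


def average_cost(Q, c):
    """Long-run average cost mu @ c of a stationary policy.

    Q : generator matrix induced by the policy (rows sum to zero, irreducible).
    c : vector of cost rates c(i, f(i)) under the policy.
    Solves mu @ Q = 0 subject to sum(mu) = 1.
    """
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])  # drop one redundant balance row
    b = np.zeros(n)
    b[-1] = 1.0                             # normalisation sum(mu) = 1
    return np.linalg.solve(A, b) @ c


def generator(a, n=4, arrival=1.0):
    """Birth-death generator: arrivals at a fixed rate, service at rate a."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i < n - 1:
            Q[i, i + 1] = arrival  # arrival (birth)
        if i > 0:
            Q[i, i - 1] = a        # service completion (death)
        Q[i, i] = -Q[i].sum()      # diagonal makes each row sum to zero
    return Q


# Toy controlled queue: states 0..3; the action is the service rate
# (slow a=1 or fast a=3); the cost rate is holding cost i plus effort
# cost 0.5*a.  Compare the two stationary deterministic policies.
for a in (1.0, 3.0):
    c = np.array([i + 0.5 * a for i in range(4)])
    print(f"service rate {a}: average cost = {average_cost(generator(a), c):.3f}")
```

In this toy the fast-service policy attains the smaller average cost; the point of the sketch is only the policy-evaluation step mu Q = 0 that underlies the average criterion, not the existence and ergodicity arguments that the paper develops for the general Borel-space model.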