This paper presents a new model for computing optimal randomized security policies in non-cooperative Stackelberg Security Games (SSGs) with multiple players. Our framework rests upon the extraproximal method and its extension to Markov chains, within which we explicitly compute the unique Stackelberg/Nash equilibrium of the game by employing the Lagrange method and introducing Tikhonov regularization. We also consider a game-theoretic realization of the problem in which defenders and attackers perform a discrete-time random walk over a finite state space. Following the Kullback-Leibler divergence, the players' actions are fixed and the next-state distribution is then computed. The player's goal at each time step is to specify the probability distribution for the next state. We present an explicit construction of a computationally efficient strategy under mild conditions on the defenders and attackers, and demonstrate the performance of the proposed method on a simulated target-tracking problem.
In two subsequent parts, Part I and Part II, monotonicity and comparison results are studied, as a generalization of the pure stochastic case, for arbitrary dynamic systems governed by nonnegative matrices. Part I covers the discrete-time case and Part II the continuous-time case. The research was initially motivated by a reliability application contained in Part II. In the present Part I it is shown that monotonicity and comparison results, as known for Markov chains, carry over rather smoothly to the general nonnegative case for marginal, total and average reward structures. These results, though straightforward, are not only of theoretical interest in themselves, but also essential for the more practical continuous-time case in Part II (see \cite{DijkSl2}). An instructive discrete-time random walk example is included.
This Part II, which follows Part I for the discrete-time case (see \cite{DijkSl1}), deals with monotonicity and comparison results, as a generalization of the pure stochastic case, for stochastic dynamic systems with arbitrary nonnegative generators in continuous time. In contrast with the discrete-time case, the generalization is no longer straightforward. A discrete-time transformation is therefore developed first; results from Part I can then be adopted. The conditions, the technicalities and the results are studied in detail for the reliability application that initiated the research: a reliability network with dependent components that can break down. A secure analytic performance bound is obtained.
We propose a unified approach for the analysis of finite discrete-time Markov chains. We show some possibilities for computing stationary probability vectors even in the case of a reducible transition matrix. Finally, we show one method for computing mean first passage time matrices.
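For the irreducible case, the two quantities mentioned in the abstract can be computed by standard linear algebra; the following is a minimal NumPy sketch (not the paper's method, which also handles reducible transition matrices), using the Kemeny-Snell fundamental-matrix approach. The example transition matrix is our own illustration.

```python
import numpy as np

# Example 3-state irreducible transition matrix (rows sum to 1); illustrative only.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

# Stationary vector pi solves pi P = pi with sum(pi) = 1: replace one
# balance equation by the normalization constraint and solve the system.
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Fundamental matrix Z = (I - P + 1 pi^T)^{-1}; the mean first passage
# times are m_ij = (z_jj - z_ij) / pi_j for i != j (Kemeny-Snell).
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
M = (np.diag(Z)[None, :] - Z) / pi[None, :]
np.fill_diagonal(M, 0.0)  # convention m_ii = 0 (mean return time would be 1/pi_i)

print(pi)  # stationary probability vector
print(M)   # mean first passage time matrix
```

As a sanity check, the computed matrix satisfies the first-step recurrence m_ij = 1 + Σ_k p_ik m_kj for i ≠ j (with m_jj = 0).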
As concerns extreme value modeling, heavy-tailed autoregressive processes defined with the minimum or maximum operator have proved to be good alternatives to classical linear ARMA models with heavy-tailed marginals (Davis and Resnick [8], Ferreira and Canto e Castro [13]). In this paper we present a complete characterization of the tail behavior of the autoregressive Pareto process known as the Yeh-Arnold-Robertson Pareto(III) process (Yeh et al. [32]). We shall see that it is quite similar to the first-order max-autoregressive ARMAX process, but has a more robust parameter estimation procedure, and is therefore more attractive for modeling purposes. Consistency and asymptotic normality of the presented estimators are also established.