Probabilistic reliability models of information and control systems

The paper is devoted to the problem of reliability prediction and to modeling the functioning process of information and control systems (ICS) described by Markov random processes. The methodology of the study consists in considering an ICS as a Markov system. Modeling the functioning process of an information and control system increases the authenticity and reliability of the information it provides and improves the efficiency of its work. As an example, a mathematical model of the functioning of the computer system of a dispatch service is constructed in the paper. The mathematical modeling was conducted in the Mathcad environment. It allows determining the average number of steps before the system moves into the absorbing state. Taking into account the length of the system's stay in each state allows determining the average lifetime of the system until complete failure. The developed model and technique for calculating the characteristics of stochastic systems make it possible to investigate a wide class of such systems and to determine optimal maintenance intervals in order to ensure reliability and reduce downtime losses.


Introduction
The modern stage of development of science and technology is characterized by the complication and automation of the control of complex technical systems and objects. Information and control systems (ICS) are used to control modern sophisticated technical systems. An ICS is intended to analyze the internal state of the controlled system or object and its environment. Based on such analysis, the ICS makes a decision about the necessity of changing the system's working parameters or its operating mode. Modern information and control systems are extremely complex. Their operation requires highly qualified staff, able to correctly analyze and evaluate the authenticity, reliability, and accuracy of the information provided by the ICS, as shown in Fedorov, Mihaylov & Suhih (1994) and Vorobev (1988). The ICS is a connecting link between the controlled system (object) and the staff. The staff either initiates control commands in manual or semi-automatic mode or supervises the control in automatic operation. Examples of modern ICSs are the on-board information and control systems of modern aircraft, smart cars, smart homes and other smart objects. The main structural element of any ICS is a computing system that generates information and control signals and performs calculations. There are two approaches to improving an ICS, independent of the type of controlled system: the formulation of new principles (laws) of control, and the development of new methods and tools to support decision-making based on the information from the ICS. One approach to improving the operational efficiency of an ICS, as well as increasing the reliability and veracity of its information, is prediction of the state of the ICS and of the controlled object in general. The purpose of this paper is to model the operation process of an ICS described by a Markov random process, as well as to develop the algorithm and its computer implementation in Mathcad.

Literature review
The issue of forecasting the state of a system in order to increase its reliability has been considered by various authors, both in general reliability theory and in the reliability theory of specific systems. Ostreykovskiy (2003) discusses the maintenance of renewable systems in order to increase their reliability during operation. Derzho (2007) proposes using analytical models for the quantification of maintenance periodicity; this allows determining optimal prophylaxis periods based on the criteria of restored-element reliability, the availability function and the functional of technical use. Polovko (2008) considers the use of the universal Weibull-Gnedenko distribution to determine the time to system failure, taking into account the stages of burn-in, normal operation and aging of the system. Finaev, Pavlenko & Zargaryan (2007) and Belyaev, Bogatyirev et al. (1985) consider analytic-statistical simulation of the system. Furthermore, Finaev, Pavlenko & Zargaryan (2007) consider the use of an embedded Markov chain to determine the probability that the system is in a certain state at the current stage of the process. Markov random processes are the most widely used tool for studying the transitions of a system from one state to another; they are considered in Tihonov & Mironov (1977). Their peculiarity is that the probability of any future state of the system depends only on the present state and does not depend on past states or on how the system reached the present state. Turkeltaub (1966) considers the maximum number of required tests. Therefore, the question of probabilistic reliability models of information and control systems has not yet been reviewed sufficiently.

Research methods
ICSs are characterized by the fact that at any given time they reside in one of a set of possible states. The number of system states depends on the number of system elements and on the number of possible states of each of them. To evaluate the reliability of such systems, a probabilistic model described by a Markov process (the Markov chain) can be used, as shown in Barlou & Proshan (1969) and Cherkesov (2005). In a Markov chain, the considered system is divided into separate states, and at any specific time the system is in one of these states. The probability of the system's transition from one state to another depends only on the current state of the system and does not depend on how the system reached that state.

ISSN 2520-2979
Journal of Sustainable Development of Transport and Logistics, 3(1), 2018

To describe any system, an initial probability vector q = (q_1, ..., q_n) must be given, where the element q_i is the probability that the current state of the system is s_i; this state can be known or determined by some probabilistic rule.
In order to describe the behavior of the system with the aid of Markov processes, it is necessary to:
- introduce the concept of the state of the technical system;
- list all states in which the system is able to be;
- draw a state graph, that is, specify all possible direct transitions of the system from one state to another;
- indicate the state the system occupies at the initial moment of time, or set the distribution of initial states;
- set the one-step transition probability matrix of the Markov chain from state i to state j.
According to the formula of total probability, the transition probability of the system in n steps is

p_ij(n) = Σ_k p_ik(n-1) p_kj.   (1)

This formula reflects the fact that in order to pass from state i to state j in n steps, the system must first pass from state i to some state k in (n-1) steps and then pass from state k to state j in one step. In matrix form, formula (1) is written as

P(n) = P(n-1) P = P^n.

The Markov chain is completely specified by the transition probability matrix P in combination with the initial state vector q.
If the initial probability distribution q and the transition probabilities are known, then the state probabilities at the n-th step are unambiguously calculated by the formula

q(n) = q P^n.
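The n-step state probabilities can be checked numerically. The following is a minimal sketch in Python with NumPy (the paper itself uses Mathcad), using a hypothetical 3-state transition matrix rather than the matrix of the system studied below:

```python
import numpy as np

# Hypothetical 3-state transition matrix (each row sums to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.0, 0.3, 0.7],
])

# Initial probability vector q = (q_1, ..., q_n): start in state 1.
q = np.array([1.0, 0.0, 0.0])

# State probabilities after n steps: q(n) = q P^n.
n = 10
q_n = q @ np.linalg.matrix_power(P, n)

print(q_n)        # distribution over the 3 states after 10 steps
print(q_n.sum())  # stays equal to 1, since P is stochastic
```

Repeated one-step multiplication (q, qP, qP^2, ...) gives the same result, which is exactly the recursion behind formula (1).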


In the theory of Markov processes, the following fact is established: for a finite homogeneous Markov chain there always exist limit probabilities π_j = lim_{n→∞} p_ij(n) that do not depend on the initial probability distribution. This probability distribution is called stationary. In this case, the values π_j are unambiguously determined by the equalities

π_j = Σ_i π_i p_ij,   Σ_j π_j = 1.

In matrix form, this formula is written as

π (P - I) = 0,

where P is the transition matrix and I is the identity matrix. These quantities can also be determined through the transition probabilities of the matrix R. In the general case, the transition matrix P of an absorbing chain can be represented in the canonical form

P = | I  0 |
    | R  Q |

where I is the identity matrix, 0 is the zero matrix, R is the matrix of transition probabilities from the non-absorbing states into the absorbing ones, and Q is the matrix of transitions among the non-absorbing states.
The fundamental matrix of the absorbing chain is the matrix

S = (I - Q)^(-1).

For the absorbing Markov chain, the mean time (number of transitions) spent in a non-absorbing state s_j, on the condition that the initial state s_i is non-absorbing, is equal to the element S_ij of the fundamental matrix S. The average number of steps before absorption, on the condition that at the initial moment the process was in a non-absorbing state s_i, is equal to the sum of the elements of the i-th row of the matrix S.
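These quantities are easy to illustrate numerically. The sketch below (Python/NumPy, hypothetical absorbing chain with two transient states and one absorbing state, not the dispatch-service matrix itself) computes S = (I - Q)^(-1) and the mean number of steps before absorption:

```python
import numpy as np

# Hypothetical absorbing chain: states 0 and 1 are transient, state 2 absorbs.
# Q holds transitions among transient states, R transitions into absorption.
Q = np.array([
    [0.5, 0.3],
    [0.2, 0.6],
])
R = np.array([
    [0.2],
    [0.2],
])

# Fundamental matrix S = (I - Q)^(-1):
# S[i, j] = mean number of visits to transient state j starting from i.
S = np.linalg.inv(np.eye(2) - Q)

# Mean number of steps before absorption from each transient state
# = row sums of S.
steps = S.sum(axis=1)

# Absorption probabilities B = S R (trivially 1 with a single absorbing state).
B = S @ R

print(S)
print(steps)
```

For this particular Q the expected time to absorption happens to be 5 steps from either transient state; with the paper's 5-state matrix the same two lines produce the matrix m and its row sums discussed in the results section.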
The state space of a system consisting of n elements can be represented as a graph, in which each i-th vertex (i = 1, ..., n) corresponds to the i-th state, and each arc from the i-th vertex to the j-th is characterized by the transition probability p_ij.

Research results
To illustrate how the state space and the transition diagram can be constructed for an ICS, consider a mathematical model of the functioning of the computer system of a bus fleet's dispatch service.
The main element of the system is a computing center, which consists of two computers connected in parallel. One of the computers runs constantly and is used directly to solve problems, while the other is on stand-by or undergoing preventive maintenance. Preventive maintenance of a computer is carried out after t_0 hours of operation. The system is constructed in such a way that if both computers pass into the state of preventive maintenance, its work stops.
For convenience, we denote one of the computers with the symbol A and the second one with the symbol B. Then each possible state of the system can be marked by indicating which computer is the main one, which is in reserve, and which is in a state of preventive maintenance.
Let us denote by t_0 the planned period before preventive maintenance and by τ the duration of the maintenance. Assume that the operating time of the computers before a maintenance checkup and the duration of the checkup obey exponential laws with parameters λ and μ, respectively. Preventive maintenance of the system should be organized in such a way that it detects those failures that cannot be detected during the functioning of the computers. Fig. 1 represents a diagram of possible system transitions in the state space. There are 5 possible states in the system, numbered from 1 to 5. As vertices 1-5 we take all possible states of the system and draw an arrow from state i to state j whenever the transition probability p_ij is non-zero. Transitions of the system from one working state to another occur along the perimeter of the graph, as shown in the figure. If the main computer requires preventive maintenance at the moment when the second computer is also in a state of prevention, the system falls into the state of failure. In the considered system, the unfavorable state is state 5.

Figure 1: The space of states of the system with two elements
The considered process is represented by a continuous Markov chain. The probabilities of its transitions are determined by the distribution of the minimum of two random variables: the operating time of the system before a maintenance checkup and the duration of the maintenance checkup.
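The role of the minimum of two exponential variables can be checked numerically. In the sketch below (Python/NumPy, with hypothetical rates lam and mu that are illustrative only, not the paper's data), the probability that the operating-time clock fires before the maintenance clock is lam / (lam + mu), and the minimum itself is again exponential with rate lam + mu; transition probabilities of such chains are built from these facts:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 1.0 / 24.0, 1.0 / 4.0   # hypothetical rates, not the paper's values

n = 200_000
t_work = rng.exponential(1.0 / lam, n)  # time before maintenance is needed
t_serv = rng.exponential(1.0 / mu, n)   # duration of a maintenance checkup

# P(operating-time clock fires first) for independent exponentials:
p_emp = float(np.mean(t_work < t_serv))
p_theory = lam / (lam + mu)
print(p_emp, p_theory)

# The minimum of the two times is exponential with rate lam + mu.
m_emp = float(np.minimum(t_work, t_serv).mean())
print(m_emp, 1.0 / (lam + mu))
```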
The matrix of transitional probabilities in explicit form is presented in Table 1.
Table 1: System transition probabilities

The numerical values of the transition probabilities are contained in the matrix p, which is calculated by the algorithm constructed in Mathcad. In particular, for the given input data of the system, the probability of its stay in state 1 is obtained from this matrix.

Discussion of the results
Let us define the characteristics of the system operation process that describe its reliability: the average number of system transitions from state i to state j; the average number of system transitions into state j from state i before the system enters the critical state; the stationary distribution of the probabilities of the system states; and the number of steps required by the system to reach state j for the first time, on condition that it starts from state i.
Input information: the operating time of the system before transition into the state of maintenance checkup is t_0 = 24 hours; the duration of the maintenance checkup is τ; the transition probabilities are defined in the matrix p.
Solution algorithm in Mathcad.

Input data
The numeric values of the transition probabilities are given in the matrix P. The matrix P is stochastic, since the sum of the elements of each of its rows is equal to 1.
1) Let us find the average number of system transitions into state j from state i before the system enters the critical state, state 5.

The elements of the matrix m determine the average number of system transitions into state j until the system reaches the critical state 5. From the matrix m it is seen that the system visits state 1 about 2.3 times, state 2 about 17.6 times, state 3 once, and state 4 about 8 times before it passes into the state of complete maintenance checkup, state 5.
The average number of steps before the system's transition to the absorbing state 5, provided that the initial state is non-absorbing, is equal to the sum of the elements of the rows of the matrix m.

2) Let us find the stationary distribution π of the system state probabilities, setting the vector q of right-hand sides of the corresponding equation. The obtained stationary distribution shows that the probabilities π_i characterize the relative duration of the system's stay in the corresponding states: the largest part of the time the system stays in state 5, the smallest in states 1 and 3.
3) The mean number of system transitions into state j between two successive transitions into state i is defined as the ratio of the probabilities π_j and π_i.

4) It is also of interest to determine the number of steps required by the system to reach state j for the first time, on condition that it starts from state i. Denote the mean of this random variable by M_ij, the mean time before the first transition of the system from state i into state j.
The values M_ij can be determined from the following system of equations:

M_ij = 1 + Σ_{k≠j} p_ik M_kj.
The diagonal elements of the matrix M can be found using the stationary probability distribution of the system states: M_ii = 1/π_i. Writing the coefficient matrix of the system A1 and the vector of right-hand sides B, and using the inverse-matrix method, we evaluate M_ij in the form of a vector; continuing this process for j = 2, ..., n, we define the required matrix M. The solution for the matrix M can also be obtained by applying the fundamental matrix for ergodic Markov chains.
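The fundamental-matrix route for ergodic chains can be sketched as follows (Python/NumPy, hypothetical 3-state matrix, not the paper's system). With Z = (I - P + W)^(-1), where every row of W equals the stationary vector π, the mean first-passage times are M_ij = (Z_jj - Z_ij)/π_j for i ≠ j, and the mean recurrence times are M_ii = 1/π_i:

```python
import numpy as np

# Hypothetical ergodic 3-state chain.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])
n = P.shape[0]

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

# Fundamental matrix of the ergodic chain: Z = (I - P + W)^(-1),
# where each row of W is the stationary vector pi.
W = np.tile(pi, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)

# Mean first-passage times M[i, j]; diagonal = mean recurrence times 1/pi_i.
M = np.empty((n, n))
for i in range(n):
    for j in range(n):
        M[i, j] = (1.0 / pi[i] if i == j
                   else (Z[j, j] - Z[i, j]) / pi[j])
print(M)
```

The resulting M satisfies the first-passage equations M_ij = 1 + Σ_{k≠j} p_ik M_kj, so the two routes (solving the linear system directly and using the fundamental matrix) agree.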

For reliability research of large systems, two classes of Markov chains are considered: absorbing chains and ergodic chains. Absorbing chains are chains that have absorbing states; ergodic chains are chains that contain a single closed set of communicating states. When investigating absorbing Markov chains, the following values are of interest:
1. The probability of transition to the absorbing state s_j, on the condition that the process began in a non-absorbing state s_i.
2. The mean residence time of the process in the non-absorbing state s_i until its transition to a certain absorbing state s_j.
3. The mean number of steps before the process's transition to a certain absorbing state s_j, on the condition that the initial state s_i is non-absorbing.
For example, if the system is in state 1, this means that computer A is used as the main one (A_a) while computer B is in reserve (B_r). If during the time t_0, measured from the moment of entering state 1, computer A passes into the state of preventive maintenance (A_p), the system passes into state 2: the first computer transitions to the state of preventive maintenance, and the second from the backup state to the state of the main computer. The probability of such a transition is given by the corresponding element of the matrix p. According to the classification of Markov chain states, state 5 is an absorbing state. By eliminating the 5th row and the 5th column of the matrix I - p, which correspond to state 5, where I is the identity matrix, we find the fundamental matrix, together with the diagonal matrix whose diagonal elements equal π_i and all other elements equal 0; for ergodic Markov chains, M is determined by the corresponding fundamental-matrix formula.