1 Introduction

Technical and technological systems that can occupy multiple possible states are known as multi-state systems (MSS). Any system that is allowed to assume a finite number of performance rates can be modeled by means of a multi-state system.

Such modeling approaches, which are more realistic and provide more accurate representations of engineering systems, are much more complex and present major difficulties in system definition and performance evaluation. MSS reliability has received a substantial amount of attention over the past four decades, with the basic concepts introduced in the 1970s by [1,2,3,4]. Extensions and generalizations can be found in [5,6,7]. Essential achievements attained up to the mid-1980s are reflected in [8, 9], where one can find the state of the art in the field of MSS reliability. For theoretical advances and significant applications of MSS reliability theory in recent years, the reader is referred to [10,11,12]. For general references on continuous-time semi-Markov systems and associated reliability topics, see [13,14,15,16,17].

Consider a process defined on a probability space \((\varOmega , \mathscr {F}, \mathbb P)\) with state space \(E=\{1,2,\ldots ,N\}\). For example, state “N” is associated with nominal performance of the system and state “1” with total failure. Markov processes are typical tools for modeling such a system. In this work we focus on multi-state systems that we model by means of semi-Markov processes, which generalize typical Markov jump processes by allowing general distributions for sojourn times [13]. For this reason, semi-Markov processes are better suited to reliability studies (and to applications in general).

The sojourn times in a given state are assumed to belong to a general class of distributions, cf. Relation (2). The interest of this distribution class is twofold. First, the class is closed under extrema (cf. [18]); second, it unifies under a single umbrella not only some discrete distributions but also, and more importantly, several typical reliability distributions such as the exponential, Weibull, Rayleigh and Pareto distributions.

In this chapter we consider a special case of the semi-Markov system introduced in [19]. In that article, the system under study depended on some parameters \(a_{ij},\) with i, j belonging to the state space E. In the present work, the dependence of the parameters \(a_{ij}\) on the states i and j is made explicit through a function g(i, j). We also provide several examples of such a function g that could be of interest in different modeling situations, according to the application under study. It is important to indicate that time- and state-varying parameters have become quite popular, since more and more systems are subject to dynamic changes. Note that the problem of time- and state-varying parameters has received increased attention in recent years (see, e.g., [20,21,22,23]) because of an ever-growing body of evidence that typical assumptions of stable parameters often appear invalid.

The chapter is organized as follows. Some preliminaries regarding semi-Markov processes are presented in Sect. 2; here we also describe the special case of the semi-Markov system introduced in [19], developed in a multi-state system framework. In Sect. 3 we describe the class of distributions considered in this work and we propose a special parametrization of these distributions. In Sect. 4 the likelihood function and the associated maximum likelihood estimators of the parameters of interest are provided.

2 A Special Case of Semi-Markov Multi-state Systems

As previously mentioned, we assume that the random system has finite state space \(E=\{1,\ldots , N\}\), \(N < \infty \) and its time evolution is governed by a stochastic process \(Z=(Z_t)_{t\in \mathbb {R}_+}.\) Let us denote by \(S=(S_n)_{n\in \mathbb N}\) the successive time points when state changes in \((Z_t)_{t\in \mathbb {R}_+} \) occur and by \(J=(J_n)_{n\in \mathbb N}\) the successive visited states at these time points. Set also \(X=(X_n)_{n\in \mathbb N}\) for the successive sojourn times in the visited states. Thus, \(X_n=S_n-S_{n-1},\) \(n \in \mathbb N^*,\) and, by convention, we set \(X_0=S_0=0.\)

Let us first recall the definition of a Markov renewal and semi-Markov process (cf. [13]). If \(( J, S) = (J_n, S_n)_{n \in \mathbb N} \) satisfies the relation

$$\begin{aligned}&\mathbb P (J_{n+1}=j, S_{n+1}-S_n \le t | J_0, \ldots , J_n; S_1, \ldots , S_n)\\&= \mathbb P (J_{n+1}=j, S_{n+1}-S_n \le t |J_n),\,j\in E, t\in \mathbb {R}_+, \end{aligned}$$

then

  • (J, S) is called a Markov renewal process (MRP);

  • \(Z=(Z_t)_{t\in \mathbb {R}_+} \) is called a semi-Markov process (SMP) associated to (J, S),  where

    $$ Z_t :=J_{N(t)} \quad \Leftrightarrow \quad J_{n}=Z_{S_n}, $$

    with

    $$\begin{aligned} N(t):= \max \{n \in \mathbb {N} \mid S_n \le t \}, \; t \in \mathbb {R}_+, \end{aligned}$$
    (1)

    the counting process of the number of jumps in the time interval (0, t]. Thus, \(Z_t\) gives the state of the system at time t.

If \((J_n, S_n)_{n \in \mathbb N} \) is a MRP, it can be immediately checked that \((J_n)_{n \in \mathbb N}\) is a Markov chain, called the embedded Markov chain.
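
To make the counting process (1) and the relation \(Z_t = J_{N(t)}\) concrete, here is a minimal Python sketch (function and variable names are hypothetical) that recovers the state of the system at an arbitrary time t from a recorded sequence of visited states and jump times.

```python
import bisect

def state_at(t, jump_times, states):
    """Return Z_t = J_{N(t)}, given the jump times (S_n) and the visited states (J_n) of one trajectory.

    `jump_times` is [S_0 = 0, S_1, ..., S_n] and `states` is [J_0, J_1, ..., J_n];
    N(t) = max{n : S_n <= t}, i.e. the index of the right-most jump time not exceeding t.
    """
    n_t = bisect.bisect_right(jump_times, t) - 1   # N(t)
    return states[n_t]

# toy trajectory: states J = (3, 2, 3, 1) visited at times S = (0, 1.4, 2.9, 5.0)
print(state_at(2.0, [0.0, 1.4, 2.9, 5.0], [3, 2, 3, 1]))   # -> 2, the state occupied on [1.4, 2.9)
```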

All along this work we assume that the SMP (or equivalently, the MRP) is regular, irreducible and positive-recurrent (see, e.g., [13, 24, 25] for more details on these notions).

An SM model is characterized by its initial distribution \(\alpha =(\alpha _1, \ldots , \alpha _N)\), where

$$\begin{aligned} \alpha _{j}:=\mathbb {P}(J_0=j), \ j \in E, \end{aligned}$$

and by the semi-Markov kernel

$$\begin{aligned} Q_{ij}(t):=\mathbb {P}(J_{n}=j,X_{n} \le t|J_{n-1}=i). \end{aligned}$$

Let us also introduce the transition probabilities of the embedded Markov chain \((J_n)_{n \in \mathbb N},\)

$$\begin{aligned} p_{ij}:=\mathbb {P}(J_n=j|J_{n-1}=i)=\lim _{t \rightarrow \infty }Q_{ij}(t), \end{aligned}$$

and the conditional sojourn time distribution functions

$$\begin{aligned} W_{ij}(t):= & {} \mathbb {P}(S_{n}-S_{n-1}\le t|J_{n-1}=i, J_n=j)\nonumber \\= & {} \mathbb {P}(X_{n} \le t|J_{n-1}=i, J_n=j). \end{aligned}$$

Observe that

$$\begin{aligned} Q_{ij}(t)=p_{ij}W_{ij}(t). \end{aligned}$$

In the sequel, we consider a special case of the semi-Markov system introduced in [19]. As will be seen in the next section, we adopt a particular parametrization of this system.

Let us assume that we have at our disposal a collection of positive random variables \(T_{ij},\) which can be seen as potential times spent in state i before moving (directly) to state j. We denote by \(F_{ij}(t; \theta _{ij})\) the cumulative distribution function (cdf) of \(T_{ij}\), where \(\theta _{ij}\) is the m-dimensional parameter involved in the underlying distribution. We assume that the distribution of \(T_{ij}\) is absolutely continuous with respect to the Lebesgue measure; an associated density is denoted by \(f_{ij}(t; \theta _{ij}).\)

The dynamic of the system is as follows: the next state to be visited after state i is the one for which \(T_{il}\), \(l \in E\), is minimal. This is the way the next state to be visited, say j,  is “chosen”, namely \(j=\arg \min _{l \in E} T_{il}\). Thus, for our semi-Markov system, the semi-Markov kernel becomes

$$ \begin{aligned} Q_{ij}(t)= & {} \mathbb {P} (\min _{l} \, T_{il} \le t \, \text{ and } \text{ the } \text{ minimum } \text{ occurs } \text{ for } \, j |J_{n-1}=i )\\= & {} \mathbb {P} ( \min _{l} \, T_{il} \le t,{{T}_{ij}}\le {{T}_{il}},\forall l |J_{n-1}=i )\\= & {} \mathbb {P} ( \min _{l} \, T_{il} \le t|J_{n-1}=i, J_{n}=j) \times \mathbb {P} ({{T}_{ij}}\le {{T}_{il}},\forall l |J_{n-1}=i )\\= & {} p_{ij} W_{i}(t), \end{aligned}$$

where

$$\begin{aligned} p_{ij}= & {} \nonumber \mathbb {P}(J_n=j|J_{n-1}=i) = \mathbb {P}(T_{ij} \le T_{il}, \forall l | J_{n-1}=i ) \end{aligned}$$

and

$$\begin{aligned} W_{ij}(t)= & {} \mathbb {P}(S_{n}-S_{n-1}\le t|J_{n-1}=i, J_n=j)\\= & {} \mathbb {P}(\min _{l} \, T_{il} \le t |J_{n-1}=i, J_n=j) \\= & {} \mathbb {P}(\min _{l} \, T_{il} \le t | J_{n-1}=i) =: W_i(t), \text{ independent } \text{ of } j, \end{aligned}$$

which represents the cdf of the sojourn time in state i (regardless of the next state to be visited). Note that

$$\begin{aligned} \sum _{j}Q_{ij}(t)=W_i(t). \end{aligned}$$

Let us assume that \(W_i(t)\) is absolutely continuous w.r.t. the Lebesgue measure and has a density denoted by \(f_i(t)\).
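
The competing-clocks dynamic just described is easy to simulate: in each visited state i one draws the potential times \(T_{il}\), the sojourn time is their minimum and the next state is the index realizing it. The following Python sketch (all names and numerical values are illustrative, with exponential clocks chosen only as an example and no self-competing clock \(T_{ii}\)) generates one trajectory censored at a fixed horizon.

```python
import random

def simulate_sm_path(samplers, initial_state, horizon):
    """Simulate one trajectory of the competing-clocks semi-Markov system described above.

    samplers[i][l] returns a draw of the potential time T_il spent in state i before moving to l.
    The next state after i is the l realizing the minimum of the T_il and the sojourn time is
    that minimum. The last sojourn is right-censored at `horizon`.
    """
    states, jump_times = [initial_state], [0.0]
    t, i = 0.0, initial_state
    while True:
        draws = {l: draw() for l, draw in samplers[i].items()}
        j = min(draws, key=draws.get)          # state realizing the minimal clock
        t += draws[j]                          # sojourn time in state i
        if t > horizon:
            return states, jump_times
        states.append(j)
        jump_times.append(t)
        i = j

# illustrative 3-state example with exponential clocks of rates rates[i][l] (values are arbitrary)
rates = {1: {2: 0.5, 3: 0.2}, 2: {1: 0.3, 3: 0.4}, 3: {1: 0.1, 2: 0.6}}
samplers = {i: {l: (lambda r=r: random.expovariate(r)) for l, r in row.items()} for i, row in rates.items()}
J, S = simulate_sm_path(samplers, initial_state=3, horizon=50.0)
```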

As we will be dealing in the sequel with parametric inference, whenever a quantity of interest depends on a parameter \(\theta \in \varTheta \subset \mathbb {R}^m\), we may write this parameter as an argument. For instance, if \(Q_{ij}(t)\) depends on some parameter \(\theta ,\) we could denote it by \(Q_{ij}(t; \theta )\).

Our intention is to provide estimators of \(p_{ij}\), \(W_i(t)\), and \(Q_{ij}(t)\) under a general class of distributions, with a specific parametrization. This class of distributions and the corresponding parametrization are presented and discussed in the next section.

3 Parametric Specification of the System

The type of distribution considered for the random variables \(T_{ij}\) is first presented in this section. Then, a specific parametrization is considered. More specifically, we consider the case where the distributions \(F_{ij}(\cdot ; \theta _{ij})\), \(i, j = 1, \ldots , N,\) are of the same functional form but with different parameters, i.e., we are focusing on independent but not necessarily identically distributed (inid) random variables. Nonidentically but independently distributed random variables are usually not easy to deal with. However, when they belong to families of random variables closed under maxima or minima, elegant expressions for various statistical characteristics, such as order statistics, become possible. A member of such a class of distribution functions with parameter a is assumed to satisfy the following distributional form

$$\begin{aligned} F(x;a)=1- \left( 1-F(x;1) \right) ^a.\end{aligned}$$
(2)

Let us assume that F(x; a) is absolutely continuous w.r.t. the Lebesgue measure and let us denote its density by f(x; a), namely

$$\begin{aligned} f(x;a)=a\left( 1-F(x;1) \right) ^{a-1}f(x;1). \end{aligned}$$
(3)

The following result states that the minimum order statistic from an inid random sample from the above class has a distribution belonging to the same class.

Lemma 1

(cf. [18]) Let \(X_1, \ldots , X_N\) be inid random variables such that \(X_i \sim F(x;a_i)\), which belongs to class (2). Then the distribution function \(F^{(1)}\) of the minimum order statistic \(X_{(1)}\) also belongs to (2).

It is worth noticing that examples of distributions belonging to class (2) are the geometric distribution, the Pareto distribution, the Weibull distribution with its special cases such as the exponential and the Rayleigh, and the Erlang truncated exponential.
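
For instance, with the exponential baseline \(F(x;1)=1-e^{-x}\), relation (2) gives \(F(x;a)=1-e^{-ax}\), and Lemma 1 then says that the minimum of independent draws with parameters \(a_1, \ldots , a_N\) again follows (2) with parameter \(\sum _i a_i\). The following Python sketch (illustrative values, assuming this exponential baseline) checks the lemma by simulation.

```python
import math
import random

def F(x, a):
    """cdf of class (2) with the exponential baseline F(x;1) = 1 - exp(-x):  F(x;a) = 1 - exp(-a*x)."""
    return 1.0 - math.exp(-a * x)

def sample(a):
    """Draw from F(.; a); with this baseline it is an exponential of rate a."""
    return random.expovariate(a)

a = [0.5, 1.2, 2.0]                                     # inid parameters a_1, a_2, a_3
mins = [min(sample(ai) for ai in a) for _ in range(100_000)]
x0 = 0.7
empirical = sum(m <= x0 for m in mins) / len(mins)      # empirical P(X_(1) <= x0)
print(round(empirical, 3), round(F(x0, sum(a)), 3))     # Lemma 1: the minimum has parameter a_1 + a_2 + a_3
```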

Let us now assume that the random variables \(T_{ij}\) considered in the previous section belong to the class (2), with the corresponding parameters \(a_{ij},\) i.e., the corresponding cumulative distributions \(F(t; a_{ij})\) verify (2). Moreover, we assume that we have a parametrization for \(a_{ij}\) that makes explicit the dependence on the states i and j. To be more specific, let us assume that \(a_{ij}\) has the expression

$$\begin{aligned} a_{ij} := a_{\infty }\left( 1-e^{g(i,j)/e_1} \right) , \end{aligned}$$
(4)

where \(a_{\infty }\) and \(e_1\) are real parameters, while g(i, j) is a known function of the states i and j,  depending on certain parameters. Typical examples of g(i, j) can be obtained by considering

$$\begin{aligned} g(i,j) := c_1 i^{k_1} j^{l_1} + c_2 i^{k_2} j^{l_2}, \end{aligned}$$
(5)

where \(c_m, k_m, l_m, m = 1, 2,\) are real parameters. Examples of such a function g that could be of interest in different modeling situations, according to the application under study, could be the following (a small numerical sketch is given after the list):

$$\begin{aligned} g(i,j)= & {} i + j, \end{aligned}$$
(6)
$$\begin{aligned} g(i,j)= & {} c_1 i + c_2 j, \text {with } c_1 + c_2 = 1,\end{aligned}$$
(7)
$$\begin{aligned} g(i,j)= & {} \sqrt{i j}, \end{aligned}$$
(8)
$$\begin{aligned} g(i,j)= & {} (i j)^c, c \in \mathbb R. \end{aligned}$$
(9)

Remark 1

  1.

    Note that this parametrization is done by analogy with a framework considered in [20], where the times between two successive failures are assumed to be inid random variables distributed according to a cumulative distribution \(F(x;a_{i})\) belonging to the class (2) with different scale parameters \(a_{i}.\) These parameters are assumed to be time varying; one type of variation along time proposed in that article is of the type \(a_{i}=a_{\infty }\left( 1-e^{-t_i/e_1} \right) , i= 1, 2, \ldots ,\) where \(t_1, t_2, \ldots \) are observed successive failure times. Nonetheless, note that, in the present chapter, the variation is on both states i and j, while in [20] the variation is along time.

  2.

    Note that, if we consider a semi-Markov system with only one state (\(E=\{1\}\)), we are in the framework of [20], where \(S_n, n=1, 2, \ldots ,\) are the successive failure times of a system, \(S_n < S_{n+1},\) and \(S_0:=0,\) while \(X_n := S_n - S_{n-1}, n=1, 2, \ldots ,\) are the times between two successive failures. It is clear that, in this case, there is no state variation anymore and a modeling like the one proposed in [20] would be appropriate.

Under these conditions, the following result concerning the main semi-Markov characteristics can be proved. For notational convenience, we set \(F(t):=F(t;1)\), \(f(t):=f(t;1)\) and \(Q_{ij}\left( t; a_{ik}, k=1,\ldots ,N \right) =: Q_{ij}(t).\)

Proposition 1

(cf. [19]) Under the setup of this section, the following results hold:

$$\begin{aligned} {{Q}_{ij}}(t) =\frac{a_{ij}}{\sum \limits _{k \in E}a_{ik}}\left[ 1-\left( 1-F(t)\right) ^{\sum \limits _{k \in E}a_{ik}}\right] , \end{aligned}$$
(10)
$$\begin{aligned} p_{ij}=\frac{a_{ij}}{\sum \limits _{k \in E}a_{ik}}, \end{aligned}$$
(11)
$$\begin{aligned} W_i(t)=1-\left[ 1-F(t) \right] ^{\sum \limits _{j=1}^{N}a_{ij}} \end{aligned}$$
(12)

and

$$\begin{aligned} f_i(t)=\left( \sum \limits _{j=1}^{N}a_{ij}\right) \left( 1-F(t) \right) ^{\sum \limits _{j=1}^{N}a_{ij}} \frac{f(t)}{1-F(t)}. \end{aligned}$$
(13)
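
As a companion to Proposition 1, a short Python sketch (hypothetical names; the exponential baseline below is only an example of F(t; 1)) computes \(p_{ij}\), \(W_i(t)\), \(f_i(t)\) and \(Q_{ij}(t)\) from a given matrix \((a_{ij})\).

```python
import math

F_base = lambda t: 1.0 - math.exp(-t)   # baseline cdf F(t) := F(t; 1); exponential chosen only as an example
f_base = lambda t: math.exp(-t)         # baseline density f(t) := f(t; 1)

def p(a, i, j):
    """Transition probability (11): p_ij = a_ij / sum_k a_ik  (states indexed from 1, a is a nested list)."""
    return a[i - 1][j - 1] / sum(a[i - 1])

def W(a, i, t):
    """Sojourn-time cdf (12): W_i(t) = 1 - (1 - F(t))**A_i, with A_i = sum_j a_ij."""
    return 1.0 - (1.0 - F_base(t)) ** sum(a[i - 1])

def f_sojourn(a, i, t):
    """Sojourn-time density (13): f_i(t) = A_i * (1 - F(t))**A_i * f(t) / (1 - F(t))."""
    A_i = sum(a[i - 1])
    return A_i * (1.0 - F_base(t)) ** A_i * f_base(t) / (1.0 - F_base(t))

def Q(a, i, j, t):
    """Semi-Markov kernel (10): Q_ij(t) = p_ij * W_i(t)."""
    return p(a, i, j) * W(a, i, t)
```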

4 Maximum Likelihood Estimation

In this section we consider the problem of obtaining the maximum likelihood estimators of the parameters of the system (\(a_{\infty }\) and \(e_1,\) or, equivalently, \(a_{ij}\)). Then we will get the corresponding plug-in estimators of the main quantities defining the semi-Markov system.

Basically, two important statistical settings could be considered: either we start with one sample path, or with several sample paths. In both cases, it can be assumed that the sample paths are complete or that the sojourn time in the last visited state is right censored (lost to follow-up, for instance). In the sequel we consider the most general case, that is, several sample paths with a possibly censored last sojourn time. The other cases can be obtained as particular cases of the one we present; we also give some details on this point.

Given L sample paths of a semi-Markov process censored at time M

\(\left\{ j_0^{(l)}, x_1^{(l)}, j_1^{(l)}, x_2^{(l)}, \ldots , j_{N^l(M)}^{(l)}, u_M^{(l)}\right\} ,\) \(l=1,\ldots , L\), the associated likelihood is

$$\begin{aligned} {\mathscr {L}}= & {} \left( \prod _{i \in E}\alpha _{i}^{N_{i,0}(L)}\right) \left( \prod _{i,j \in E}p_{ij}^{\sum \limits _{l=1}^{L}N_{ij}^{(l)}(M)}\right) \times \nonumber \\\times & {} \left( \prod _{l=1}^{L} \prod _{i \in E}\, \prod _{k=1}^{N_i^{(l)}(M)} f_i(x_i^{(l,k)}) \right) \prod \limits _{i \in E}\prod _{k=1}^{N_{i,M}(L)} \left( 1-W_i(u_i^{(k)}) \right) , \end{aligned}$$
(14)

where we set

  • \(N_{i,0}(L) := \sum \limits _{l=1}^{L}\mathbbm {1}_{\{J_0^{(l)}=i\}}\): the number of sample paths starting in state i;

  • \(N_i^{(l)}(M)\): the number of visits to state i up to time M of the lth trajectory, \(l=1, \ldots , L\);

  • \(N_{i}(L,M) :=\sum \limits _{l=1}^{L}N_{i}^{(l)}(M)\): the total number of visits to state i up to time M along the L trajectories;

  • \(N_{ij}^{(l)}(M)\): the number of transitions from state i to state j up to time M during the lth trajectory, \(l=1, \ldots , L\);

  • \(N_{ij}(L,M) :=\sum \limits _{l=1}^{L}N_{ij}^{(l)}(M)\): the total number of transitions from state i to state j up to time M along the L trajectories;

  • \(x_i^{(l,k)}\): the sojourn time in state i during the kth visit, \(k=1, \ldots , N_i^{(l)}(M)\) of the lth trajectory, \(l=1, \ldots , L\);

  • \(u_M^{(l)}:=M-S_{N^l(M)}\) is the observed censored time of the lth trajectory;

  • \(N_{i,M}(L)=\sum \limits _{l=1}^{L}\mathbbm {1}_{\{J^{(l)}_{N^l(M)}=i\}}\) is the number of visits of state i, as last visited state, over the L trajectories; note that \(\sum \limits _{i \in E}N_{i,M}(L)=L;\)

  • \(u_i^{(k)}\) is the observed censored sojourn time in state i during the kth visit, \(k=1,\ldots ,N_{i,M}(L)\).

Note that, for \(L = 1,\) the likelihood given in (14) reduces to the likelihood of 1 trajectory. Note also that, if the censoring time M in a certain trajectory l is a jump time, then for the corresponding observed censored time we have \(u_M^{(l)}=0\). Consequently, the contribution to the likelihood of the associated term will be equal to 1. For this reason, if no censoring is involved, the uncensored likelihood can be obtained as a particular case of (14).
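
The counting quantities and sojourn times entering (14) are straightforward to extract from recorded trajectories. The Python sketch below (hypothetical helper, assuming each path is stored as a pair of state and jump-time sequences) collects the transition counts, the completed sojourn times and the censored last sojourns.

```python
def summarize_paths(paths, M):
    """Collect from L censored sample paths the statistics entering (14).

    Each path is a pair (states, jump_times) with jump_times[0] = 0 and all jump times <= M.
    Returns the initial-state counts, the transition counts N_ij(L, M), the completed sojourn
    times per state and the censored last sojourn u_M^{(l)} per last visited state.
    """
    N_i0, N_ij, sojourns, censored = {}, {}, {}, {}
    for states, jumps in paths:
        N_i0[states[0]] = N_i0.get(states[0], 0) + 1
        for n in range(1, len(states)):
            i, j, x = states[n - 1], states[n], jumps[n] - jumps[n - 1]
            N_ij[(i, j)] = N_ij.get((i, j), 0) + 1
            sojourns.setdefault(i, []).append(x)                     # completed sojourn x_i^{(l,k)}
        censored.setdefault(states[-1], []).append(M - jumps[-1])    # censored time in the last state
    return N_i0, N_ij, sojourns, censored
```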

For the class of distributions given in (2), the likelihood takes the form

$$\begin{aligned} {\mathscr {L}}= & {} \left( \prod _{i \in E}\alpha _{i}^{N_{i,0}(L)}\right) \left( \prod _{i,j \in E}a_{ij}^{N_{ij}(L,M)}\right) \times \nonumber \\\times & {} \left( \prod _{l=1}^{L} \prod _{i \in E}\, \prod _{k=1}^{N_i^{(l)}(M)} \left( 1-F\left( x_i^{(l,k)}\right) \right) ^{\sum \limits _{j \in E}a_{ij}} \frac{f\left( x_i^{(l,k)}\right) }{1-F\left( x_i^{(l,k)}\right) } \right) \prod \limits _{i \in E}\prod _{k=1}^{N_{i,M}(L)} \left( 1-F\left( u_i^{(k)}\right) \right) ^{\sum \limits _{j \in E}a_{ij}}, \end{aligned}$$
(15)

where \(a_{ij}\) has been given in (4). Consequently, the log-likelihood has the expression

$$\begin{aligned} \log ({\mathscr {L}})= & {} \log \left( \prod _{i \in E}\alpha _i^{N_{i,0}(L)}\right) + \sum \limits _{l=1}^{L}\sum \limits _{i,j \in E} N_{ij}^{(l)}(M) \log (a_{ij}) \nonumber \\&+ \sum _{l,i,k} \left( \sum \limits _{j \in E}a_{ij}\right) \log \left( 1-F\left( x_i^{(l,k)}\right) \right) + \log \prod _{l,i,k} \left( \frac{f\left( x_i^{(l,k)}\right) }{1-F\left( x_i^{(l,k)}\right) }\right) \nonumber \\&+ \sum _{i \in E} \sum \limits _{k=1}^{N_{i,M}(L)} \left( \sum \limits _{j \in E}a_{ij}\right) \log \left( 1-F\left( u_i^{(k)}\right) \right) . \end{aligned}$$
(16)
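
For numerical work, the terms of (16) that actually involve \(a_{\infty }\) and \(e_1\) can be coded directly; the sketch below (hypothetical names, reusing the summaries collected above and an exponential baseline taken purely for illustration) omits the initial-distribution term and the term \(\log \prod f/(1-F)\), which are constant in these parameters.

```python
import math

# log(1 - F(t)) for the illustrative exponential baseline F(t) = 1 - exp(-t)
log_surv = lambda t: -t

def log_likelihood(a_inf, e1, g, states_E, N_ij, sojourns, censored):
    """Terms of (16) that depend on (a_inf, e1); requires every a_ij > 0, i.e. g(i, j)/e1 < 0."""
    a = {(i, j): a_inf * (1.0 - math.exp(g(i, j) / e1)) for i in states_E for j in states_E}
    ll = sum(n * math.log(a[ij]) for ij, n in N_ij.items())            # sum_{i,j} N_ij(L,M) log a_ij
    for i in states_E:
        A_i = sum(a[(i, j)] for j in states_E)                         # A_i = sum_j a_ij
        ll += A_i * sum(log_surv(x) for x in sojourns.get(i, []))      # completed sojourns x_i^{(l,k)}
        ll += A_i * sum(log_surv(u) for u in censored.get(i, []))      # censored last sojourns u_i^{(k)}
    return ll
```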

Using \(a_{ij} = a_{\infty }\left( 1-e^{g(i,j)/e_1} \right) \) and taking the derivatives of \(\log ({\mathscr {L}})\) with respect to \(a_{\infty }\) and \(e_1\), we obtain the critical equations:

$$\begin{aligned} \frac{\partial \log {\mathscr {L}}}{\partial a_\infty }= & {} \sum \limits _{l=1}^{L}\sum \limits _{i,j \in E} N_{ij}^{(l)}(M) \frac{1}{a_\infty } + \sum _{i \in E} \sum _{l=1}^L \sum _{k=1}^{N_i^{(l)}(M)} \sum _{j \in E}\left( 1-e^{g(i,j)/e_1}\right) \log \left( 1-F\left( x_i^{(l,k)}\right) \right) \nonumber \\&+ \sum _{i \in E} \sum _{k=1}^{N_{i,M}(L)} \sum _{j \in E}\left( 1-e^{g(i,j)/e_1}\right) \log \left( 1-F\left( u_i^{(k)}\right) \right) =0, \end{aligned}$$
(17)
$$\begin{aligned} \frac{\partial \log {\mathscr {L}}}{\partial e_1}= & {} \sum \limits _{l=1}^{L}\sum \limits _{i,j \in E} N_{ij}^{(l)}(M) \frac{e^{g(i,j)/e_1}}{1-e^{g(i,j)/e_1}} \frac{g(i,j)}{e_1^2}\nonumber \\&+ \sum _{l,i,k} \sum _{j \in E}\left( a_{\infty } e^{g(i,j)/e_1} \frac{g(i,j)}{e_1^2} \right) \log \left( 1-F\left( x_i^{(l,k)}\right) \right) \nonumber \\&+ \sum _{i \in E} \sum _{k=1}^{N_{i,M}(L)} \sum _{j \in E}\left( a_{\infty } e^{g(i,j)/e_1} \frac{g(i,j)}{e_1^2} \right) \log \left( 1-F\left( u_i^{(k)}\right) \right) =0. \end{aligned}$$
(18)

Equation (17) provides an explicit expression of \(a_{\infty }\) in terms of \(e_1\)

$$\begin{aligned}&a_\infty =\\&\nonumber - \frac{\sum \limits _{l=1}^{L}\sum \limits _{i,j \in E} N_{ij}^{(l)}(M)}{\sum _{i \in E} \sum _{j \in E} \left( 1-e^{g(i,j)/e_1}\right) \left[ \sum _{l=1}^L \sum _{k=1}^{N_i^{(l)}(M)} \log \left( 1-F\left( x_i^{(l,k)}\right) \right) + \sum _{k=1}^{N_{i,M}(L)} \log \left( 1-F\left( u_i^{(k)}\right) \right) \right] }. \end{aligned}$$
(19)

Substituting this expression into Eq. (18) yields an equation in \(e_1\) that has to be solved numerically. Thus we obtain the corresponding MLEs \(\widehat{a}_\infty (L,M)\) and \(\widehat{e}_1(L,M)\), and also the corresponding plug-in estimator of \(a_{ij},\)

$$\begin{aligned} \widehat{a}_{ij}(L,M) = \widehat{a}_{\infty }(L,M)\left( 1-e^{g(i,j)/\widehat{e}_1(L,M)} \right) . \end{aligned}$$
(20)
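
In practice, one may either substitute (19) into (18) and solve the resulting one-dimensional equation in \(e_1\), or directly maximize the log-likelihood over \((a_{\infty }, e_1)\) numerically. A minimal sketch of the second route, assuming SciPy is available and reusing the log_likelihood helper above, is the following.

```python
from scipy.optimize import minimize

def fit(g, states_E, N_ij, sojourns, censored, start=(1.0, -1.0)):
    """Numerical MLE of (a_inf, e1) by direct maximization of the partial log-likelihood above."""
    def objective(p):
        a_inf, e1 = p
        try:
            return -log_likelihood(a_inf, e1, g, states_E, N_ij, sojourns, censored)
        except (ValueError, OverflowError):     # reject parameter values making some a_ij <= 0
            return float("inf")
    return minimize(objective, x0=list(start), method="Nelder-Mead").x   # (a_inf_hat, e1_hat)
```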

Consequently, using Proposition 1, we get the plug-in estimators of the main quantities that define the semi-Markov system, namely \(p_{ij}\), \(W_i(t)\) and \(Q_{ij}(t)\):

$$\begin{aligned} \widehat{p}_{ij}(L,M)=\frac{\widehat{a}_{ij}(L,M)}{\sum \limits _{l \in E}\widehat{a}_{il}(L,M)}=\frac{N_{ij}(L,M)}{N_i(L,M)}, \end{aligned}$$
(21)
$$\begin{aligned} \widehat{W}_i(t; L, M)=\left[ 1-\left( 1-F(t)\right) ^{\sum \limits _{j \in E}\widehat{a}_{ij}(L,M)}\right] \end{aligned}$$
(22)

and

$$\begin{aligned} \widehat{Q}_{ij}(t; L, M)=\frac{\widehat{a}_{ij}(L,M)}{\sum \limits _{k \in E}\widehat{a}_{ik}(L,M)}\left[ 1-\left( 1-F(t)\right) ^{\sum \limits _{k \in E}\widehat{a}_{ik}(L,M)}\right] . \end{aligned}$$
(23)
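
Once \(\widehat{a}_\infty \) and \(\widehat{e}_1\) are available, the plug-in estimators (20)–(23) are obtained by direct substitution, for instance as in the short sketch below (reusing the hypothetical helpers and baseline introduced earlier).

```python
import math

def plug_in(a_inf_hat, e1_hat, g, states_E, t, F_base):
    """Plug-in estimators (20)-(23) evaluated at time t, given the fitted (a_inf_hat, e1_hat)."""
    a_hat = {(i, j): a_inf_hat * (1.0 - math.exp(g(i, j) / e1_hat)) for i in states_E for j in states_E}
    A_hat = {i: sum(a_hat[(i, k)] for k in states_E) for i in states_E}                   # row sums
    p_hat = {(i, j): a_hat[(i, j)] / A_hat[i] for i in states_E for j in states_E}        # (21)
    W_hat = {i: 1.0 - (1.0 - F_base(t)) ** A_hat[i] for i in states_E}                    # (22)
    Q_hat = {(i, j): p_hat[(i, j)] * W_hat[i] for i in states_E for j in states_E}        # (23)
    return a_hat, p_hat, W_hat, Q_hat
```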

Note also that, once we have obtained the estimators of the basic quantities associated to a multi-state semi-Markov system, we can immediately obtain estimators of the associated reliability indicators, following the lines presented in [19].

Remark 2

A more general framework may be considered if some or all of the parameters involved in the function \(g(\cdot , \cdot )\) are assumed to be unknown. In such a case, the appropriate derivatives of the log-likelihood in (16) with respect to \(c_m, k_m, l_m, \ m=1,2,\) should be considered, and the system of critical equations should include, in addition to (17) and (18), the derivatives with respect to the extra unknown parameters. In this more general setting, the system of equations has to be solved numerically for the estimators of the parameters to be obtained.
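
In the numerical-maximization sketch above, this amounts to enlarging the parameter vector. For instance, for the choice (9), g(i, j) = (i j)^c with unknown c, one would optimize jointly over \((a_{\infty }, e_1, c)\), as in the following illustrative sketch (again assuming the helpers defined earlier).

```python
from scipy.optimize import minimize

def fit_with_g(states_E, N_ij, sojourns, censored, start=(1.0, -1.0, 0.5)):
    """Joint MLE of (a_inf, e1, c) when g(i, j) = (i*j)**c of (9) carries an unknown exponent c."""
    def objective(p):
        a_inf, e1, c = p
        g = lambda i, j: (i * j) ** c
        try:
            return -log_likelihood(a_inf, e1, g, states_E, N_ij, sojourns, censored)
        except (ValueError, OverflowError):     # reject parameter values making some a_ij <= 0
            return float("inf")
    return minimize(objective, x0=list(start), method="Nelder-Mead").x   # (a_inf_hat, e1_hat, c_hat)
```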

5 Concluding Remarks

In many settings the challenge is to determine if and where the parameters of the underlying model change their value. There are several rationales for time-varying parameter models. For instance, the true coefficients themselves can often be viewed directly as the outcome of a stochastic process. Furthermore, even when the underlying parameters are stable, situations arise in which a time-varying coefficient approach proves effective. The present chapter deals with this problem in a general setting, where a general class of distributions with state-varying parameters is considered. In particular, for a multi-state system modeled by means of a special type of semi-Markov process, the parameters involved are assumed to be affected by the present state as well as by the state to be visited next, and the likelihood, together with the parameter estimates, is provided under various types of dependence of the parameters on the states of the system.