
# Continuous Time Markov Chains

### Goal

### Markov Chains

A Markov chain is specified by the *probabilities* or *rates* at which transitions between states occur.

If a system transitions *deterministically*, then it will spend exactly a specific amount of time in one state before transitioning to another state. The inverse of this time is the rate for the transition.

A discrete time chain, by contrast, is specified by *transition probabilities* rather than rates.

### Memory

We usually assume that Markov chains evolve in a *memoryless* fashion. Actually, few MCs that are not memoryless can easily be analyzed in closed form, which is probably why we like to model systems amenable to memoryless MCs.
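In a memoryless continuous time chain, the time spent in a state is exponentially distributed, and the mean holding time is the inverse of the transition rate, as described above. The following is a minimal simulation sketch of that relationship; the rate value and function name are illustrative, not from the original post.

```python
import random

def simulate_holding_times(rate, n_samples, seed=0):
    """Draw exponential holding times for a state whose exit rate is
    `rate`; the mean holding time should come out near 1/rate."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n_samples)]

samples = simulate_holding_times(rate=2.0, n_samples=100_000)
mean_holding = sum(samples) / len(samples)
print(mean_holding)  # close to 1/rate = 0.5
```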

### Discrete Time Markov Chains

# jsMath problems in blogger

### Inline math

This is an example of $\frac{1}{\sqrt{2 \pi \sigma^2}}$ inline math

gives

This is an example of $\frac{1}{\sqrt{2 \pi \sigma^2}}$ inline math

### Equation Environment

\begin{equation}

p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}

\end{equation}

gives

\begin{equation}p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}\end{equation}

### Shorthand Equation Environment

$$p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}$$

gives

$$p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}$$

### <myserver>/jsMath/easy/load.js

# Poisson Processes

### Goal

We want the probability distribution, at time $t$, for $N$ events in the subsequent time interval $[t, t+\tau]$: $P[N,\tau,t]$. As we shall see, as a practical matter, we primarily focus on calculating $P[1, \tau, t]$.

### Assumptions

- For a small time interval $h$, as $h \to 0$, $P[1,h,t] \to \lambda h$ for all $t$ and $P[N>1,h,t] \to 0$. Therefore $P[0,h,t] \to 1 - \lambda h$.
- $P$ is independent of $t$: $P[N, \tau, t] = P[N, \tau]$ for all $t$. In other words, it is *stationary*.
- The probabilities of events in disjoint intervals are independent. If $t_2-t_1>\tau_1$, then

  $P[N_2,\tau_2,t_2 ; N_1,\tau_1,t_1] = P[N_2,\tau_2,t_2] \cdot P[N_1,\tau_1,t_1]$

  Alternatively, consider four successive time points $t_4 \geq t_3 \geq t_2 \geq t_1$. Then:

  $P[N_2,t_4-t_3,t_3 ; N_1,t_2-t_1,t_1] = P[N_2,t_4-t_3,t_3] \cdot P[N_1,t_2-t_1,t_1]$

- $P$ is *memoryless*. We might feel, having already waited a long time for an event to occur, that the next event should happen sooner. However, since $P$ is *memoryless*, it does not remember how long it took for the last event to occur.

  Note that this is not the same as being *stationary*.
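Anticipating the Poisson form $\frac{(\lambda h)^N}{N!}e^{-\lambda h}$ derived later in this post, the small-$h$ assumption can be checked numerically: $P[1,h]/(\lambda h) \to 1$ while $P[N>1,h]$ vanishes faster than $h$. A minimal sketch; the rate value is illustrative.

```python
import math

lam = 3.0  # assumed event rate, events per unit time

def P(N, tau):
    """Poisson probability of exactly N events in an interval of length tau."""
    return (lam * tau) ** N / math.factorial(N) * math.exp(-lam * tau)

for h in (0.1, 0.01, 0.001):
    ratio = P(1, h) / (lam * h)          # -> 1 as h -> 0
    multi = (1 - P(0, h) - P(1, h)) / h  # -> 0 as h -> 0 (order h)
    print(h, round(ratio, 4), round(multi, 6))
```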

### Derivation of $P[0,\tau]$

We want to derive an expression for $P[N,\tau]$ using the assumed properties listed above. To do so, we start by looking at probabilities for 0 events in time intervals. As an aid in understanding, think about the following situation: you start standing at a bus stop at a certain time $t_1$. You wait until a later time $t_2$ and observe that no bus has arrived. For some reason, you stick around until an even later time $t_3$ and notice that a bus has still not arrived. This is strange, because you know that the buses arrive in a way that obeys the assumptions of the Poisson process.

Mathematically, consider three successive times $t_3>t_2>t_1$, and the joint probability

$P[0,t_3-t_1 ; 0, t_2 - t_1]$

By the independence of events in disjoint intervals,

$P[0,t_3-t_1 ; 0, t_2 - t_1] = P[0,t_3-t_2] \cdot P[0,t_2-t_1]$

Alternatively, due to the memoryless assumption, we have

$P[0,t_3-t_1 ; 0, t_2 - t_1] = P[0,t_3-t_1]$

This may seem counterintuitive, but it can be understood if we think about the fact that we have *arbitrarily* divided the interval $t_3-t_1$ into two by inserting $t_2$. Doing so should not alter the probability! Therefore, we have the above expression.

Combining the two above, we have

$P[0,t_3-t_1] = P[0,t_3-t_2] \cdot P[0,t_2-t_1]$

With this expression in hand, we now derive the analytical form for $P[0,\tau]$. To do so, we differentiate the equation with respect to $t_2$. Along the way, we adopt some abbreviated notation that should be clear enough:

$0 = -p_0'(t_3-t_2) \cdot p_0(t_2-t_1) + p_0(t_3-t_2) \cdot p_0'(t_2-t_1)$

From this we conclude that

$\frac{p_0'(t_3-t_2)}{p_0(t_3-t_2)} = \frac{p_0'(t_2-t_1)}{p_0(t_2-t_1)}$

Using a standard technique of separation of variables, we note that the two sides of the equation are respectively functions of only $t_3-t_2$ and $t_2-t_1$, which can be varied independently. Therefore each side is separately equal to a constant, which we call $-\lambda$. Writing $t_2-t_1$ as $\tau$,

$\frac{1}{p_0(\tau)}\frac{dp_0(\tau)}{d\tau} = -\lambda$

which has the unique solution

$p_0(\tau) = e^{-\lambda \tau + C}$

Since $p_0(0) = 1$ (zero events in a zero-length interval is certain), we must have $C = 0$, so

$p_0(\tau) = e^{-\lambda \tau}$
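The result $p_0(\tau) = e^{-\lambda \tau}$ can be checked by simulation: it says the waiting time to the first event is exponential with rate $\lambda$, so "no event in $[0,\tau]$" means the first arrival exceeds $\tau$. A minimal Monte Carlo sketch with illustrative parameter values:

```python
import math
import random

def prob_zero_events(lam, tau, trials=100_000, seed=1):
    """Monte Carlo estimate of p_0(tau): count how often the first
    (exponentially distributed) arrival time exceeds tau."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.expovariate(lam) > tau)
    return hits / trials

lam, tau = 2.0, 0.7
estimate = prob_zero_events(lam, tau)
exact = math.exp(-lam * tau)
print(estimate, exact)  # the two values should be close
```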

### Derivation of $P[N,\tau]$

We now know that

$p_0(\tau) = e^{-\lambda \tau}$

To find $p_1$, condition on the single event occurring at a time $t_2$ within $[t_1, t_3]$: there must be no events before $t_2$, one event in an infinitesimal window $\delta t$ around $t_2$, and no events after it. Thus

$p_1(t_3-t_1 | \text{event at } t_2) = p_0(t_2-t_1) \cdot p_1(\delta t) \cdot p_0(t_3-t_2)$

To remove the conditioning, we integrate over $t_2$ from $0 \ldots \tau$ (taking $t_1 = 0$, $t_3 = \tau$, and $p_1(\delta t) = \lambda \, dt_2$).

$p_1(\tau) = \int_0^\tau p_1(\tau | \text{event at } t_2) \, dt_2$

$=\int_0^\tau e^{-\lambda t_2} \cdot \lambda \cdot e^{-\lambda(\tau-t_2)} \, dt_2$

$=\lambda \cdot e^{-\lambda \tau} \cdot \int_0^\tau dt_2$

$=\lambda \tau e^{-\lambda \tau}$

More generally, suppose the $N$ events occur at ordered times $t_1 < t_2 < \cdots < t_N$ in the interval $[t_0, t_0+\tau]$. Then

$p_N(\tau | t_1 \ldots t_N) = \left[ \prod_{n=1}^{N} \lambda \, p_0(t_n-t_{n-1}) \right] \cdot p_0(t_0+\tau-t_N)$

$=\lambda^N e^{-\lambda \tau}$

since the $p_0$ factors are exponentials whose arguments sum to $\tau$.

$p_N(\tau) = \lambda^N e^{-\lambda \tau} \int_{t_0}^{t_2}dt'_1 \int_{t_1}^{t_3}dt'_2 \cdots \int_{t_{n-1}}^{t_{n+1}}dt'_n \cdots \int_{t_{N-1}}^{\tau}dt'_N$

This integral is going to be nasty to evaluate. Therefore, we use a trick. Although I numbered the times of the events, $t_n$, consecutively, I will consider the indices to be arbitrary markers. This means that the $t_n$ can actually happen in any order. To keep the calculation correct, I will now also need to divide by the $N!$ different orders in which the events can happen. Now all of the $t_n$ can range from $t_0$ to $\tau$. This simplifies the integral so that we have

$p_N(\tau) = \frac{1}{N!}\lambda^N e^{-\lambda \tau} \times \ldots$

$\int_{t_0}^{\tau}dt'_1 \int_{t_0}^{\tau}dt'_2 \cdots \int_{t_0}^{\tau}dt'_n \cdots \int_{t_0}^{\tau}dt'_N$

which evaluates to

$ = \frac{1}{N!} \lambda^N \tau^N e^{-\lambda \tau} = \frac{(\lambda \tau)^N}{N!}e^{-\lambda \tau}$
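The $N!$ counting trick above can be sanity-checked numerically: among $N$ independent uniform times, the fraction that happen to land in any one particular order is $1/N!$. A minimal Monte Carlo sketch (the function name and trial counts are illustrative):

```python
import math
import random

def ordered_fraction(N, trials=200_000, seed=2):
    """Fraction of N i.i.d. uniform draws that come out already sorted;
    the symmetrization argument says this should be 1/N!."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        ts = [rng.random() for _ in range(N)]
        if all(a <= b for a, b in zip(ts, ts[1:])):
            count += 1
    return count / trials

for N in (2, 3, 4):
    print(N, ordered_fraction(N), 1 / math.factorial(N))
```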

### Conclusion

$p_N(\tau) = \frac{(\lambda \tau)^N}{N!}e^{-\lambda \tau}$
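As a final check, one can simulate the process directly by summing exponential inter-arrival times and comparing the empirical distribution of event counts in $[0,\tau]$ against $\frac{(\lambda \tau)^N}{N!}e^{-\lambda \tau}$. A minimal sketch with illustrative parameter values:

```python
import math
import random

def simulate_counts(lam, tau, trials=50_000, seed=3):
    """Simulate a Poisson process by accumulating exponential
    inter-arrival times; return the event count in [0, tau] per trial."""
    rng = random.Random(seed)
    counts = []
    for _ in range(trials):
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(lam)
            if t > tau:
                break
            n += 1
        counts.append(n)
    return counts

lam, tau = 2.0, 1.5
counts = simulate_counts(lam, tau)
for N in range(5):
    empirical = counts.count(N) / len(counts)
    exact = (lam * tau) ** N / math.factorial(N) * math.exp(-lam * tau)
    print(N, round(empirical, 4), round(exact, 4))
```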

# mathtest 3

\begin{equation}

\sqrt{\sigma}

\end{equation}

# math test 2

Math test 2

$\sqrt{\sigma}$

\\begin{equation}

\sqrt{\sigma}

\\end{equation}

\begin{equation}

\sqrt{\sigma}

\end{equation}

# mathtest

$\sqrt{\sigma}$

$\begin{equation}\sqrt{\sigma}\end{equation}$

\begin{equation}\sqrt{\sigma}\end{equation}