Hello world!

Posted by Ariel Balter on December 19, 2009 in Uncategorized

Welcome to WordPress. This is your first post. Edit or delete it, then start blogging!

Continuous Time Markov Chains

Posted by Ariel Balter on December 19, 2009 in Uncategorized


We will define and show some basic properties of Markov chains (MCs), and then discuss continuous time MCs.

Markov Chains

MCs are descriptions of discrete states linked by transition probabilities. Suppose a system can be in three states: $A$, $B$, and $C$. As time goes on, this system transitions from one state to another. For instance, $A \to C$ followed by $C \to A$ followed by $A \to B$. By observing this system we can approximately measure the probabilities or rates at which transitions occur.
If the system evolves deterministically, then it will spend exactly a specific amount of time in one state before transitioning to another state. The inverse of this time is the rate for the transition.
If the system evolves randomly, then it may remain in a state for a random amount of time before transitioning. In this case, we have transition probabilities rather than rates.
Two convenient representations of MCs are employed. The first is graphical: essentially a dependency graph, in which elements representing individual states are connected by arrows representing possible transitions. I'll make one to show as soon as I install and learn to use canviz.
The second representation is in matrix form and lends itself to mathematical analysis. Both rows and columns represent the different states of the system, so the matrix is square. Each entry describes the transition from the row state to the column state. This description will be different depending on whether the system evolves deterministically or randomly, and in discrete or continuous time (more on that later).
Here is a matrix that represents all the transitions in our hypothetical system:

$$\begin{bmatrix} 0.1 & 0.7 & 0.2 \\ 0.8 & 0.1 & 0.1 \\ 0.4 & 0.3 & 0.3 \end{bmatrix}$$


For many important applications, systems that can be modeled by MCs evolve in a memoryless fashion. In fact, few MCs that are not memoryless can easily be analyzed in closed form, which is probably why we like to model systems that are amenable to memoryless MCs.

Discrete Time Markov Chains

In a deterministic system there can be only one possible transition from each state; otherwise, how would it choose which way to go? The next step in randomness is to have a number of possible transitions from each state, occurring randomly with measurable probabilities. For example, the transition $A \to B$ might occur with probability $p_{A \to B} = 0.7$ and the transition $A \to C$ with probability $p_{A \to C} = 0.2$, while with probability $p_{A \to A} = 0.1$ the system remains in $A$ (these are the entries of the first row of the matrix above).

In this model, we have no sense of physical time advancing. Instead, we index successive states of the system by some integer, say $i$ or $n$ or $\alpha$. This is called a discrete time Markov chain (DTMC).

Notice that the probabilities in my example add to $1$. When transitioning out of the state $A$, the system has to go somewhere! We will see an interesting modification of this when we consider events in continuous time.
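To make this concrete, here is a minimal sketch in Python (NumPy) that simulates the three-state DTMC defined by the matrix above and recovers the transition probabilities from observed frequencies; the starting state and step count are just illustrative choices:

import numpy as np

# Transition matrix from the example above; rows/columns are ordered A, B, C.
# Entry [i, j] is the probability of moving from the row state to the column state.
P = np.array([[0.1, 0.7, 0.2],
              [0.8, 0.1, 0.1],
              [0.4, 0.3, 0.3]])

rng = np.random.default_rng(0)

# Simulate the chain for many steps, starting in state A.
n_steps = 100_000
chain = np.empty(n_steps, dtype=int)
chain[0] = 0
for t in range(1, n_steps):
    chain[t] = rng.choice(3, p=P[chain[t - 1]])

# Estimate transition probabilities from observed transition frequencies.
counts = np.zeros((3, 3))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1
print(np.round(counts / counts.sum(axis=1, keepdims=True), 2))  # close to P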

jsMath problems in blogger

Posted by Ariel Balter on December 18, 2009 in Uncategorized

Inline math

This is an example of $\frac{1}{\sqrt{2 \pi \sigma^2}}$ inline math



Equation Environment



\begin{equation}p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}\end{equation}

Shorthand Equation Environment

$$p(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-x^2/(2 \sigma^2)}$$




Poisson Processes

Posted by Ariel Balter on December 17, 2009 in Uncategorized


Let $P[N,\tau,t]$ denote the probability, at time $t$, of $N$ events in the subsequent time interval $[t, t+\tau]$. As we shall see, as a practical matter, we primarily focus on calculating $P[0,\tau,t]$ and $P[1,\tau,t]$ and building everything else from them. We assume the following properties:


  1. For a small time interval $h$, as $h \to 0$, $P[1,h,t] \to \lambda h$ for all $t$, and $P[N>1,h,t] \to 0$. Therefore $P[0,h,t] \to 1 - \lambda h$.
  2. $P$ is independent of $t$: $P[N, \tau, t] = P[N, \tau]$ for all $t$. In other words, it is stationary.
  3. The probabilities of events in disjoint intervals are independent. If $t_2-t_1>\tau_1$ then,
    $P[N_2,\tau_2,t_2 ; N_1,\tau_1,t_1] = P[N_2,\tau_2,t_2] \cdot P[N_1,\tau_1,t_1]$
    Alternatively, consider four successive time points $t_4 \geq t_3 \geq t_2 \geq t_1$. Then:
    $P[N_2,t_4-t_3,t_3 ; N_1,t_2-t_1,t_1] = P[N_2,t_4-t_3,t_3] \cdot P[N_1,t_2-t_1,t_1]$
  4. $P$ is memoryless. We might feel that, having already waited a long time for an event to occur, the next event should happen sooner. However, since $P$ is memoryless, it does not remember how long it took for the last event to occur.
    Note that this is not the same as being stationary. (A short simulation just below illustrates this property.)
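To make memorylessness concrete, here is a minimal Python sketch. It assumes the waiting time $T$ between events is exponential with rate $\lambda$ (equivalent to the $p_0(\tau) = e^{-\lambda \tau}$ we derive below) and checks that $P[T > s + t \mid T > s] = P[T > t]$; the values of $\lambda$, $s$, and $t$ are arbitrary:

import numpy as np

rng = np.random.default_rng(1)
lam = 2.0        # event rate
s, t = 0.5, 1.0  # "already waited s", "wait t longer"

# Waiting times between Poisson events are exponential with rate lam.
T = rng.exponential(scale=1.0 / lam, size=1_000_000)

p_uncond = np.mean(T > t)            # P[T > t]
p_cond = np.mean(T[T > s] > s + t)   # P[T > s+t | T > s]

print(f"P[T > t]           = {p_uncond:.4f}")
print(f"P[T > s+t | T > s] = {p_cond:.4f}")  # nearly identical: memoryless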

Derivation of $P[0,\tau]$

We want to derive an expression for $P[N,\tau]$ using the assumed properties listed above (by stationarity, assumption 2, we drop the argument $t$ from here on). To do so, we start by looking at probabilities for 0 events in time intervals. As an aid to understanding, think about the following situation: you start standing at a bus stop at a certain time $t_1$. You wait until a later time $t_2$ and observe that no bus has arrived. For some reason, you stick around until an even later time $t_3$ and notice that a bus has still not arrived. This is strange, because you know that the buses arrive in a way that obeys the assumptions of the Poisson process.

Mathematically, consider three successive times $t_3>t_2>t_1$, and the joint probability
$P[0,t_3-t_1 ; 0, t_2 - t_1]$
By the independence of events in disjoint intervals,
$P[0,t_3-t_1 ; 0, t_2 - t_1] = P[0,t_3-t_2] \cdot P[0,t_2-t_1]$

Alternatively, due to the memoryless assumption, we have

$P[0,t_3-t_1 ; 0, t_2 - t_1] = P[0,t_3-t_1]$
This may seem counterintuitive, but it can be understood if we think about the fact that we have arbitrarily divided the interval $t_3-t_1$ into two by inserting $t_2$. Doing so should not alter the probability! Therefore, we have the above expression.
Combining the two above, we have
$P[0,t_3-t_1] = P[0,t_3-t_2] \cdot P[0,t_2-t_1]$
With this expression in hand, we now derive the analytical form of $P[0,\tau]$. To do so, we differentiate the equation with respect to $t_2$; since the left-hand side does not depend on $t_2$, its derivative is zero. Along the way we adopt some abbreviated notation that should be clear enough: $p_0(\tau) \equiv P[0,\tau]$, with a prime denoting the derivative of $p_0$ with respect to its argument (the minus sign below comes from the chain rule, since $t_2$ enters $t_3-t_2$ with a negative sign).
$0 = -p_0'(t_3-t_2) \cdot p_0(t_2-t_1) + p_0(t_3-t_2) \cdot p_0'(t_2-t_1)$

From this we conclude that

$\frac{p_0'(t_3-t_2)}{p_0(t_3-t_2)} = \frac{p_0'(t_2-t_1)}{p_0(t_2-t_1)}$

Using a standard technique from separation of variables, we note that the two sides of the equation are functions only of $t_3-t_2$ and $t_2-t_1$ respectively, and these can be varied independently. Therefore each side is separately equal to a constant, which we call $-\lambda$ (with $\lambda > 0$, since $p_0$ must decrease as the interval grows). Writing $t_2-t_1$ as $\tau$,

$\frac{1}{p_0(\tau)}\frac{dp_0(\tau)}{d\tau} = -\lambda$

which has the unique solution

$p_0(\tau) = e^{-\lambda \tau + C}$

Since $p_0(0) = 1$ (zero events are certain in an interval of zero length), we must have $C = 0$, so

$p_0(\tau) = e^{-\lambda \tau}$
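As a sanity check on this result, here is a minimal Python sketch that builds the process directly from assumption 1: it chops a window of length $\tau$ into many small bins of width $h$, puts an event in each bin with probability $\lambda h$, and compares the empirical probability of zero events with $e^{-\lambda \tau}$ (the values of $\lambda$ and $\tau$ are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
lam, tau = 2.0, 0.7
n_bins = 10_000     # small bins of width h = tau / n_bins
h = tau / n_bins    # P[event in a bin] -> lam * h  (assumption 1)
n_trials = 200_000

# Number of occupied bins per trial window of length tau.
counts = rng.binomial(n_bins, lam * h, size=n_trials)

print(f"empirical p0  = {np.mean(counts == 0):.4f}")
print(f"e^(-lam*tau)  = {np.exp(-lam * tau):.4f}")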

Derivation of $P[N,\tau]$

From $p_0(\tau)$ we can proceed to calculate an analytical expression for $p_n(\tau)$ (we will use this notation from now on).
There are many approaches to deriving an analytical expression for $p_n(\tau)$. One way is to solve the "master equation", or "birth and death" equation, by one of many methods, such as generating functions or induction. I found a much more satisfying approach in some online notes.
Let $p_n(\tau)$ denote the probability of $n$ events in an interval $\tau$. We have just derived that

$p_0(\tau) = e^{-\lambda \tau}$

Next, we derive $p_1(\tau)$ from $p_0(\tau)$. Consider a time interval $\tau=t_3-t_1$ containing a single event. Suppose for the moment that the event occurs at a specific time $t_2$, within a vanishingly small window $\delta t$; we need not fix when the event occurs, since we will integrate over all placements of $t_2$ in the interval afterwards. Based on the independence of disjoint intervals, we can immediately write

$p_1(t_3-t_1 | \text{event at } t_2) = p_0(t_2-t_1) \cdot p_1(\delta t) \cdot p_0(t_3-t_2)$

To remove the conditioning, we integrate over $t_2$ from $0$ to $\tau$ (taking $t_1 = 0$, so that $t_3 = \tau$), using $p_1(\delta t) \to \lambda \, dt_2$ from assumption 1.

$p_1(t_3-t_1) = \int_0^\tau p_1(t_3-t_1 | \text{event at } t_2) dt_2$

$=\int_0^\tau e^{-\lambda t_2} \cdot \lambda \cdot e^{-\lambda(\tau-t_2)} dt_2$

$=\lambda \cdot e^{-\lambda \tau} \cdot \int_0^\tau dt_2$

$=\lambda \tau e^{-\lambda \tau}$

We can now extend this process to $N$ events by further subdividing the interval. Let the interval be $[t_0, t_0+\tau]$ and let $t_i$ be the time of the $i^{th}$ event. Each event contributes a factor of $\lambda$, and each gap (including the gap before the first event and the gap after the last) contributes a factor of $p_0$; since the gaps together cover the whole interval, the exponentials multiply to $e^{-\lambda \tau}$.

$p_N(\tau | t_1 \ldots t_N) = p_0(t_1-t_0) \left[ \prod_{n=2}^{N} \lambda \, p_0(t_n-t_{n-1}) \right] \lambda \, p_0(t_0+\tau-t_N)$

$=\lambda^N e^{-\lambda \tau}$

Next we remove the conditioning on the $t_1 \ldots t_N$.

Taking $t_0 = 0$ without loss of generality, we integrate over the ordered event times $0 \le t'_1 \le t'_2 \le \cdots \le t'_N \le \tau$:

$p_N(\tau) = \lambda^N e^{-\lambda \tau} \int_0^{\tau}dt'_N \int_0^{t'_N}dt'_{N-1} \cdots \int_0^{t'_2}dt'_1$

This integral is going to be nasty to evaluate, so we use a trick. Although I numbered the times of the events, $t_n$, consecutively, I will now consider the indices to be arbitrary markers, so that the $t_n$ can occur in any order. To keep the calculation correct, I must also divide by the $N!$ different orders in which the events can happen. Now each of the $t_n$ can range over the whole interval from $0$ to $\tau$. This simplifies the integral so that we have

$p_N(\tau) = \frac{1}{N!}\lambda^N e^{-\lambda \tau} \times \ldots$

$\int_0^{\tau}dt'_1 \int_0^{\tau}dt'_2 \cdots \int_0^{\tau}dt'_N$

which evaluates to

$ = \frac{1}{N!} \lambda^N \tau^N e^{-\lambda \tau} = \frac{(\lambda \tau)^N}{N!}e^{-\lambda \tau}$
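The $N!$ trick can be checked directly: if $N$ event times are drawn independently and uniformly on $[0, \tau]$, the fraction of draws that happen to land in increasing order should be $1/N!$. A minimal Python sketch, with $N$ and the trial count chosen arbitrarily:

import math
import numpy as np

rng = np.random.default_rng(3)
N, tau = 4, 1.0
n_trials = 500_000

# Draw N independent uniform event times per trial and test the ordering.
times = rng.uniform(0.0, tau, size=(n_trials, N))
frac_ordered = np.mean(np.all(np.diff(times, axis=1) > 0, axis=1))

print(f"fraction in increasing order = {frac_ordered:.5f}")
print(f"1/N!                         = {1 / math.factorial(N):.5f}")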


This concludes a derivation of the Poisson process. From the assumptions stated initially, we derived the probability distribution for $N$ events in a time interval $\tau$:

$p_N(\tau) = \frac{(\lambda \tau)^N}{N!}e^{-\lambda \tau}$
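To close the loop, here is a minimal Python sketch that simulates event counts the same way as before (small bins of width $h$, each occupied with probability $\lambda h$) and compares the observed distribution with the formula just derived; $\lambda$ and $\tau$ are again arbitrary:

import math
import numpy as np

rng = np.random.default_rng(4)
lam, tau = 3.0, 1.0
n_bins, n_trials = 10_000, 200_000
h = tau / n_bins  # P[event in a bin] -> lam * h

# Simulated number of events in each window of length tau.
counts = rng.binomial(n_bins, lam * h, size=n_trials)

for N in range(7):
    simulated = np.mean(counts == N)
    formula = (lam * tau) ** N / math.factorial(N) * math.exp(-lam * tau)
    print(f"N={N}: simulated {simulated:.4f}   formula {formula:.4f}")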

The Poisson process has many useful applications and also many interesting and useful properties, some of which we will discuss in later posts.

mathtest 3

Posted by Ariel Balter on December 17, 2009 in Uncategorized

math test 2

Posted by Ariel Balter on December 17, 2009 in Uncategorized

mathtest

Posted by Ariel Balter on November 6, 2009 in Uncategorized


Copyright © 2009-2015 blabnotes All rights reserved.