Izumi Konata
May 4, 2012

A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods (as in a "chain"). It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system.
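
In symbols (a standard statement of the Markov property; this formula is not part of the quoted text), for a sequence of random variables X_1, X_2, ...:

```latex
P(X_{n+1} = x \mid X_1 = x_1, \dots, X_n = x_n) = P(X_{n+1} = x \mid X_n = x_n)
```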

In the literature, different kinds of Markov processes are designated as "Markov chains". Usually, however, the term is reserved for a process with a discrete set of times (i.e. a discrete-time Markov chain (DTMC)),[2] although some authors use the same terminology to refer to a continuous-time Markov chain without explicit mention.[3][4] While the time parameter is mostly agreed to be discrete, the Markov chain state space does not have an established definition: the term may refer to a process on an arbitrarily general state space.[5] However, many uses of Markov chains employ finite or countable (or discrete on the real line) state spaces, which admit a more straightforward statistical analysis. Because there are many other variations, extensions and generalisations besides the time index and state-space parameters (see Variations), the remainder of this article concentrates on the simplest discrete-time, discrete state-space case, unless mentioned otherwise.

The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
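
As a concrete sketch of these ingredients, here is a minimal Python example using a made-up two-state "weather" chain (not from the quoted article): the state space is {0, 1}, the transition matrix is a row-stochastic NumPy array, and the initial distribution evolves by right-multiplication.

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = "sunny", state 1 = "rainy".
# Row i holds the transition probabilities out of state i; each row sums to 1.
P = np.array([[0.9, 0.1],   # sunny -> sunny 0.9, sunny -> rainy 0.1
              [0.5, 0.5]])  # rainy -> sunny 0.5, rainy -> rainy 0.5

assert np.allclose(P.sum(axis=1), 1.0)  # a transition matrix is row-stochastic

pi0 = np.array([1.0, 0.0])  # initial distribution: start sunny with certainty

# One transition: the distribution over states evolves by right-multiplication.
pi1 = pi0 @ P
print(pi1)  # [0.9 0.1]
```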

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
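
A short simulation (continuing the hypothetical weather chain above) makes the Markov property concrete in code: the next state is sampled using only the current state's row of transition probabilities, never the earlier history.

```python
import random

# Same hypothetical weather chain, as plain transition rows.
P = [[0.9, 0.1],   # from state 0 ("sunny")
     [0.5, 0.5]]   # from state 1 ("rainy")

def run_chain(start, n_steps, seed=0):
    """Simulate n_steps transitions. Note the next state depends
    only on `state`, not on anything earlier in `path`."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        # Sample the next state from the current state's row only.
        state = rng.choices([0, 1], weights=P[state])[0]
        path.append(state)
    return path

print(run_chain(start=0, n_steps=10))
```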

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
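
For instance, the distribution over states after n steps is determined exactly by the n-th power of the transition matrix, even though any single run is random; a sketch with the same made-up chain:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi0 = np.array([1.0, 0.0])

# The state distribution after n steps is pi0 @ P^n -- exact, despite
# the randomness of any individual trajectory.
for n in (1, 5, 50):
    print(n, pi0 @ np.linalg.matrix_power(P, n))

# As n grows, the distribution approaches the stationary distribution,
# the solution of pi = pi @ P with entries summing to 1; for this
# chain that works out to [5/6, 1/6].
```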
