Talk:Examples of Markov chains

Imprecise statement

Note: The following statement in this entry is, apparently, imprecise: "In a game such as poker or blackjack, a player can gain an advantage by remembering which hands have already been played (and hence which cards are no longer in the deck)". In most forms of poker currently spread, the deck(s) are reshuffled between hands, making card counting as such pointless. In poker, however, it is very advantageous to note patterns in player behavior (betting tendencies, hands raised pre-flop in games such as Omaha and Hold'em, general aggression, etc.). — Preceding unsigned comment added by 68.197.158.187 (talk) 02:51, 19 June 2007 (UTC)

Blackjack and Markov Chains

I disagree with the claim that blackjack cannot be treated as a Markov chain because it depends on previous states. This claim relies on a very narrow idea of what the current state is: i.e. what your current cards are and what the dealer's cards are. However, if you expand the current state to include all cards that have been seen since the last shuffle, or, equivalently, all the cards remaining in the deck, then the next card drawn depends only on the current state. This is similar to the idea of a "Markov chain of order m" as described in the Markov chain article: you can construct a classical (order-1) chain from any order-m chain by changing the state space, as in the sketch below.
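For concreteness, here is a minimal sketch of that construction (the transition probabilities are made up for illustration): an order-2 chain over two symbols becomes an ordinary order-1 chain whose states are pairs of symbols.

```python
import numpy as np

symbols = [0, 1]
# p2[(a, b)][c] = P(next symbol = c | the last two symbols were a, b)
p2 = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.4, 0.6],
    (1, 0): [0.7, 0.3],
    (1, 1): [0.2, 0.8],
}

# Expanded state space: each state is the pair (previous symbol, current symbol).
states = list(p2.keys())
P = np.zeros((len(states), len(states)))
for i, (a, b) in enumerate(states):
    for c in symbols:
        j = states.index((b, c))      # the next pair slides the window forward
        P[i, j] = p2[(a, b)][c]

# Every row of P sums to 1, so the expanded chain is an ordinary
# (order-1) Markov chain over the larger state space.
assert np.allclose(P.sum(axis=1), 1.0)
```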

The problem is that the next state depends on the decision of the player, so you have to fix a player strategy, which may be randomised, before you can make it a Markov chain. You can then investigate the value of a strategy by looking at the hitting probability of the "Win" states, averaged over all possible starting states. --Nzbuu 12:16, 30 November 2007 (UTC)
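As a sketch of that calculation (a toy four-state chain with made-up probabilities, not actual blackjack): once the strategy is fixed, the hitting probabilities of the absorbing "Win" state solve a small linear system.

```python
import numpy as np

# States: 0 = Start, 1 = Mid, 2 = Win (absorbing), 3 = Lose (absorbing).
# Transition probabilities are hypothetical.
P = np.array([
    [0.0, 0.6, 0.1, 0.3],
    [0.0, 0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Hitting probabilities h satisfy h = P h with h(Win) = 1 and h(Lose) = 0.
# Restricting to the transient states {0, 1} gives (I - Q) h = r:
transient, win = [0, 1], 2
Q = P[np.ix_(transient, transient)]   # transient-to-transient block
r = P[transient, win]                 # one-step probability of winning
h = np.linalg.solve(np.eye(len(transient)) - Q, r)
print(dict(zip(["Start", "Mid"], h))) # value of the strategy per start state
```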

Monopoly

Actually, is Monopoly not also slightly non-Markovian? There is a certain amount of 'memory' in the sense that the 'Chance' cards are not shuffled after every draw (so the chance of drawing a certain card that has not yet been drawn is higher). --Victor Claessen 11:31, 2 December 2008 (UTC)
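A minimal sketch of that memory effect (assuming, for simplicity, that drawn cards are set aside rather than returned to the bottom of the deck): the draw probabilities depend on which cards have already appeared, so the board position alone is not a Markov state. Including the remaining deck contents in the state would restore the Markov property.

```python
import random

deck = list(range(16))          # 16 Chance cards, shuffled once at the start
random.shuffle(deck)

remaining = deck.copy()
for draw_number in range(3):
    # Probability that the next draw is card 0, given the history so far:
    p = 1 / len(remaining) if 0 in remaining else 0.0
    print(f"draw {draw_number}: P(card 0) = {p:.4f}")
    remaining.pop(0)            # take the top card; no reshuffle between draws
```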

Players also decide whether to buy a property (if available) when landing on it, and whether to build houses on properties they own. These aspects don't seem to fit the Markov model. 24.46.211.89 (talk) 19:03, 30 October 2009 (UTC)

Stable vs Oscillatory Models

The examples given are nice; however, I think one behavior of these models is missing: oscillation. A transition matrix that oscillates rather than converges would make the set of examples more robust. —Preceding unsigned comment added by 96.248.79.229 (talk) 16:02, 16 April 2011 (UTC)
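One candidate for such an example: the two-state permutation matrix is a valid transition matrix whose distribution oscillates with period 2 instead of converging, even though a stationary distribution still exists.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # the two states swap on every step

x = np.array([1.0, 0.0])         # start surely in state 0
for step in range(4):
    print(step, x)               # alternates between [1, 0] and [0, 1]
    x = x @ P

# The chain never settles, yet the uniform distribution is stationary:
pi = np.array([0.5, 0.5])
assert np.allclose(pi @ P, pi)
```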

Predicting the Weather Markov chain example

The eigenvector is good (it sums to 100%), but the weather graph is weird:

  • the probabilities of the arrows going into the sunny state sum to more than 100% (130%)
  • the probabilities of the arrows leaving the rainy state sum to 100% (0.5 + 0.5)

Generally we would expect both all the ins and all the outs to sum to 100% (the states are exclusive). It seems the author preferred an eigenvector whose components sum to 100%, but not the intermediate quantities. On another wiki page, https://en.wikipedia.org/wiki/Markov_chain (the stock market example), we have the opposite: all the ins and outs (rows and columns of the matrix) sum to 100%, but there the eigenvector's components sum to 187.5%...

It would be nice to give the reader some explanation of why the probability over the state space becomes > 100%, when we are taught in Probability 101 that it should sum to 100% (for the sake of consistency). — Preceding unsigned comment added by 93.102.61.32 (talk) 17:29, 22 July 2017 (UTC)
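A short sketch of the resolution (the numbers are illustrative, not necessarily those used in the article): in a row-stochastic transition matrix, only the outgoing probabilities of each state (the rows) must sum to 100%; the incoming entries (the columns) are unconstrained. The stationary eigenvector, in turn, is only defined up to scale, so it is normalised by hand to sum to 100%.

```python
import numpy as np

# Rows = current state, columns = next state; states: sunny, rainy.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

print(P.sum(axis=1))   # rows: [1.0, 1.0] -- the actual constraint
print(P.sum(axis=0))   # columns: [1.4, 0.6] -- no constraint at all

# Left eigenvector for eigenvalue 1, normalised into a distribution:
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()         # this normalisation is why the published vector sums to 100%
print(pi)              # stationary distribution, roughly [0.833, 0.167]
```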

removed — Preceding unsigned comment added by 93.102.61.32 (talk) 17:54, 22 July 2017 (UTC)