Talk:Shannon's source coding theorem


What does the E in ES mean?


Could someone explain the 'E' in 'ES' please? Centroyd (talk) 12:57, 19 September 2016 (UTC)

I believe it is the expected value operator. If so, ES denotes the expected value of the random variable S, although the correct presentation would be $\operatorname{E}[S]$, I guess. If someone confirms, please change it.
177.138.210.163 (talk) 03:20, 25 March 2017 (UTC)

Reorganised/Merged


I suggest the 'Shannon noiseless source coding theorem' article be merged with this 'Source coding' article.

--Hpalaiya 22:54, 19 May 2006 (UTC)

This is a separate, large article; maybe a summary of this article might be appropriate on the source coding page. Strong disagree.

—Preceding unsigned comment added by 81.104.79.141 (talkcontribs) 16:18, 21 December 2006

Merge carried out, having first:

Some more clean-up to do, to blend the presentation of the two theorems more closely. (Detailed editing not yet started).

-- Jheald 22:00, 6 March 2007 (UTC).

I don't think that the   that keeps showing up is ever explained or defined in this article. A quick explanation of what it is would go a long way for someone who is not already familiar with the material.

Jeejaw (talk) 02:24, 11 September 2009 (UTC)

An alternative proof for the symbol code case


Applying Jensen's inequality to the expression $\sum_i p_i \log_2 \frac{2^{-s_i}}{p_i}$, we can get a direct proof without using any $\varepsilon$'s:

$$H(X) - \operatorname{E}S \;=\; \sum_i p_i \log_2 \frac{2^{-s_i}}{p_i} \;\le\; \log_2\!\left(\sum_i 2^{-s_i}\right).$$

Using Kraft's inequality $\sum_i 2^{-s_i} \le 1$ on the right side:

$$H(X) - \operatorname{E}S \;\le\; \log_2 1 \;=\; 0, \qquad \text{that is,} \qquad \operatorname{E}S \ge H(X).$$

--Vamos (talk) 20:12, 18 November 2009 (UTC)
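For readers who want to sanity-check the inequality numerically, here is a small Python sketch. The probabilities and codeword lengths are made up for illustration (they correspond to the prefix code 0, 10, 110, 111), not taken from the article:

```python
import math

# Hypothetical source: four symbols with these probabilities
p = [0.5, 0.25, 0.125, 0.125]
# Codeword lengths s_i of a prefix code (e.g. 0, 10, 110, 111)
s = [1, 2, 3, 3]

kraft = sum(2 ** -si for si in s)          # Kraft sum, must be <= 1
H = -sum(pi * math.log2(pi) for pi in p)   # entropy H(X) in bits
ES = sum(pi * si for pi, si in zip(p, s))  # expected codeword length E[S]

print(kraft, H, ES)  # this particular code is optimal, so E[S] equals H(X)
```

For these dyadic probabilities the bound is tight; perturbing the lengths while keeping the Kraft sum at most 1 can only increase E[S] above H(X).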

Source coding theorem for symbol codes is wrong?


As far as I can see, the source coding theorem for symbol codes is wrong, or at least not stated accurately. Here is the counterexample: let $f(0) = \lambda$, $f(1) = 0$, and $f(x_1 \dots x_n) = f(x_1) \cdots f(x_n)$ for other words ($\lambda$ is the empty word), where X is 0 or 1 with equal probability. Then it is clear that f is a decipherable code, but $\operatorname{E}S = \tfrac12 < H(X) = 1$. There is also a counterexample that does not use the empty word. Could someone help to state this theorem more accurately? Alexei Kopylov (talk) 17:30, 5 May 2010 (UTC)


Well Alexei, using any zero-cost symbol (empty codeword), a message longer than a single symbol will not be uniquely decodable. It contradicts Kraft's inequality, which is a necessary condition for decodability. Vamos (talk) 17:19, 12 May 2010 (UTC)

Oh, thanks! The problem was that the statement was about a decipherable code, and there was no definition of what a decipherable code was. I changed it to uniquely decodable code, with the wiki link. See my change. Alexei Kopylov (talk) 01:06, 14 May 2010 (UTC)
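The failure mode discussed above can be checked mechanically. Here is a minimal Python sketch of the empty-codeword counterexample (assuming the code f(0) = empty word, f(1) = "0", extended to words by concatenation):

```python
# Hypothetical counterexample code: f(0) = "" (the empty word), f(1) = "0"
code = {"0": "", "1": "0"}

def encode(msg):
    # Extend f to words by concatenation
    return "".join(code[ch] for ch in msg)

# Kraft sum: 2^0 + 2^-1 = 1.5 > 1, so unique decodability must fail
kraft = sum(2 ** -len(w) for w in code.values())

# Single symbols decode fine, but distinct messages collide:
print(encode("1"), encode("01"))  # both messages encode to "0"

# With X uniform on {0, 1}: E[S] = 0.5, below the entropy H(X) = 1 bit
ES = 0.5 * len(code["0"]) + 0.5 * len(code["1"])
```

The collision between "1" and "01" shows concretely why the empty codeword breaks unique decodability, exactly as noted in the reply above.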

Practical example


Why is no practical example included for the less mathematically inclined? I often find a practical example can greatly help to illustrate abstract concepts, and I think this would greatly enhance the usefulness of the article. — Preceding unsigned comment added by 145.18.212.47 (talk) 08:31, 27 June 2013 (UTC)
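One candidate practical example, sketched in Python: Huffman coding is the classic symbol code that approaches the entropy bound of the theorem. This is only an illustrative sketch (the text "abracadabra" and the helper `huffman_lengths` are made up for this example, not from the article):

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Return {symbol: codeword length} for a binary Huffman code."""
    # Heap of (weight, tiebreak_id, {symbol: depth-so-far})
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)   # two lightest subtrees
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

text = "abracadabra"
lengths = huffman_lengths(Counter(text))
total_bits = sum(lengths[ch] for ch in text)
# Fixed-width coding of the 5 distinct symbols needs 3 bits each (33 bits
# total); the Huffman code uses 23 bits, close to the entropy lower bound.
```

The source coding theorem for symbol codes says `total_bits / len(text)` can never be pushed below the per-symbol entropy, and Huffman gets within one bit per symbol of it.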

Why not drop "with negligible probability of loss"?


Shannon's original version is 100% reliable. Source sequences that are not typical can be encoded with longer code words. That does not hurt the approximation much, because the probability of atypical sequences is so small. Wstomv (talk) 22:00, 4 December 2015 (UTC)
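The "probability of atypical sequences is so small" claim above can be illustrated numerically. A Python sketch for a Bernoulli source (the parameters p, n, and delta are arbitrary choices for illustration):

```python
import math

# Bernoulli(p) source; length-n sequences whose empirical frequency of 1s
# deviates from p by more than delta are treated as "atypical"
p, n, delta = 0.1, 1000, 0.05

def binom_pmf(n, k, p):
    # Probability that a length-n sequence contains exactly k ones
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

atypical = sum(binom_pmf(n, k, p) for k in range(n + 1)
               if abs(k / n - p) > delta)
# atypical is tiny, so encoding those rare sequences with longer (even
# uncompressed) codewords barely moves the average rate above the entropy
```

This is exactly why dropping "with negligible probability of loss" is possible: the lossless scheme pays extra bits only on a set whose probability vanishes as n grows.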