Wikipedia:Reference desk/Archives/Mathematics/2012 March 23

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 23

Is it possible to use a repeating decimal to fraction converter to store information?

So I'm interested in the possibility of a human WinRAR-style data compressor and extractor. I have a friend who claims to have this ability. If a human brain could function like a ZIP file, you could store a relatively small amount of data in memory and then extract the full information through a simple but repetitive step-by-step algorithm.

So I was thinking one possible method would be to run repeating decimals in reverse. You could code the information into digits 0-9, treat that digit string as the repeating block of a decimal, and then use a repeating-decimal-to-fraction converter to find the fraction it represents. Then you would only have to memorize a single fraction, and you could later "extract" from that fraction through the process of simple long division and recover your stored data.

Is this mathematically possible? --Gary123 (talk) 01:30, 23 March 2012 (UTC)[reply]
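
For concreteness, here is a minimal Python sketch of the scheme being described above. The helper names encode/decode and the sample 20-digit string are made up for illustration; the "fraction-reverser" is simply block/(10^n - 1), and the "extractor" is ordinary long division.

from fractions import Fraction

def encode(digits: str) -> Fraction:
    """Treat the digit string as the repeating block of 0.(digits)(digits)...
    and return the equivalent fraction block / (10^n - 1)."""
    n = len(digits)
    return Fraction(int(digits), 10 ** n - 1)   # Fraction reduces to lowest terms

def decode(frac: Fraction, n: int) -> str:
    """Recover the first n digits after the decimal point by long division."""
    out, r = [], frac.numerator % frac.denominator
    for _ in range(n):
        r *= 10
        out.append(str(r // frac.denominator))
        r %= frac.denominator
    return "".join(out)

data = "31415926535897932384"                       # 20 digits to "compress"
f = encode(data)
print(f)                                            # the single fraction to memorize
print(decode(f, len(data)) == data)                 # True: division recovers the data
print(len(str(f.numerator)) + len(str(f.denominator)))  # total digits to memorize instead

The round trip works, but as the replies below explain, the numerator and denominator together generally take at least as many digits as the original data.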

It depends whether there's any reason that the digits would be more likely to repeat than randomly chosen ones. If not, there's nothing to be gained this way. In general, file compression is of no use unless there's some kind of pattern in the data that makes it more ordered than characters chosen at random. 64.140.121.1 (talk) 02:55, 23 March 2012 (UTC)[reply]
Those memory masters with their decks of cards can't use very much compression, since the decks are shuffled. They do know that the cards they have already named can't occur again, and some actually use that information, which I think is amazing, but really I think Lempel-Ziv and suchlike are best left to the computers. Dmcq (talk) 11:09, 23 March 2012 (UTC)[reply]
"you would only have to memorize a single fraction" doesn't actually save you much work in general. If you consider a repeating decimal with n repeating digits, in general this will convert to a fraction with about n digits in the numerator and denominator. For example the repeating decimal x = 0.1756917569... converts to the fraction x = 17569/99999. A similar form will always result: the repeating segment in the numerator and some number with digits all 9 in the denominator. In some cases the fraction will reduce, but even so it's typically not going to be any easier to remember than the repeating decimal itself. It could even be harder, since you now have to remember both the numerator and the denominator, which might be more total digits than the repeating decimal. Staecker (talk) 12:55, 23 March 2012 (UTC)[reply]

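As a quick illustration of the point above (Python sketch; the random blocks are arbitrary, and Fraction reduces to lowest terms automatically):

from fractions import Fraction
from random import randrange

print(Fraction(17569, 99999))      # stays 17569/99999 -- this one does not reduce at all

# Numerator plus denominator digit count for a few random repeating blocks of up to 10 digits
for _ in range(5):
    block = randrange(10 ** 10)
    f = Fraction(block, 10 ** 10 - 1)
    print(f, len(str(f.numerator)) + len(str(f.denominator)))

Even when the fraction does reduce, the numerator and denominator together typically need roughly twice as many digits as the repeating block itself.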

In general, any data compression method will increase (or at the very least keep the same) the number of pieces of information that someone will need to keep track of to encode arbitrary data. This follows mathematically from the pigeonhole principle. For example, there are slightly more than 1.1×10^20 different ways to put together up to 20 decimal digits; however, there are fewer than 1.2×10^19 ways of constructing a fraction in which at most 19 decimal digits vary. Since 1.2×10^19 < 1.1×10^20, not every input decimal string can have a unique corresponding fraction with at most 19 digits. In order to uniquely represent arbitrary data, you've swapped memorizing 20 decimal digits for memorizing (on average) at least 20 digits plus the conversion algorithm.
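The first of those counts is easy to verify (a rough Python sketch; the exact convention for counting is less important than the inequality itself):

# Non-empty decimal strings of length at most 20, and of length at most 19
strings_up_to_20 = sum(10 ** k for k in range(1, 21))
strings_up_to_19 = sum(10 ** k for k in range(1, 20))
print(strings_up_to_20)   # 111111111111111111110, a bit over 1.1 * 10^20
print(strings_up_to_19)   # 11111111111111111110,  a bit over 1.1 * 10^19
print(strings_up_to_19 < strings_up_to_20)   # True: more strings to encode than short encodings available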
That's not to say compression isn't useful if you restrict yourself to a subset of sequences. If you can limit yourself to encoding the sequences 142857, 285714, 428571, 571428, 714285, and 857142, you can do that easily with just one digit. But you're going to have a tough time encoding 718281 with the same scheme. (Conversely, some other, non-fraction scheme could encode 718281 in a single digit, but it would make other sequences longer.) That's what makes standard lossless compression schemes work: while arbitrary data compresses poorly, the data (like text) you typically encounter is much more limited, and has certain properties which the compression schemes specifically exploit. -- 140.142.20.101 (talk) 01:09, 24 March 2012 (UTC)[reply]
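To make that concrete, here is a toy codec (purely illustrative names) that handles exactly those six sequences and nothing else:

# Encode only the six cyclic shifts of 142857, each as a single digit
SEQS = ["142857", "285714", "428571", "571428", "714285", "857142"]

def compress(seq: str) -> str:
    return str(SEQS.index(seq))        # raises ValueError for anything else

def decompress(code: str) -> str:
    return SEQS[int(code)]

print(compress("714285"))              # '4'
print(decompress("4"))                 # '714285'
# compress("718281") -> ValueError: the scheme simply has no codeword for it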

More direct proof?

Theorem: For every natural number n with gcd(n, 10) = 1, there exists a natural number k such that n | 10^k - 1.

Proof: the decimal expansion of 1/n repeats because the long-division algorithm has only finitely many possible states. The portion that repeats may not occur directly after the decimal point. This has the effect of representing the number as 1/n = a/(10^j (10^k - 1)), where a ≥ 1 and j ≥ 0, k ≥ 1 are integers. Therefore a·n = 10^j (10^k - 1), which implies n | 10^j (10^k - 1), and therefore, since gcd(n, 10) = 1, n | 10^k - 1.

This is a pretty neat proof, but is there a more elementary number-theoretic approach to the same theorem? Would such an approach turn out to have essentially the same components as this proof, or are there proofs which are fundamentally different from this one? --COVIZAPIBETEFOKY (talk) 17:45, 23 March 2012 (UTC)[reply]
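
Not an answer to the question, but a small computational illustration of the statement (Python sketch): the smallest such k is the multiplicative order of 10 mod n, which is also the length of the repeating part of 1/n.

from math import gcd

def order_of_10(n: int) -> int:
    """Smallest k with 10^k = 1 (mod n); this exists whenever gcd(n, 10) == 1."""
    k, r = 1, 10 % n
    while r != 1:
        r = (r * 10) % n
        k += 1
    return k

for n in range(3, 30):
    if gcd(n, 10) == 1:
        k = order_of_10(n)
        assert (10 ** k - 1) % n == 0      # n divides 10^k - 1, as the theorem claims
        print(n, k)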

*facepalm*
It's just Euler's theorem. And I bet the proofs are essentially the same, because they both involve observing that something necessarily has finite order. --COVIZAPIBETEFOKY (talk) 18:14, 23 March 2012 (UTC)[reply]
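
For what it's worth, a quick check in the same vein (the naive totient function here is just for illustration): Euler's theorem says 10^φ(n) ≡ 1 (mod n) whenever gcd(n, 10) = 1, so k = φ(n) always works, and the minimal k from the proof above divides φ(n).

def phi(n: int) -> int:
    """Euler's totient, computed by trial-division factorization (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# k = phi(n) satisfies the theorem for every n < 1000 coprime to 10
for n in range(3, 1000):
    if n % 2 != 0 and n % 5 != 0:
        assert pow(10, phi(n), n) == 1
print("checked")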