Wikipedia:Reference desk/Archives/Mathematics/2007 April 18

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 18

A power series

Is there/does anyone know a "closed" expression for the power series

 

?

Icek 07:27, 18 April 2007 (UTC)[reply]


 , where   is a Bessel function of the first kind.  --LambiamTalk 08:05, 18 April 2007 (UTC)[reply]

statistics:hypothesis testing, Type 1 and type 2 error

  • “There is life in Mars”. Is it a statistical hypothesis? Explain
  • “Mistake and Error are supposed to be synonymous to each other, but they portray certain differences in the light of statistical inference.” describe in regards to type I and type II error. —The preceding unsigned comment was added by Ngawang1 (talkcontribs) 10:37, 18 April 2007 (UTC).[reply]
We don't do your homework here. Please at least make an effort to search Wikipedia and/or Google. You could look at the entries Statistical hypothesis and error for a start. If you have any questions after you have made an initial effort, please let us know and we'll be glad to help. -Czmtzc 12:23, 18 April 2007 (UTC)[reply]

Why does 1+1=2?

It is the most stupid question I've ever asked, but I still can't get the answer. I've seen the meanings of '1' and '+' in Wikipedia as 'the natural number following 0 and preceding 2' and 'combining or adding two numbers to obtain an equal simple amount or total'. But there is still an empty space in my brain that I can't fill: why must 1+1 be 2, and not something else? I'm not asking about the glyph, but about the nature of '1', '=' and '+', and what '1+1=2' actually means. --lowerlowerhk 20:10, 18 April 2007 (UTC)[reply]

That's because we've defined "2" to be 1+1. Or rather, we've defined 2 to be the successor of 1, "successor" meaning the integer we obtain when we add 1 to another integer. In that case, 2 = Succ(1) = 1 + 1. Similarly, 3 = Succ(2), 4 = Succ(3), .... As for 2+2 = 4, one way is to prove 2+2 = 2+1+1 = 3+1 = Succ(3) = 4. What do you know, we have an article on that type of thing: First-order arithmetic. Root4(one) 13:22, 18 April 2007 (UTC)[reply]
BTW, if you can tolerate the more annoying portions of Gödel, Escher, Bach (the book), you will find a pretty good introduction to the concept. Root4(one) 13:29, 18 April 2007 (UTC)[reply]
Suppose you decide to invent arithmetic from scratch, just for the fun of it. You start with a binary operation, which you arbitrarily call "+". You want stuff for this operation to operate on, so you dream up something that you arbitrarily call "0". You decide that you want "+" to have an identity element, and this might as well be 0. So 0+0=0. Congratulations - you have invented the trivial group. You could stop there, but you decide to dream up another element - let's call this one "1". You know that 1+0=1 and 0+1=1 because 0 is the identity element for +. What about 1+1 ? Well you could decide that 1+1=0 - in which case you have invented the cyclic group C2. Or you could decide that 1+1=1 - in which case you have invented Boolean algebra. Or you could decide that 1+1 is equal to something else - what shall we call it ? - I know, let's call it "2" ... Gandalf61 13:34, 18 April 2007 (UTC)[reply]
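The three possible completions described above fit in a few lines of Python (an illustrative sketch; the function names are not from the discussion):

```python
# Three ways to decide what 1 + 1 should be, as described above.

def add_c2(a, b):
    """Cyclic group C2: addition modulo 2, so 1 + 1 = 0."""
    return (a + b) % 2

def add_bool(a, b):
    """Boolean algebra (logical OR), so 1 + 1 = 1."""
    return a | b

def add_nat(a, b):
    """Ordinary natural numbers: 1 + 1 is a new element we name "2"."""
    return a + b

print(add_c2(1, 1), add_bool(1, 1), add_nat(1, 1))  # 0 1 2
```

All three are legitimate binary operations with 0 as identity; only the last one forces a new element into existence.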
  • Of course, things get more tangled when you ask: "why does 2+1=3?", since now you need to worry about whether your operator is associative and/or commutative... Oh what a tangled web in which we stick, when first we try to derive arithmetic --- Charles Stewart(talk) 13:48, 18 April 2007 (UTC)[reply]
Perhaps an article on this should be created. —Bromskloss 14:22, 18 April 2007 (UTC)[reply]
I agree. I've Yahooed and Googled, and there is no well-organized mathematical proof of the identity '1+1=2' on the internet. —lowerlowerhk 14:25, 18 April 2007 (UTC)[reply]
So do I. Something so simple and yet so poorly understood is definitely worthy of an article all of its own. Algebra man 20:19, 18 April 2007 (UTC)[reply]
Questions like this about the foundations of mathematics require some sophistication to handle properly. Our inquiries can lead to surprising — even disturbing — conclusions. (The most famous example is Gödel's incompleteness theorems.) The ancient Greek geometers, by way of Euclid, taught us the value of approaching mathematics with axioms and theorems; we also must pay careful attention to our rules of inference. Mathematicians were surprisingly slow to find a satisfactory treatment of numbers, but today we would typically answer your question using the Peano axioms.
A formal approach through axioms allows us to prove that 1+1 = 2, which is one kind of answer. What it does not do is tell us why we should choose this particular concept. (In fact, computer logic can be built on a system where 1+1 = 0 instead.)
An alternative to the Peano approach uses sets. The idea of "1" somehow abstracts a common property of sets like {a} or {Jean} or {Tuesday}, and the idea of "2" does the same for {x,y} or {Ted,Alice} or {Saturday,Sunday}. Pursuing this approach can lead us to define cardinal numbers.
Equality for cardinal numbers has intuitive appeal; we define it in terms of matching elements. Thus we can match a with "Tuesday", with no elements unmatched; or we can match x with Saturday and y with Sunday. But if we match "Jean" with Alice, we have Ted left without a partner. Thus we define "1" and "2" and "=" and learn that "1 = 2" is false.
Addition takes us into new territory. One way to capture the idea with cardinal numbers is to merge two representative sets (with no elements in common), then count the result. We could merge {a} with {Tuesday} to get {a,Tuesday}, then match a with Alice and Tuesday with Ted. Thus we discover that "1+1 = 2".
Whether this kind of explanation will help fill the empty space in your brain you complain about, that I cannot say. If we're honest, we admit that one plus one equals two because we want it that way, and we want it that way because we find it useful. Mathematics, whether elementary or advanced, is largely about noticing patterns — either in the physical world or within mathematics itself — and creating formal definitions that allow us to describe and use those patterns. For example, advanced mathematics would say that the integers with addition and negation form a group, the name we give a really simple pattern that is enormously useful. --KSmrqT 14:44, 18 April 2007 (UTC)[reply]
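KSmrq's matching and merging steps can be sketched in Python; the helper names and the tagging trick used to keep the merged sets disjoint are choices of this sketch, not part of the comment above:

```python
# Cardinal arithmetic on finite sets, following the matching idea above.

def same_cardinality(s, t):
    """Two finite sets match if their elements pair off with none left over."""
    s, t = list(s), list(t)
    while s and t:          # match one element from each set at a time
        s.pop()
        t.pop()
    return not s and not t  # equal exactly when nothing is left unmatched

def merge(s, t):
    """Merge two sets, tagging elements by side so nothing collides."""
    return {(0, x) for x in s} | {(1, y) for y in t}

print(same_cardinality({"Jean"}, {"Ted", "Alice"}))  # False: "1 = 2" fails
print(same_cardinality(merge({"a"}, {"Tuesday"}),
                       {"Saturday", "Sunday"}))      # True: "1 + 1 = 2"
```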

I did save a proof of this before, starting from Peano's axioms. The natural numbers consist of a set N together with a "successor function" f() such that:

  1. There exists a unique member of N, called "1", such that f is a bijection from N−{1} to N.
  2. If a set, S, contains 1 and, whenever it contains a member n of N, it also contains f(n), then S = N. (This is "induction".)

We then define "+" by: a+1= f(a). If b is not 1 then b= f(c) for some c and a+b is defined as f(a+c).

Since 2 is DEFINED as f(1), it follows that 2 = f(1) = 1 + 1.

Now for 2+2 = 4. We DEFINE 3 as f(2) and 4 as f(3). 2 = f(1), so 2+2 = f(2+1). But 2+1 = f(2) = 3, so 2+2 = f(3) = 4! So. Yeah. [Mαc Δαvιs] 18:16, 18 April 2007 (UTC)[reply]
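The definitions in the two comments above (2 = f(1), a+1 = f(a), and a+b = f(a+c) when b = f(c)) translate almost literally into Python; the tuple encoding of numerals is an illustrative choice:

```python
# Numerals as iterated successors of 1, following the definitions above.

def f(n):
    """Successor: wrap a numeral one level deeper."""
    return ("succ", n)

ONE   = "one"
TWO   = f(ONE)    # 2 is DEFINED as f(1)
THREE = f(TWO)    # 3 is DEFINED as f(2)
FOUR  = f(THREE)  # 4 is DEFINED as f(3)

def add(a, b):
    if b == ONE:           # a + 1 = f(a)
        return f(a)
    _, c = b               # otherwise b = f(c) for some c
    return f(add(a, c))    # a + f(c) = f(a + c)

print(add(ONE, ONE) == TWO)   # True: 1 + 1 = 2
print(add(TWO, TWO) == FOUR)  # True: 2 + 2 = 4
```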

Addition covers much of this. There is another way to view this, in terms of the development of our mathematical understanding. Today I found a reference to something called the Bo Peep theorem[1], possibly the earliest theorem man invented; it's a precursor to the ideas of numbers, counting and addition. Counting is needed to gain the concepts of 1 and 2: when you're counting, two comes after one, which is what is referred to above as the successor. After counting comes counting on: start with five, count on three, and get eight. That is almost, but not quite, addition. Then you need a concept of cardinality, that is, how many elements there are in a set, and to link counting to cardinality. Then establish that cardinality is invariant under reordering the elements, and that two sets can be put in one-to-one correspondence if their cardinalities are equal. Now we are at a stage where we can say 1=1 and 2=2. Finally, establish the process of joining two sets together, and find that the cardinality of the joined set is the same as the result of counting on from the cardinality of one set to the cardinality of the second. It takes us until about age seven to establish all these results. --Salix alba (talk) 19:41, 18 April 2007 (UTC)[reply]
It's possibly still worthwhile having an article on the equation itself, as it is quite a fundamental (in several senses of the word) component of arithmetic and other mathematics, and I'd say there's an argument for it similar to the arguments for having the articles on 0.999..., 1-2+3-4+... and similar. We have an article at 1+1=2, although it's about a song (and looks like the actual title has a not equals sign in it), a dab on 2+2, and an article on Two plus two make five, which while not directly related do kind of suggest the appropriateness of a One plus one equals two article. Confusing Manifestation 04:02, 19 April 2007 (UTC)[reply]
I'd quite like to make addition into an A-class or FA article; it recently failed its A-class review, but is still better than average. The introduction has proved particularly difficult, as it's very hard to explain what addition means without very technical or circular definitions. 1+1=2 would seem to be the obvious article title; the current page could easily be moved to 1+1=2 (song). Principia Mathematica would probably be worth a link. --Salix alba (talk) 13:54, 19 April 2007 (UTC)[reply]

Addition is a physical theory discovered from observations. Try to discover it yourself. As a basis you need to be able to count; I assume you can. Get a bunch of apples. Count 1 apple and place it in a box. Count 1 apple and also place it in the box. Count the apples in the box. You will get 2. Try again: empty the box, count 2 apples and place them in the box, then count 3 apples and place them in the box. Behold! You will get 5 apples. Try a little more and you will find a general rule by induction.

Counting is a physical measurement procedure for measuring numbers, no different from measuring distances or times. Now addition gives you predictions about the results of countings based on the results of other countings, and is therefore a physical theory. It is, contrary to popular belief, falsifiable. Maybe you will some time find that 100 apples and 15 apples give 114 apples, not 115 as the theory predicts. You might doubt that addition could be wrong; you might even be unable to imagine a world where addition is wrong; but you can actually perform an experiment to check it.

Mathematicians generally reject this view and prefer to believe in Platonic idealism: they declare that addition is an absolute truth about numbers that holds in every possible world. However, they have already been wrong about such a claim in another field, namely Euclidean geometry, which was believed to produce absolute truths about measuring distances. Since the discovery of special relativity, the theory of Euclidean geometry has been falsified. —Preceding unsigned comment added by 84.187.60.68 (talkcontribs) 23:39, 2007 April 19

I believe I covered this already. Most mathematicians and physicists carefully distinguish between mathematical truth and physical reality, so your "holds in every possible world" is a claim not true in this world. Anyway, the experimental results may be different if we use raindrops or rabbits instead of rocks. --KSmrqT 00:38, 20 April 2007 (UTC)[reply]
I just discovered the article on Philosophy of mathematics, which is really excellent on that topic.
I don't remember whether it was Phoenix Guards or Five Hundred Years After, but there was a book I read a while ago that had a gem of a conversation on this very topic. I'll try to paraphrase.
Person A: How long does it take you to make one of them?
Person B: About two hours.
Person A: And we need three, so it'll take about six hours.
Person B: I've often observed that it takes six hours to make three, but how in the world did you know?
Person A: Didn't you know? I'm an Arithmetician. I know it's an unseemly skill for a gentleman, but it has its uses.
The reason we know it works is that, every time you do it, it works. The reason it makes sense you can demonstrate with pennies. In fact, that's the classic way to teach addition. I've got some, you've got some. We each count. I find I have 5, and you find you have 7. Now, you give me your coins and I recount. I find I have 12. Any time I try to combine 5 with 7 again, I know I will move each object in the same way, and count them the same way, and will therefore get the same answer. Anything that smacks of set theory is just a formal way of shuffling pennies around. Black Carrot 08:46, 21 April 2007 (UTC)[reply]

Equivalence relation vs. partition

Since an equivalence relation seems to be a partition by another name, when should I use which? —Bromskloss 14:15, 18 April 2007 (UTC)[reply]

An equivalence relation R on a set X would be a subset of X × X that satisfies certain properties (e.g., (x, x) ∈ R for every x in X, etc.), whereas a partition of X is a collection of subsets Ai of X such that Ai ∩ Aj = ∅ whenever i ≠ j, and the union of all the Ai is X. Does that help? I see the two as completely different entities, but maybe I don't quite understand where you're coming from. –King Bee (τγ) 14:36, 18 April 2007 (UTC)[reply]
Yes, they are different entities, but they go hand-in-hand - an equivalence relation induces a partition of the underlying set, and a partition induces the equivalence relation "xRy iff x and y are in the same subset" (well, certainly true for finite sets - I guess there might be some axiom of choice issues with infinite sets). Gandalf61 14:48, 18 April 2007 (UTC)[reply]
Indeed; good point. –King Bee (τγ) 14:54, 18 April 2007 (UTC)[reply]
There is a bijection between the two, but they have different "types"; using P(S) to denote the power set of S, a partition of A is an element of P(P(A)), whereas an equivalence relation on A is an element of P(A×A). Such one-to-one correspondences between different entities are not uncommon in mathematics, but as the two support different operations, and in a given context there is usually a reason to use certain operations more than others, this often makes one preferable over the other. If you're using quotient sets, then certainly the usual notion to use is equivalence. If there is no rhyme or reason behind the carving up, and certainly for finite sets, "partition" is the usual notion. In combinatorics, people always count partitions, never equivalence relations.  --LambiamTalk 15:30, 18 April 2007 (UTC)[reply]
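The correspondence between the two representations can be made concrete in a short Python sketch (function names are illustrative):

```python
# A partition of A determines an equivalence relation on A (a subset of
# A x A), and vice versa, as described in the comments above.

def partition_to_relation(blocks):
    """xRy iff x and y lie in the same block."""
    return {(x, y) for block in blocks for x in block for y in block}

def relation_to_partition(relation, elements):
    """Group each element with everything it relates to."""
    return {frozenset(y for (x, y) in relation if x == e) for e in elements}

A = {1, 2, 3, 4}
blocks = {frozenset({1, 2}), frozenset({3, 4})}

R = partition_to_relation(blocks)
print((1, 2) in R, (1, 3) in R)               # True False
print(relation_to_partition(R, A) == blocks)  # True: round trip recovers it
```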
BTW, the quotient set is often defined to be the set of equivalence classes, hence is actually formally equal to the partition. But this consideration of set-theoretic realizations is inappropriate; the quotient set should be viewed as a quotient object in the category-theoretic sense.--80.136.172.150 21:17, 18 April 2007 (UTC)[reply]
I didn't know the categorical thought police is so stern that we are not even supposed to consider set-theoretic realizations. :( Whether 'tis nobler in the mind to suffer / The objects and arrows of general abstract nonsense...  --LambiamTalk 21:27, 18 April 2007 (UTC)[reply]
So my wording was inappropriate. What I wanted to express was that set-theoretic realizations can be misleading. Nobody (except very few specialists) cares whether you take {a,{a,b}} or {{a},{a,b}} or something else as your definition of an ordered pair, as long as (a,b)=(c,d) <=> a=c and b=d remains true. Similarly, it is mostly irrelevant whether you choose the set of equivalence classes as definition of the quotient set or a set of representatives, as long as there is a surjective map whose fibres are precisely the equivalence classes.--80.136.131.196 18:11, 19 April 2007 (UTC)[reply]
Try to use partition if you want to emphasize that the whole set is covered and relation if you want to emphasize which particular elements are in the same group together. It is ultimately just a matter of taste.

Partial knowledge

This arose from some musing on people having partial knowledge of a numerical key.

First, a particular case - there are 5 cards face down, numbered 1 to 5 and each having a symbol on the hidden side. The nature of the symbol is irrelevant, the point is to know just what is hidden. Is it possible to give each of 5 people the knowledge of what symbol is on particular cards, so that no two people have complete knowledge but any three do?

Generalising, consider m such cards and n people. Is it possible for the people to be given knowledge of some cards so that no set of r people has complete knowledge but every set of r+1 does?…86.132.165.194 15:26, 18 April 2007 (UTC)[reply]

I do not believe it is possible. I'm not sure if this is an actual proof, but here is my reasoning: because every three people have to have complete knowledge, every card must be given to three people (getting three people together is the same as excluding two people; if any card were given to only two people, there would be a chance that those two would be excluded; no such chance exists if every card is given to three people).
However, since no two people should have complete knowledge, every pair of people must have a shared blank in their knowledge: a card whose value neither of them knows. To do this, three people would have to have three blanks and two people would have to have two blanks (X means blank):
   1 2 3 4 5
 A X X X
 B   X X X
 C     X X X
 D X   X
 E X     X

As you can see, there simply aren't enough open fields for cards 1 and 3 to distribute them to three persons. Even simpler: for A to have a shared blank with D and E, yet for them not to have the same list of blanks, D and E must both have card 1 as a blank. Add A's blank to that and you have three blanks for card 1, so only two open spaces are left, which is not enough to distribute the card to the required three persons. --86.87.66.216 17:11, 18 April 2007 (UTC) (Excuse the extremely ugly table, I hope you can see what I mean with it)[reply]

That looks definitive for m=5 cards, n=5 people and r=2, the threshold set size between "cannot have" and "must have" complete knowledge. Are there particular values of m, n and r that are feasible, and is it possible to obtain a general rule?…86.132.165.194 18:30, 18 April 2007 (UTC)[reply]
The number of different symbols seems to be irrelevant – as long as there are at least two. Using the same reasoning as given above, for each selection of r people out of n, there has to be at least one card whose symbol no-one among those r knows. If we add any of the remaining n−r participants to form a group of r+1 people, the extended group then knows the card's symbol. So each of the n−r remaining participants does know the symbol on that card. In the matrix, there are therefore exactly r blanks in the column of each card. Conversely, given a card, there is a unique group of r people from whom knowledge of its symbol has been withheld. In other words, not only does each group of participants of size r have at least one mystery card, but the group also does not have this specific lack of knowledge in common with any other group of size r. Therefore, the number of cards m has to be at least the number of different ways you can pick r elements from a set of n elements, which is known as the number of combinations and is given by binomial coefficients: the numbers you find in Pascal's triangle. In a formula:
 m ≥ C(n, r) = n! / (r! (n − r)!)
Provided that r < n, this inequality should not only be necessary but also sufficient. For the original problem with n = 5, r = 2, this means m ≥ 10.  --LambiamTalk 21:15, 18 April 2007 (UTC)[reply]
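Lambiam's construction (one card per r-subset of people, blank exactly for that subset) can be checked by brute force; this Python sketch uses an illustrative set-based representation:

```python
from itertools import combinations

# One card per r-subset of people; card i is unknown precisely to the
# people in blanks[i], as in the argument above.

def build_scheme(n, r):
    return [set(group) for group in combinations(range(n), r)]

def group_knows_all(blanks, group):
    """A group knows every card unless some card is blank for all of them."""
    return all(not group <= blank for blank in blanks)

n, r = 5, 2
blanks = build_scheme(n, r)
print(len(blanks))  # 10 = C(5, 2), the minimum number of cards

# No r people know everything; every r+1 people do.
print(all(not group_knows_all(blanks, set(g))
          for g in combinations(range(n), r)))      # True
print(all(group_knows_all(blanks, set(g))
          for g in combinations(range(n), r + 1)))  # True
```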


That looks OK. So, for a balanced solution, m = C(n, r), every column (card) has r blanks, and every row (person) has r·m/n blanks. For a given number of people, extra cards are accommodated by replicating existing ones in terms of which people know the symbol. Only when the number of cards is double, treble, ... the minimum value of C(n, r) will each person know the same number of cards. I think.…86.132.232.155 12:51, 19 April 2007 (UTC)[reply]

You can just give everyone knowledge of any extra cards; then everyone continues to carry the same number. (I had this and much more to say earlier, but Lambiam just went and solved the problem (curses!). I'm glad this much is still useful!) --Tardis 22:48, 19 April 2007 (UTC)[reply]

The answers assume that you either exactly know the symbol of a card or know nothing about it. I don't know your actual situation (I suppose you are not really interested in playing cards but are using this as a model?), but in general this assumption is invalid. Think about this: there are two symbols, red and green. Player A flips a coin and tells player B the result of the toss; suppose it was heads. Player B looks under the first card and sees a green symbol. Then B tells player C: "There is a green symbol if A threw heads and a red symbol if A threw tails." Now A and C together know that there is a green symbol under the card, but neither A nor C knows it alone. You can create all kinds of models like this, sometimes assuming the players are trustworthy, sometimes not. I would recommend Game theory, Cryptography, Information theory and Communication complexity, but I just looked there and Wikipedia seems to be very shallow on game theory.
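The coin-flip trick described above is, in effect, a 2-out-of-2 secret split of one bit (a one-time pad); a minimal Python sketch with illustrative names:

```python
import secrets

# Split one bit of knowledge (the card's colour) between two players so
# that neither share alone reveals anything, as in the coin-flip story.

def split(colour_bit):
    """Return two shares; each alone is a uniformly random bit."""
    coin = secrets.randbits(1)         # A's coin flip
    return coin, coin ^ colour_bit     # B's conditional statement

def combine(share_a, share_b):
    """Both shares together recover the colour."""
    return share_a ^ share_b

GREEN, RED = 0, 1
a, b = split(GREEN)
print(combine(a, b) == GREEN)  # True, yet a or b alone says nothing
```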