Talk:Probabilistic method

Latest comment: 2 years ago by Renaud Pacalet in topic Counting version of the first example

History


I removed the reference to the 1959 origins of the method because other examples came earlier, but I'm not sure when the actual origins are. Some sources cite Erdős's 1947 paper, others Szele's Hamiltonian cycle result. Quite a few places seem to dodge the issue entirely (perhaps because the line between a "probabilistic" and a "counting" proof is so vague?).

A striking example, dating back to 1909, is the normal number theorem, due to Émile Borel, which may be one of the first instances of the probabilistic method: it provided the first proof of the existence of normal numbers, with the help of the first version of the strong law of large numbers. I am not convinced that the terminology should be limited to results in combinatorics (or to the favourite topics of Erdős, including number theory). Chassain (talk) 12:59, 16 July 2011 (UTC)

Article Probabilistic proofs of non-probabilistic theorems


What is the connection between this article and the one entitled "Probabilistic proofs of non-probabilistic theorems"? Pierre de Lyon (talk) 16:00, 2 May 2009 (UTC)

I would say this article is about probabilistic proofs in combinatorics, whereas the other article is a list of examples in other areas of mathematics. Charvest (talk) 19:05, 2 May 2009 (UTC)

Expected value


In the title and definition you write only about probability. However, both examples use the expected value. Is that accidental? ru:МетаСкептик12 — Preceding unsigned comment added by МетаСкептик12 (talk · contribs) 10:41, 27 July 2012 (UTC)

Questions about second example


1. Why is the number of cycles of length i in the complete graph on n vertices $\frac{n!}{2i(n-i)!}$? I thought it should be $\binom{n}{i}$, because each group of i vertices determines a different cycle.


2. You prove that two properties hold with positive probability, but how do you know that there is a graph that has both of these properties? You cannot multiply the probabilities, because the two properties are not necessarily independent!


--Erel Segal (talk) 09:58, 10 November 2014 (UTC)


My answer to 1: To list all cycles of length i, consider this process: start at an arbitrary vertex (n choices), then move to a vertex not yet visited (n-1 choices), then n-2 choices for the vertex after that, and so on down to n-i+1. This gives n(n-1)(n-2)...(n-i+1) = n!/(n-i)! possibilities. However, this counts each cycle several times. For example, the cycle (1,2,3) is the same as (2,3,1) and (3,1,2) (just shift the list and imagine it is circular, with the end attached to the front), so we divide by i for the i shifts. But we can also traverse the cycle in the other direction, as (3,2,1), whose shifts are (2,1,3) and (1,3,2), so we also divide by 2 for the two directions. The result is $\frac{n!}{2i(n-i)!}$.
I have to admit it took me a few minutes to fully understand the formula. Flowi (talk) 00:13, 26 February 2017 (UTC)
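The count above is easy to sanity-check by brute force for small n and i. A throwaway sketch (the names `count_cycles` and `formula` are mine, not from the article): it enumerates vertex sequences, canonicalizes each cycle over all rotations and both directions, and compares the number of distinct cycles against n!/(2i(n-i)!).

```python
from itertools import permutations
from math import factorial

def count_cycles(n, i):
    """Count distinct cycles of length i in the complete graph K_n by brute force."""
    seen = set()
    for perm in permutations(range(n), i):
        # Canonical representative: the lexicographically smallest among
        # all i rotations of the sequence and of its reversal.
        variants = []
        for seq in (perm, perm[::-1]):
            variants += [seq[k:] + seq[:k] for k in range(i)]
        seen.add(min(variants))
    return len(seen)

def formula(n, i):
    # n!/(n-i)! walks, divided by i rotations and 2 directions.
    return factorial(n) // (factorial(n - i) * 2 * i)

for n, i in [(5, 3), (5, 4), (6, 3), (6, 5)]:
    assert count_cycles(n, i) == formula(n, i)
```

For instance, K_5 has 5!/(2·3·2!) = 10 triangles, one per 3-element vertex set, matching the asker's intuition for the special case i = 3.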


My answer to 2: (I think the article is not very clear on that, and I will try to fix this.)
We have proved something stronger than positive probability: in both cases the probability of the "bad" event tends to zero (it is o(1)). Thus, for sufficiently large n, each of the two complementary "good" events has probability > 1/2. These two events cannot be disjoint, since then their probabilities would sum to more than 1. Hence their intersection has nonzero probability, so a graph with both properties exists.
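Writing A and B for the two "good" events, the step from "both probabilities exceed 1/2" to "the intersection is nonempty" is just inclusion-exclusion:

```latex
\Pr[A \cap B] \;=\; \Pr[A] + \Pr[B] - \Pr[A \cup B]
\;\ge\; \Pr[A] + \Pr[B] - 1
\;>\; \tfrac{1}{2} + \tfrac{1}{2} - 1 \;=\; 0.
```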
I checked with "The Probabilistic Method" (4th ed.) by Alon and Spencer, and they also use this "> 0.5" argument, though they are extremely brief. You can find it in the chapter "The Probabilistic Lens: High Girth and High Chromatic Number". (Maybe we should add it to the list of references, because the Erdős paper looks, at a short glance, overly complicated, and it may use a less streamlined argument.)
Flowi (talk) 01:01, 26 February 2017 (UTC)

Counting version of the first example


In the counting version of the first example there are $\binom{n}{r}$ r-subgraphs and 2 "bad" colorings per r-subgraph, but how can we conclude from this that "the total number of colorings that are bad for some (at least one) subgraph is at most $2\binom{n}{r}$"? For any of these subgraphs, and for any of its two "bad" colorings, there are $2^{\binom{n}{2}-\binom{r}{2}}$ different colorings of the other edges. So shouldn't the upper bound on the total number of "bad" colorings be $2\binom{n}{r}2^{\binom{n}{2}-\binom{r}{2}}$ instead?

If we use this upper bound instead of $2\binom{n}{r}$, a necessary condition for having at least one "good" coloring is that this upper bound be less than the total number of colorings: $2\binom{n}{r}2^{\binom{n}{2}-\binom{r}{2}} < 2^{\binom{n}{2}}$, that is, $\binom{n}{r}2^{1-\binom{r}{2}} < 1$, which is the same condition as with the probabilistic method. --Renaud Pacalet (talk) 09:44, 18 May 2022 (UTC)
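The equivalence of the corrected counting condition and the probabilistic one can also be confirmed numerically. A small sketch (function names are mine; the second condition is multiplied through by $2^{\binom{r}{2}}$ so everything stays in exact integer arithmetic):

```python
from math import comb

def counting_condition(n, r):
    # Corrected counting bound < total number of colorings:
    # 2 * C(n,r) * 2^(C(n,2) - C(r,2)) < 2^C(n,2)
    return 2 * comb(n, r) * 2 ** (comb(n, 2) - comb(r, 2)) < 2 ** comb(n, 2)

def probabilistic_condition(n, r):
    # C(n,r) * 2^(1 - C(r,2)) < 1, cleared of the negative power of 2:
    # 2 * C(n,r) < 2^C(r,2)
    return 2 * comb(n, r) < 2 ** comb(r, 2)

# The two conditions agree on every pair tried.
for r in range(3, 8):
    for n in range(r, 60):
        assert counting_condition(n, r) == probabilistic_condition(n, r)
```

For r = 4 the condition holds up to n = 6 and fails from n = 7 on, under both formulations.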