
The proof is invalid

The halting proof's diagonalization argument implies an algorithm that simulates an input machine on its own encoding. This is the realization of "diagonalization". This algorithm takes one input, copies it, and then gives the original input code the copy as input. It must do so in order to ensure that the program runs only on itself. The article calls this machine G.
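For concreteness, here is a minimal Python sketch of the machine under discussion, in the form it takes in the article's proof; halts() is the hypothetical decider assumed to exist only for the sake of the argument:
def halts(program_source, input_data):
    ...   # hypothetical black box: True if the given program halts on the given input

def g(program_source):
    # copy the single input and hand the pair to the assumed halting decider
    if halts(program_source, program_source):
        while True:   # decider predicts "halts": loop forever instead
            pass
    else:
        return        # decider predicts "loops": halt immediately
# The diagonal step is running g on its own encoding: g((G)) halts exactly
# when halts((G), (G)) predicts that it does not.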

Because the halting algorithm needs an algorithm's "code" and input to that code, it is necessary that the halting algorithm trace the input machine to some degree. It must be traced a certain amount in order to gain meaningful information. I claim that, in the case of machine G, the amount required ensures the halting algorithm would trace G through the action of copying the input and running H.

So, G receives (G) as input, where (G) is the code of G. G copies (G) and gives (G)(G) to the halting algorithm. The halting algorithm traces G on (G), and we are back where we started: an infinite loop.

This is the reason for the contradiction arrived at, not the assumed existence of a halting algorithm. —Preceding unsigned comment added by 96.32.188.25 (talk) 22:25, 18 May 2009 (UTC)

WRONG again. See refutations of the argument presented by similar anon contributors at Talk:Halting_problem/Archive3#Article_Halting_proof_fails (March 2009), Talk:Halting_problem/Archive3#proof_fails (November 2008), Talk:Halting_problem/Archive3#Proof_is_invalid (August 2007), Talk:Halting_problem/Archive1#Big_Trouble_in_Proof.3F (2002) — Arthur Rubin (talk) 22:40, 18 May 2009 (UTC)

What if you traced the state of the variables (not constants, the stack, ...) at each position (instruction) of the program? If that state repeated at a certain position, you would have found an infinite program. But only if you can trace these things; otherwise I do not know. But if you can, the halting problem is solvable.
Of course, running a program which never halts and waiting for it to halt is BS ;). You'll wait forever. 84.16.123.194 (talk) 08:53, 26 May 2009 (UTC)
The only question is whether you can create such a program that never finishes, like an enumeration of the natural numbers; of course, no overflow is considered here. Also, a program which rewrites itself may be a problem on an infinite(!) tape.
i = 1 ; while ( i < INFINITY ) { print i ; i := i + 1 } // if i never overflows => program never halts ; 
// comparing to INFINITY might be problem?
i = 1 ; while ( 0 < i ) { print i ; i := i + 1 } // this'd work ;)
I have seen some "proofs" where the tested program calls the testing program. WTF is that? Why should the tested program know about it? I think that presuming such knowledge is a trap.
It may be a valid program, but it is quite a special case. Although the other program might be calling arbitrary programs (at arbitrary addresses, so that it may find the tester) on a tape IRL and run the tested program ...
I feel (=> don't have a proof) that making one program call another program, which then calls the first one, just for the sake of creating recursion, is not valid at all.
If I created a testing program, I'd be sure not to let the tested program know about the tester; otherwise there's gonna (might => going to :) ) be a circle, and that is what I do not want.
To me, Turing's "proof" is an interesting thing to consider when making a testing program, nothing more. AMIRITE?
Also it is quite paranoid, because you may not know even IRL whether another program inside the tested one will call the tester, or whether some machine bug will do that :). It is quite similar to the paranoia in Ken Thompson's lecture ;). 84.16.123.194 (talk) 09:30, 26 May 2009 (UTC)


This proof is not convincing because it fails to eliminate the possibility of an algorithm which can detect the special class of programs which would contradict the result:

  1) halts
  2) does not halt
  3) recursively calls the halt algorithm to contradict it.

Proving the halting algorithm doesn't return true/false for the last case does not prove that a halting algorithm cannot exist for programs of the first two cases. Additionally, this proof assumes the halt algorithm is available to it, but Gödel's work shows that an algorithm cannot analyze itself this way. In theory, an algorithm which is external to the system could be created to solve the halting problem for any pre-existing program in a closed system. Alan's proof isn't applicable. —Preceding unsigned comment added by 24.45.237.41 (talk) 02:53, 6 November 2010 (UTC)

What, for a single program? Sure. It either halts or it doesn't. Write two programs: one says it halts; the other says it doesn't. One of them is right.
But that isn't the problem for which a solution is requested. The requirement is, write a fixed program such that for any program you give it, the first program correctly predicts whether the second program halts. And that's what can't be done. --Trovatore (talk) 02:56, 6 November 2010 (UTC)
Why is it not allowed to write an external halting algorithm to work with a closed system as input? Just because the proof wouldn't work doesn't imply such an algorithm cannot be constructed. This is very similar to asking a human to determine whether or not the source code of a completed program on paper will halt. If the program contains a call to "Halts(self)", without defining that function, then it's clearly not complete and will not compile. If it is defined, and the program contradicts it, the human can still see the lie and factually determine that the program halts or does not.
Now, it might be possible to create an algorithm to emulate the human, producing the same results.
What if we were to take this algorithm and compile it into the program, to contradict itself? I expect you'll claim this is proof the algorithm doesn't work. However, it's only proof that it doesn't work inside an enclosed system. It doesn't prove that either the human or the halt algorithm cannot analyze this system externally and get the right answer. —Preceding unsigned comment added by 24.45.237.41 (talk) 03:59, 6 November 2010 (UTC)
...
I'm looking further into it and I'm seeing more problems with the proof even when the system is allowed to feed back upon itself. Realistically speaking, Halts might be implemented as a separate program as follows:
Program Halts // black box
// arguments: Program, Input
// prints out 0 if loops, 1 if halts, 2 on input error
Program Contrary
// arguments: none
var h = Exec("Halts Contrary ???") // store output of Halts into h
if h==1 then Loop else Halt
At first glance, Contrary takes no input, but that's not true. In fact it takes the output of Halts as input. Since Contrary's behavior is obviously dependent upon its input, Halts must be told what input to use to make its determination. So, when we run our test, we have to tell it what input Contrary will receive upon executing Halts internally.
Halts Contrary 0 // prints 1 since Contrary Halted when it saw the input of 0
Halts Contrary 1 // prints 0 since Contrary Looped
Halts Contrary 2 // prints 1 since Contrary Halted
There's no contradiction; Halts worked correctly for each input to Contrary. It is not inherently wrong to ask the Halt algorithm to evaluate the Contrary program with various inputs that are different than the eventual output. By doing it this way, we got around the problem of Halts needing to evaluate its own output. Someone might equate this to cheating, but considering that Exec can execute other commands with non-deterministic results (aka "ping"), why shouldn't the subprogram's output be considered input to the program which is being analyzed? Any system call which is non-deterministic, like time(), would necessarily have to be considered program input in order to determine whether the program halts or not. Calling Halts as a subprogram is no exception. —Preceding unsigned comment added by 24.45.237.41 (talk) 05:57, 6 November 2010 (UTC)
I don't follow the argument, but there is over 50 years of mathematical history in which the proof has been accepted. You would need to find a WP:RS for the proof being "invalid" before it could be mentioned in the article. (For what it's worth, I've used and rederived the proof in my professional career, but that's irrelevant to this discussion.) — Arthur Rubin (talk) 07:19, 6 November 2010 (UTC)
I don't have any sources. I came across the proof in college years ago. It always felt wrong that this proof, for a very specific case, was used as a proof that a halting algorithm could not exist for other cases. That's not a logical step in my opinion. In fact, most of the cases that we would want to apply the halting problem to do not fall under the case which is allegedly disproved.
Incidentally, I do believe the conclusion is correct, but I'm still skeptical of the given proof. 24.45.237.41 (talk) 18:01, 6 November 2010 (UTC)

---

There is more than one version of "the halting proof". The original Turing proof is not a "halting proof"; it's a "circle" proof. I like it because it gives insight into what would actually happen if one were to build an "infinite-loop/circle-tester" program. The proof postulates a tester-machine with no input; the tester-machine has at its disposal a magical procedure that can detect whether a "number-in-test" (potentially the binary code of a valid Turing program) gets into a "circle" (infinite loop). Then one by one, in sequence, the tester-machine creates the numbers N that it tests for 1) well-formedness, 2) looping or not looping. If the number-in-test N doesn't "loop", the tester-machine runs the number-as-program and then prints the Nth digit produced by that number-as-program. Afterwards, the tester-machine goes on to the next number-as-program to test. Ultimately, when it encounters its own number, it ends in an infinite loop (as observed by the commentator at the very top of this section); the tester-machine never succeeds in printing its own Nth digit, i.e. it fails right at the diagonal. Turing commented that he created this proof to "give more insight" into what exactly is going on, as opposed to the one-liner that he tossed out as the "obvious" proof. Note that Turing's is not quite the same proof as the Stephen Kleene/Martin Davis "halting" proof. Marvin Minsky also derived a version that is slightly different. The fact that there are multiple versions of "the halting proof" should cause the skeptic to be skeptical of their skepticism. Bill Wvbailey (talk) 14:17, 6 November 2010 (UTC)
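A rough Python sketch of the tester-machine as described above (every helper here is a hypothetical stand-in: is_well_formed and is_circle_free for the "magical procedure", nth_digit_of for the simulation step):
def is_well_formed(n): ...    # hypothetical: is n the code of a valid program?
def is_circle_free(n): ...    # hypothetical: the "magical" circle detector
def nth_digit_of(n, k): ...   # hypothetical: simulate program n, return the kth digit it prints

def tester():
    N = 0
    while True:
        N += 1
        if is_well_formed(N) and is_circle_free(N):
            print(nth_digit_of(N, N))   # run number N as a program, print its Nth digit
# When N reaches the tester's own number, computing nth_digit_of(N, N) means
# re-running this very loop, so the tester circles right at the diagonal.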

Yes, of course you're going on to re-describe what's in the wiki. However, I provided two arguments above to counter the proof. I am not familiar with Kleene/Davis (I will investigate), but you haven't said anything that would counter my arguments.
1) Prove that it is not possible to construct a halting algorithm in an external system to determine whether another system terminates. I'm sure there's a better way to say it, but I don't want to repeat what I've already said.
2) You say it's possible to build a "loop tester machine" with no input. However I've pointed out above why this is flawed. Consider:
Program XYZ
// arguments: none
if (time() % 2) Loop() else Halt();
Now, when "Halts XYZ" runs, there is no consistent answer Halts could return, but why not? After all, computer code is strictly deterministic. The answer is: the input changes. Program XYZ has non-deterministic input "time()". Now we could stop there saying Halts cannot produce an answer for programs containing non-deterministic input. Alternatively we could make that input explicit by asking our Halts program "what would XYZ's behavior be when it's input from time() is 744" and Halts would comply by consistently returning true/halt.
Nothing should be controversial here when using time() as input. Why can't we do the same thing with program Contrary's input from subprogram Halts? This is what I did above, where I ask "What would Contrary's behavior be when its input from Halts is 0?"
When the sources of non-determinism are pinned down to actual values, the apparent contradiction for Contrary required by the proof goes away.
So, the fact that Halt cannot work with non-deterministic programs does not prove that it cannot exist for deterministic ones, where all sources of input are pinned down. —Preceding unsigned comment added by 24.45.237.41 (talk) 16:36, 6 November 2010 (UTC)
---
It seems I had Alan's proof mixed up with Kleene/Davis. Still, the circle proof appears to have the same faults, in that it doesn't work with an external algorithm in a black box. Secondly, it requires a version of the cycle-test-machine which lacks input for non-deterministic calls; I've shown above how the non-determinism issue can be solved. Thirdly, if non-determinism must remain, why does the proof assume that an algorithm cannot be written to detect the self-contradicting conditions? For programs which would change their looping behavior depending on the result of Halt/cycle-test-machine, it's perfectly acceptable for such a machine to output "indeterminate" and be done with it. No program could exist to contradict this third state. —Preceding unsigned comment added by 24.45.237.41 (talk) 20:32, 6 November 2010 (UTC)

Here's the Minsky "unsolvability of the halting problem" proof (cf. pages 148-149 in Marvin Minsky 1967 Computation: Finite and Infinite Machines, Prentice-Hall Inc., NJ, no ISBN (LCCCN: 67-12342)). This one is so simple that it's hard to imagine a rebuttal. At the outset we will have to agree that time "flows" in one direction, and we will have to agree that every "time-sequential discrete-state bounded-program machine" (hereafter TSDSBPM) does in fact operate in time-sequence -- meaning: for every instruction excepting the first and the HALT there is a "before" and an "after" instruction. Given that, it turns out that any composite machine made up of two or more of the above TSDSBPMs can be simulated by a single black-box equivalent TSDSBPMC. Furthermore any said TSDSBPMC can be represented by an integer encoded on a classical Turing machine (TM). (We can prove the first of these statements -- see the Taylor Booth 1967 Sequential Machines and Automata Theory, John Wiley and Sons Inc, NY, no ISBN (LCCCN: 67-25924) text; the second is the Church-Turing Thesis, which you'll have to accept to continue with this argument.)
At this point in the argument you have a choice of designing your tester Turing machine to "go outside the boundary" to ask a question of a (hypothesized) infallible ORACLE (the ultimate watchdog) that can answer only YES or NO to the question the TM poses. Or you can hypothesize a "magical procedure" that is contained as code on your tester Turing machine. We now put the question to our tester TM with ORACLE or "magical procedure": Given this integer N, interpret it as the program of a TM. Will it HALT? Answer YES or NO -- and after your decision HALT in either case. So we put the number on our TM's tape (or equip our TM with a read-only input tape and put the number there), push the START button and observe whether or not our TM answers YES or NO and then HALTs (we might give our machine two output lights).
Now we confront our ORACLE and/or our magical procedure with a really devious new tester machine TM*. We modify our tester TM so that it HALTs when the ORACLE and/or magical procedure says it does not HALT, and vice versa: when the ORACLE and/or magical procedure says it will HALT, the TM goes into a "circle" with no escape. Now here's the nut: we submit our TM*'s number to TM*. The ORACLE and/or the magic procedure can never answer correctly: it says HALT and then (here's where the time-sequence stipulation comes into the argument) the TM doesn't HALT, or it says NOT-HALT and then the TM HALTs. Either way the infallible ORACLE and/or magical procedure will fail to be infallible. Minsky writes: "This cannot be, so we sadly conclude that such a machine ... [he calls the ORACLE or the magic procedure his machine D for "decision"] could not exist in the first place. We have only the deepest sympathy for those readers who have not encountered this type of simple yet mind-boggling argument before. It resembles the argument in "Russell's paradox" which forces us to discard arguments concerning "the class of all classes" -- a notion whose absurdity is not otherwise particularly evident. It is also related to the argument of Cantor, that one cannot "count" the real numbers [he's referring here to the continuum between 0 and 1; this is the basis of Chaitin's "omega" and Chaitin's halting argument]. In fact the unsolvability of the halting problem is usually demonstrated by a form of Cantor's "diagonal method". We prefer the present form because it suppresses the "enumeration" required for the diagonal method" (Minsky 1967:149).
I hope Minsky's argument is convincing. It helps to draw it -- keep your drawings (and your thinking) as simple as possible. Or see Minsky's drawings (they're about as simple as you can get). Bill Wvbailey (talk) 14:09, 7 November 2010 (UTC)

---

Thank you for this discussion. I do believe I understand the argument; my point is that I don't agree with it.
"T button and observe whether or not our TM answers YES or NO and then HALTs...Now we confront our ORACLE and/or our magical procedure with a really devious new tester machine TM*...so that it HALTs when the ORACLE and/or magical procedure says it does not HALT and vice versa"
By doing so you've just combined the machine being tested with the machine doing the testing into one system. I'm pretty sure Gödel's incompleteness theorem applies to this test, because any set of logic within one system cannot possibly be used to prove itself. Therefore it is incomplete and contradictory. Even if the halting oracle were true, it just cannot be used to prove its own truth in a single system the way the halting problem is devised. If an oracle exists, it could only solve the halting problem for external systems. I haven't found any literature which addresses this.
By the way, the halting oracle is not the only function with determinism issues. Take any function which depends on its own output without making progress.
F(x)=!F(x) // x is T or F
Any value it returns is wrong by definition.
Or, this is a neat one:
F(x) = F(x) && !F(x)
Since x && !x must equal False, the answer would seem to be False, however this function never halts.
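Transcribing that last definition literally into Python shows the behaviour being described (on an idealized machine the call simply never returns; real Python eventually gives up with a RecursionError):
def F(x):
    # evaluating the right-hand side requires evaluating F(x) first,
    # so the call never reaches the 'and'
    return F(x) and not F(x)
# F(True) recurses without bound instead of ever producing the "expected" False.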
Now the big question is, does the fact that programs can contain these logical nuances imply that such programs are impossible to analyze logically? I believe that we can analyze them. It just so happens that the "does not return" outcome is rarely anticipated, since it is not stated explicitly. The same could be said of the Halting problem. Instead of concluding that the Halting oracle doesn't exist, it might be equally logical to conclude that there are in fact three possibilities. I've never read any literature covering this view either. —Preceding unsigned comment added by 24.45.237.41 (talk) 00:12, 8 November 2010 (UTC)

---

When I first saw Minsky's proof way back in 1986 (I pasted the book's receipt on the inside cover!) I was absolutely confused by the same issue you've brought up -- that the tester machine has a "trick" embedded in it, and we're using this tricky-tester to test its tricky-self instead of a legitimate tester. (This BTW is one reason why I like Turing's proof: I can see exactly where the failure happens. It's also why I'm beginning to like Chaitin's argument that Booth uses -- see more below.) Here's how I rebut my own doubts -- I have to keep in mind that the matter of concern isn't the existence of a tricky TM*, but rather the existence or non-existence of our ORACLE: 1) Whereas ORACLE is any oracle we can devise, and it is infallible (but, unlike God, restricted to time- and space-sequence), 2) there exists at least one machine TM* that even ORACLE cannot render a decision on, i.e. when confronted with TM* the question is "undecidable", and this contradiction renders ORACLE non-existent. Remember: ORACLE can only render YES-NO decisions, no UNKNOWN, no UNDECIDABLE (this restriction has to do with something I found in Booth about the YES-NO, 1-0 nature of the characteristic function: two predicates are equal if their characteristic functions are equal (Booth 1967:383)).

It would seem this situation is susceptible to heuristics -- we can design a machine such as TM that will make sure that it detects TM*s. But there will be an infinite number of them, and this violates the restriction to a finite program/algorithm. This is related to the business of there being more decimals between 0 and 1 than there are possible Turing machine algorithms to generate them -- whereas the algorithms must be finite, most decimals would require an infinite lookup to specify them.

I found yet another undecidability proof that blames the problem on the potentially-infinite value of the mu-operator: Booth begins with the set τ of all Turing machines that can compute total functions f(x) = y. From this set we select all n-state machines,

"and we let fn,i(x) correspond to the total function by the ith machine of this set. Using this function we can define a new function:
Fn(x) =maxover i[fn,i(x)] = fn,i-max
corresponding to the maximum value that any fn,i(x) will have for a given value of x....

Fn(x) is a total function and it will HALT. Then Booth goes on to demonstrate that

Fn(x) < Fn+k(x), k > 1.
"These results can be used to define the function
G(x) = Fx(x)
"This function which gives the maximum number that can be computed by an x-state machine if x is the number represented by the initial tape expression, is a total function, but it is not computable" (Booth 1967:383).

And he proves this. Chaitin explains it by invoking TMs that attempt to generate arbitrary decimals that are so random they cannot be computed by an algorithm except as a lookup table, and in this case the TM's "states" (program) are nothing more than a lookup table, which will always be "bigger" than the number itself.

In a nutshell, any program with an active mu-operator will be susceptible to not halting, and we cannot determine whether or not the mu-operator will ever be satisfied. Here Booth invokes the mu-operator min_y:

"To show that the decision problem being discussed ... is unsolvable all we have to do is find one Turing machine for which it is impossible to tell if the machine will or will not stop. Consider the machine that computes the function:
ψ(x) = miny Tx(x, y)
"That is, ψ(x) is the minimum value of y for which the predicate Tx(x,y) is true. ψ(x) is a partically computable function, and its domain of definition corresponds to those values of x for which Vx Tx(x, y) is true [V here is the logical OR]
"Now if the the decision problem for the machine that computes ψ(x) were solvable, this would mean that Vx Tx(x, y) would be a computable predicate. However we know that Vx Tx(x, y)is a semicomputable predicate but not a computable predicate [that is, there exists y for which the Turing machine will run forever]. Therefore we can conclude that the halting problem for this machine is unsolvable. It should be noted that this result does not apply to all Turing machines because it is very easy to construct a Turing machine that has a solvable halting problem. What the example shows is that Turing machiens that have unsolvable halting problems do exist" (Booth 1967:389).

You can find the details in Chaitin's recent, popular Omega! book (can't find my cc or I'd give you the full reference). Again it's all related to this business of fewer algorithms than decimals which Cantor proved by the diagonal argument. (We're just lucky that there are algorithms for finding the decimal parts of pi and e.) Bill Wvbailey (talk) 15:39, 8 November 2010 (UTC)

---
I don't disagree with your example here. Surely there are an infinite number of problems which show the same undecidable behavior. The goal is to find the simplest of such problems and understand it.
Regarding your first paragraph: I can't find an electronic version of Booth's book. Can you elaborate on why a third outcome must be disallowed? That assumption bothers me.
Consider a simple game of rock paper scissors. Each side chooses some pseudo-random number algorithm to generate a sequence of items. Assuming the sequences are not identical, one of the players wins. Now assume there is an oracle which somehow cheats by peeking at the other player's algorithm. You modify your algorithm to incorporate the oracle so that you output the winning hand for every round. With this algorithm, we can never lose, and we will never tie. So, should we assume that logically we'll always win? Can you tell where I'm headed? What if the other player copies your algorithm? As it tries to pick a rock/paper/scissors value, it sees your algorithm adapt, so it keeps going recursively without solution.
Now, it's trivial to see that neither rock, nor paper, nor scissors can win the game. So, if we cannot win, we cannot tie, and we cannot lose, how do we explain the contradiction? It would be wrong to conclude from these facts that a cheating oracle is impossible, since physically two players could in fact peek at the other's algorithm. All we've really proven is that an oracle cannot converge when used on itself. This reveals a new unanticipated outcome to the game, which is the possibility for both players to be using an oracle such that they cannot win, tie, or lose.
Everything in the game could be implemented, the oracles could be implemented, and the results would be real. By understanding how non-determinism affects this actual game, we can gain insight into how it affects hypothetical problems like the halting problem.
--- 24.45.237.41 (talk) 18:40, 8 November 2010 (UTC)
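A tiny Python sketch of the two-cheater situation described above (the names are illustrative only): each player's move is defined as whatever beats the other player's move, so neither call ever settles on a hand.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def player_a():
    return BEATS[player_b()]   # peek at B's move and play the counter to it

def player_b():
    return BEATS[player_a()]   # peek at A's move and play the counter to it
# Calling player_a() sets off unbounded mutual recursion: neither "oracle"
# ever converges, so the game reaches none of win, tie, or lose.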
I created a javascript program to illustrate the game above (more or less).
http://geedmedia.com/docs/rock_paper_scissors.html
24.45.237.41 (talk) 23:59, 8 November 2010 (UTC)

---

With regards to infallible oracles or decider-programs having other than YES-NO or 1-0 outputs, I think you'll agree that in this universe of ours it's ultimately true that either the Turing machine halts, OR it does not halt; stated intuitionistically: It's not the case that the Turing machine both halts AND not-halts (simultaneously in space-time). Ultimately there's no "middle" possibility, but while the mu-operator is cranking along there is a third possibility u for "unknown". So an "oracle" or "decider" with a third possible output "u" is not an oracle or decider; it is just a Turing machine that produces a "u" output (as in "unknown-at-this-time") while its mu-operator is running. In other words there'd be three lights on its panel: { halts, doesn't halt, am-still-running-and-trying-to-determine }. See §64 "The 3-valued logic" in Kleene 1952:332ff -- he looks at this from both the classical Law of excluded middle and the intuitionistically-acceptable:

"...for the definitions of partial recursive operations, t, f, u must be susceptible of another meaning besides (i) 'true', 'false', 'undefined', namely (ii) 'true', 'false', 'unknown (or value immaterial)'. Here 'unknown' is a category into which we can regard any proposition as falling, whose value we either do not know or chose for the moment to disregard; and it does not then exclude the other two possibilities 'true' and 'false'." (p. 335).

This is a difficult section in Kleene (over my head; he goes on to prove a bunch of theorems having to do with partial recursion). The mu-operator is the fly in the ointment: if and when it "gets satisfied" the 3-valued truth tables Kleene uses suddenly change to two-valued, etc etc. Bill Wvbailey (talk) 15:17, 9 November 2010 (UTC)

"With regards to infallible oracles or decider-programs having other than YES-NO or 1-0 outputs, I think you'll agree that in this universe of ours it's ultimately true that either the Turing machine halts, OR it does not halt."
Ok, but that's only one interpretation for the contradiction in the proof. Since the halting algorithm is only hypothetical, it's rather easy to conclude that it proves there's no halting algorithm. However, is it not possible that the proof invalidates the wrong assumption?
I've tried to provide another problem (rock-paper-scissors), where the oracle/algorithm is in fact not hypothetical and can actually be implemented. However, it appears to have a contradiction very similar to the halting problem. Since we know the rock-paper-scissors oracle does exist, we are led to a different conclusion. In particular, I suggested that there are more outcomes than our intuition initially came up with. After giving it some thought, it becomes clearer why the two cheaters can produce a new game outcome.
Under the same reasoning, it might shed new light on the Halting problem. The oracle may still be possible if we just change the assumptions. —Preceding unsigned comment added by 24.45.237.41 (talk) 01:24, 10 November 2010 (UTC)
This discussion is inappropriate here. Talk pages are for discussing improvements to the article, not the subject matter in general. I did make a small comment early in the discussion, which perhaps I shouldn't have done, but at that point it was still possible it might have cleared up a misconception and ended the exchange. That no longer seems possible, so please take it somewhere else, as it interferes with the intended use of the page. --Trovatore (talk) 01:39, 10 November 2010 (UTC)
If we're not permitted to discuss the subject matter, then I will cease. It is disappointing though that there is an alternative interpretation of the halting problem proof which isn't represented in the article. —Preceding unsigned comment added by 24.45.237.41 (talk) 02:26, 10 November 2010 (UTC)

---

While dueling oracles has drifted off-topic, Unsigned ends with a very good point (that took them a long time to get to, but so what? Doesn't dialog help lubricate the wheels of progress?), and one that was taken up by r.e.s.'s recent entry re the busy beaver proof (presumably stimulated by the above dialog). His suggested proof appears to be the same as the Booth-Chaitin version. I believe that alternate proofs would be of service to mystified/skeptical readers so they can see alternate points of view, and r.e.s.'s suggestion, broadened away from Busy Beavers toward Chaitin and Booth's version, would be a place to start. BillWvbailey (talk) 02:59, 10 November 2010 (UTC)

Artificial Intelligence and the Halting Problem

I'm surprised that there isn't even a paragraph on the use of the Halting Problem in the philosophy of AI to discriminate (rightly or wrongly) between what computers and humans can/can't do. —Preceding unsigned comment added by 121.45.215.163 (talk) 11:05, 16 July 2009 (UTC)

If there is a reliable, mainstream source that discusses it, then by all means put it into the article. But since neither computers nor humans can solve the halting problem, it doesn't appear initially that the halting problem would separate the two. — Carl (CBM · talk) 11:13, 16 July 2009 (UTC)
Afraid I don't know the precise reference - but I'm pretty sure that Roger Penrose covered it in The Emperor's New Mind - actually, WP discusses it in his article - http://en.wikipedia.org/wiki/Roger_Penrose#Physics_and_consciousness - but doesn't give the reference either. —Preceding unsigned comment added by 121.45.215.163 (talk) 12:33, 16 July 2009 (UTC)
We could probably add a sentence pointing to Roger Penrose, but it's important to remember that his theories about mathematical philosophy are somewhat idiosyncratic and don't have broad consensus in the field. So Penrose's opinions need to be attributed directly to him when they are presented. — Carl (CBM · talk) 12:37, 16 July 2009 (UTC)
There has been extensive discussion of the issue in the mainstream philosophical and cognitive-science literature. I believe that Dennett addresses the matter in Kinds of Minds, for example, and does so not as a new issue but as an oh-well-I-suppose-I-have-to-discuss-this-old-chestnut thing. It is my impression that Gödel's incompleteness theorem is more often invoked in this context than are any undecidability results. The section Mechanism_(philosophy)#Gödelian_arguments gives a number of references, some of which probably discuss undecidability results, at least in passing. —Dominus (talk) 13:06, 16 July 2009 (UTC)

Using randomness to solve halting problem

We know for a fact that a deterministic machine cannot solve the halting problem in all cases, but can a probabilistic machine (such as a quantum computer) solve it? I found a paper that states that one can:

http://arxiv.org/pdf/quant-ph/0512248 (If this link goes dead, please update it or at least e-mail me a nasty letter. I hate dead links!)

Joeyadams (talk) 04:34, 30 July 2009 (UTC)

Well, I suppose there are various things you could mean by that, but in the most obvious interpretation, no, it's not possible. The cone of Turing degrees that compute the halting problem has Lebesgue measure zero. (Actually if I remember right, all nontrivial Turing cones are Lebesgue null.)
So suppose you had some program with a slot for a true-random-number generator, and for any input Turing machine, this program would tell you, correctly with probability 1, whether the TM halts or not. Then intersecting the sets of streams from the RNG that work for each particular TM, of which there are only countably many, you get that for almost all random streams, this program witnesses that the halting problem is Turing reducible to that stream. But this is a contradiction. --Trovatore (talk) 07:11, 30 July 2009 (UTC)
Yes, you remember right, for any noncomputable real X the set of reals that compute X is measure 0. This is due to Sacks in the 1960s. — Carl (CBM · talk) 13:14, 30 July 2009 (UTC)
Re Joeyadams: You have to be extremely careful with terminology. First, the thing that is ordinarily called quantum computing is not probabilistic: it is completely deterministic. However, apparently Kieu is not using the standard definition of quantum computing, he is using his own idea for a hypercomputation system and calling it quantum computing. Such misuse of terminology is unfortunately common among the hypercomputation community: they often make up new definitions for standard terms, making it easy to get the wrong impression from scanning their abstracts.
Second, the paper you cited explains that Kieu's system cannot work. Not only does it depend on a false theorem about physics, but even if this could be fixed, the method does not guarantee that the output answer is correct, and so does not actually decide anything. We do list it on our article on hypercomputation, though. — Carl (CBM · talk) 13:03, 30 July 2009 (UTC)
It is widely believed, and current work indicates, that quantum computers do not alter the notion of computability and would not solve the halting problem. The deterministic/probabilistic distinction in the first post doesn't bear on the halting problem. There are people working on unphysical models of computation that could solve the problem, but being unphysical they're not very useful practically (the hypercomputation mentioned above). They are merely of theoretical interest. Verbal chat 14:13, 25 September 2009 (UTC)

Modern Proof

Since the original proofs of this involve useless terminology from recursive function theory, and since the proof is trivial, it is a shame to use the terminology. Previous discussions of this subject have been hampered by a lack of consensus about what constitutes original research in scientific articles. I hope the discussion of this can follow the WP:ESCA guidelines.Likebox (talk) 00:33, 15 October 2009 (UTC)

Why isn't recursive function theory "modern"? It's a shame to introduce unusual terminology, such as Quining, to replace perfectly straightforward recursive function theory. — Arthur Rubin (talk) 00:48, 15 October 2009 (UTC)
You are not addressing the input issue. Nobody nowadays states the halting problem the way it is stated here. In particular, the input is a complete red herring. Likebox (talk) 00:50, 15 October 2009 (UTC)

Input nonsense

The presentation of the halting problem is unfortunate, because it pretends that the input to the program is important in some way. The problem of determining which programs halt with 0 input is just as undecidable as the problem of determining which programs halt with arbitrary input. The only reason the input is mentioned at all is because historically, Turing proved the theorem by putting the program code on the input stream. That's not necessary at all, the program can include its own code internally.Likebox (talk) 00:39, 15 October 2009 (UTC)

This is exactly right. There is no need to talk about 'input' at all. Input can be considered part of the program. Program A on input I can be thought of as program B just as program A on input I' can be thought of as program B'.
That's true, in a sense. But your methodology seems to be to replace simple diagonalization by complex Quining, which doesn't strike me as productive. — Arthur Rubin (talk) 00:50, 15 October 2009 (UTC)
I see. But it's a question: quining can be proved by diagonalization, and diagonalization is equivalent to quining. It's a question of taste. I think that quining makes the proof more self evident, because it doesn't involve the input stream.
I saw someone get awfully confused about proving the following theorem: it is undecidable whether P halts in polynomial time with arbitrary input. The reason is that the Turing method passes the code on the input stream. It's an awful presentation nowadays. Likebox (talk) 00:52, 15 October 2009 (UTC)
Diagonalization is simpler; even in this context where some coding is required. The term "Quining" is obscure, even with respect to what you call "modern" methodology. In fact, in your "modern proof" section, you note that "quining" is necessarily the correct term. Why not use a method which is correct?
Even if the absence of input is the "correct", "modern" formulation, "Machine R with input T" can be coded as
  1. Write "T"
  2. Execute "R"
This is clearly computationally equivalent, so that, if the proof is simpler in the input formulation, it can easily be translated. — Arthur Rubin (talk) 01:02, 15 October 2009 (UTC)
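A short Python sketch of that translation; specialize() and the assumption that R's source defines a function run(input) are illustrative, not part of the discussion above:
def specialize(r_source, t):
    # build a no-input program: step 1 writes "T", step 2 executes "R" on it
    return (
        "T = " + repr(t) + "\n"
        + r_source + "\n"        # assumed to define a function run(input)
        + "run(T)\n"
    )
# The specialized program halts (taking no input at all) exactly when R halts
# on T, which is why the two formulations translate into one another.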


Indeed, if you formalize what you just said, you prove that quines can be written. It's a question of whether the self-reference is achieved by diagonalization or by the quining result, which makes self-reference obvious. It's a matter of taste, and I think that both presentations can be placed side by side. Likebox (talk) 01:12, 15 October 2009 (UTC)
Only if modern formulations use the word "quine", and if you can formalize the concept (not the same as "quine") of a program R' which writes the code of "R" and then executes "R" on that input. That's not the same as a "quine", which seems to be a program R which writes its own code. — Arthur Rubin (talk) 01:34, 15 October 2009 (UTC)
We seem no longer to have an article on the concept formerly known as quining; indirect self-reference doesn't seem to actually use the word, other than as a natural language example. It's no longer in the text. The article quine (computing) doesn't really seem applicable, unless modified to include a code of the Turing machine program as source code. — Arthur Rubin (talk) 01:39, 15 October 2009 (UTC)
quine (computing) is Ok. The method of writing quines makes it obvious that a program can print its own code in a subroutine, and then go on to do something else. There is no issue with this particular idea. Modern formulations use the word "fixed point theorem" which is essentially a synonym for quine.Likebox (talk) 01:43, 15 October 2009 (UTC)
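For readers who have not seen one, here is a minimal working Python example of "print your own code into a variable and then go on to do something else" (strip the comments and the three lines below rebuild their own source text in R):
s = 's = %r\nR = s %% s\nprint(len(R))'
R = s % s        # R now holds the source of the comment-free version of this program
print(len(R))    # ...and the program goes on to do something else with its own code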

Reverting the changes to the intro

If you don't like quining, discuss that. The old intro is ridiculous, because the halting problem is never stated about programs with inputs nowadays. It is always stated about programs without inputs.Likebox (talk) 00:56, 15 October 2009 (UTC)

No, I don't like quining, and I dispute your statement that "the halting problem is never stated about programs with inputs nowadays". Furthermore, even if it were, diagonalization is simpler than quining, and better understood — at least, by everyone other than yourself with whom I've interacted. Please stop adding nonsense to the lede. — Arthur Rubin (talk) 01:02, 15 October 2009 (UTC)
If you think that, just get rid of the quining and we'll discuss it. But please don't revert the intro--- the intro is stating the halting problem, which is stated without inputs. You can keep the intro without inputs, and add a sentence saying "Turing's original formulation was for programs with inputs, so determining whether a program P with input I halts is undecidable". But the modern statement of the problem definitely does not involve inputs in any way, and it is slightly misleading to present it with inputs. The only purpose of doing it this way is to avoid quining.Likebox (talk) 01:07, 15 October 2009 (UTC)
Also, in my experience, everyone I have ever met understands quining better than diagonalization, although in this case both are sufficiently trivial and sufficiently obviously equivalent that they are indistinguishable.Likebox (talk) 01:13, 15 October 2009 (UTC)
It looks like Arthur Rubin is also reverting a whole section titled "Modern Proof of Undecidability". It's time for a third party to look into this issue. Varks Spira (talk) 01:15, 15 October 2009 (UTC)
I just added that section--- Arthur Rubin is not reverting any long-standing material. This dispute has happened before, and honesty compels me to admit that consensus swung against "modern proof". I was hoping that the issue can be looked at again, as a test-case for the proposed WP:ESCA guidelines. These allow streamlined modernized proofs in scientific or mathematical articles, in cases where the proofs are correct, and are only designed to illuminate intermediate steps in well-understood sourced arguments.Likebox (talk) 01:19, 15 October 2009 (UTC)
How do the reliable sources state it? We should follow the sources, rather than debate the best way to present it. Finell (Talk) 01:22, 15 October 2009 (UTC)
This is why I brought up WP:ESCA. There are sources, and there are sources. The result is so old by now, that sources discuss it in ten-thousand different ways. In such cases, using verbatim quotes from sources can be misleading at times.
Current textbooks do usually state the halting problem with the input. But they also later prove that the halting problem is equally undecidable without the input. Both theorems are so closely related that they are both called "the halting problem", but since the no-input result is infinitesimally stronger (whatever that means in this case), the no-input theorem is what people use.
The usual path in mathematics textbooks is to prove the halting problem with input by diagonalization, then to pass to the no-input case by the Kleene fixed-point theorem. The Kleene fixed point theorem is equivalent to the existence of quines, so I just wrote it in this language. According to WP:ESCA, which I agree with, it is not a good idea to only debate in terms of the language in sources, because then you can end up with an illegible mess. Instead, you should consider clarity of explanation and correctness when talking about objective material that fills in intermediate steps in proofs.
But even with WP:ESCA, it's a judgement call. So take the time to think about it, and see how the new material feels.Likebox (talk) 01:29, 15 October 2009 (UTC)

See WT:MATH

See WT:MATH#Halting problem and Likebox. Your input is welcome there, but I doubt the quined proof would be. — Arthur Rubin (talk) 01:18, 15 October 2009 (UTC)

Previous Likebox discussions

Can be found at Talk:Halting problem/Archive3#Formal statement redux, Talk:Halting problem/Archive3#What Is A Rigorous Proof? (November 2007), and Talk:Halting problem/Archive3#Likebox edits (March 2008). I see no indication that anyone agreed with Likebox then, nor do I see any likelihood that other editors will do so now. — Arthur Rubin (talk) 01:28, 15 October 2009 (UTC)

Things have changed since then, and we have different ideas about how scientific and mathematical articles should be written. To give a good test of the proposed guidelines in WP:ESCA, I am bringing this up again.Likebox (talk) 01:31, 15 October 2009 (UTC)
WP:CONSENSUS can change, but you have provided no evidence that it has, nor that WP:ESCA (even if a guideline, which it appears not to be; and, in fact, you are the only one in support in the poll) would provide a different result. — Arthur Rubin (talk) 01:54, 15 October 2009 (UTC)
I am testing the waters. I am not the only supporter of WP:ESCA, there are also the authors of the proposed guideline. It is important that we get this trivial stuff straight, so that mathematical and scientific articles can be written without the fear of deletion.Likebox (talk) 02:08, 15 October 2009 (UTC)
You have tested the waters before... Regardless what WP:ESCA says, we write all of our articles in a way that agrees with the presentations in the mainstream literature, rather than inventing our own organizational frameworks. — Carl (CBM · talk) 02:29, 15 October 2009 (UTC)
And I will test the waters again, until this is accepted. It is not reasonable that material which is clear and correct, old and well-established, can be removed just because some editors think it sounds too quirky.Likebox (talk) 20:02, 15 October 2009 (UTC)
It isn't "clear" (notation), old, or (well-)established. Unless you can provide reliable sources which use the term "quining" for the standard diagonal argument, it shouldn't be in Wikipedia. I think it's time for a user RfC; Likebox is saying he intends to violate Wikipedia guidelines in order to get his material in place. — Arthur Rubin (talk) 20:22, 15 October 2009 (UTC)

(deindent) You don't need a formal RfC--- I have made an attempt, and until you decide to stop getting in the way, this attempt will fail. The proof of the Halting problem is not made that much simpler by Quines; the change is only slight and superficial. But the Quine method is easy to internalize, and allows simple self-contained demonstrations of other results in logic which have been historically more difficult to understand.

I will try again when there are new people here. I intend to continue to try to incorporate the material, because there are still many recursion theorists whose work has been badly misinterpreted and will never see the light of day until these types of expositions are no longer censored. Likebox (talk) 21:19, 15 October 2009 (UTC)

Perhaps a formal RfC would be helpful. It might be better if you had a formal request to get consensus before adding your quirky material to Wikipedia articles. If you'll agree to that, under penalty of block, then I don't see a need for a formal RfC. After all, you've done this at least three times, getting no support from any other editor, as far as I can tell.
It might be helpful, anyway. There's another editor starting with L to whom I am trying to explain the concept of "consensus", and he doesn't seem to get it. If you interpret "consensus" the same way he does, you would keep adding the material until two editors reverted you. — Arthur Rubin (talk) 21:35, 15 October 2009 (UTC)
"Lack of consensus" in this case has meant that you and CBM oppose the additions. The Godel's theorem additions were stable for months a few years ago, and once you and CBM are outnumbered, then the additions will be ok again.Likebox (talk) 22:53, 15 October 2009 (UTC)

Likebox, you said above "The proof of the Halting problem is not made that much simpler by Quines,..." As I keep pointing out, there is no Quine at all in your proof. At least in this version. What is in your proof is a fixed point obtained from Kleene's recursion theorem; this fixed point is not a Quine. However, the proof presented in the article already is more elementary, as it does not use the recursion theorem at all. So your proof adds additional complexity compared to the standard proof that was already presented, which relies only upon the universality of the programming language. — Carl (CBM · talk) 23:32, 15 October 2009 (UTC)

You say "fixed point" I say "Print your own code". The question of clarity is whether "fixed point" is a good way of saying "print your own code". They are logically equivalent, of course.
The problem is that "Fixed point" puts the code of the program on the input stream. This can be confusing for some people, and I have seen this happen in real life. The reason is that there is only one input stream for a Turing machine. You can code up multiple input streams, but that requires an additional layer of coding. If you just do things in the most simple-minded way, then the real input of the program is mangled together with the code of the program, because they both have to fit on the same stream.
This is a manifestation of the practical limitations of Turing machines. They are universal, of course, so this is only a practical limitation, but human beings have finite imagination. If you make the hardware sufficiently primitive and unforgiving, even the simplest code is a headache to understand.
To show you how this can be a problem, consider the following question: Is it undecidable whether a program P halts in polynomial time?
To prove that it is in fact undecidable, assume for contradiction that there is a code POLYNOMIAL which solves the problem, so that POLYNOMIAL(P) returns "polynomial time" whenever program P, acting on any input I, runs in polynomial time. Now write SPITE to take input I and do the following:
  1. Print its code into a variable R
  2. Compute POLYNOMIAL(R)
  3. If the answer is POLYNOMIAL, run some exponential time algorithm on I, otherwise run any well-known polynomial time algorithm on I.
The contradiction is obvious when written this way. The point is that POLYNOMIAL is running with "R" as input, while SPITE is running with "I" as input. So the input to POLYNOMIAL is the fixed size object R, independent of I.
On the other hand, if you try to use the fixed point theorem, you run into a stupid problem. The input I gets mangled with the code of SPITE when you use the fixed point theorem. So if the code "POLYNOMIAL" runs in non-polynomial time, when you fixed-point the thing with input I, you might end up running in non-polynomial time. The problem is that the fixed-point theorem doesn't separate the "code stream" from the "input stream" into two streams. You could easily reprove the fixed point theorem to do that, of course, but I have seen a logician get confused with this exact theorem in this exact way. The point is that the usual terminology is not sufficiently computer-science friendly to include simple ideas like multiple streams and separate variables in a natural way.
So by restating the theorem using "print your own code", the pedagogy is simplified considerably, especially when regarding less trivial proofs. This comes at the expense of introducing a not completely obvious idea--- that a computer program can print its own code. By writing things in terms of "fixed point theorem", the same idea is expressed in a not completely clear way, for reasons I have explained above.Likebox (talk) 00:05, 16 October 2009 (UTC)
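A Python sketch of SPITE as just described; own_source() stands for the "print your own code" step and polynomial() for the assumed decider, both of them hypothetical:
def own_source():
    ...   # hypothetical: the "print your own code" (quine) step

def polynomial(source):
    ...   # hypothetical: the assumed decider; True iff the program runs in polynomial time

def spite(I):
    R = own_source()      # step 1: this program's own code, a fixed-size object
    if polynomial(R):     # step 2: ask the assumed decider about R, not about I
        for _ in range(2 ** len(I)):   # step 3a: decider said "polynomial", so be exponential in I
            pass
    else:
        return len(I)     # step 3b: decider said "not polynomial", so do something trivially polynomial
# The decider is only ever asked about the fixed object R, never about the
# varying input I, which is the separation of streams described above.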
My point was simply that there is no Quine whatsoever in the proof you had added here. You continue to use that term incorrectly to refer to fixed points in general, when a Quine is a very specific type of fixed point. — Carl (CBM · talk) 00:19, 16 October 2009 (UTC)
I did not use the word "QUINE" anywhere in the text above. I also did not use the word quine in the addition to this article, except as an aside, saying "a code which prints out its own code is called a Quine".
This dispute is not over the precise meaning of the word "Quine". This dispute is over whether you can describe the proofs of these theorems in common english, as opposed to jargon.Likebox (talk) 00:42, 16 October 2009 (UTC)
No, the dispute is as to:
  1. Whether "Quine" is used in the literature,
  2. Whether "write your own code" is meaningful in a proof (without a proof that it can be done), and whether that's different from the usual diagonalization
  3. Whether your so-called "common English" is sufficiently accurate to contain the essence of the proof.
Arthur Rubin (talk) 02:29, 16 October 2009 (UTC)
This is a question of background and experience. To construct a quine in a programming language is an exercise for computer science freshmen. Since I don't use the word quine, your obsession with it is ridiculous. If you want a proof that any program X in a random access machine with a variable R can print its code into this variable, it's simple. It's not different than the usual diagonalization in any way, except it is conceptually clearer for many people.Likebox (talk) 04:23, 16 October 2009 (UTC)
It may be "conceptually clearer", but it's more difficult to do correctly than most diagonalziation arguments. — Arthur Rubin (talk) 06:56, 16 October 2009 (UTC)
The proof in the article uses only extremely elementary properties of the model of computation (and lists the properties it uses). Your proof requires not only these things, but also previous knowledge about how to construct fixed points. So I don't see that your proof is actually simpler. As your proof uses fixed points for a result where simple diagonalization is all that is required, I don't see that your proof is conceptually clearer either. — Carl (CBM · talk) 10:45, 16 October 2009 (UTC)
I agree that it is slightly more involved, but this comes with a great improvement: any simple undecidability argument does not need to cause confusion with the input stream. The fixed point theorem proves that a program can print its own code, just as it proves that the Halting problem without input is identical to the halting problem with input. The essential point is that the conceptual content of the fixed point theorem is "print your own code". This is not stated clearly, and this stuff is confusing to a lot of people.
I did not remove any existing discussion, I only added text. I have no issues with the fixed point argument and the diagonalization, I did not challenge them or erase them. I just added material.Likebox (talk) 20:43, 16 October 2009 (UTC)

Mathematical problems as Halting problem.

I would like it to be added that certain well-known mathematical problems can be expressed as a halting problem.

These are (among many others):

  • Goldbach conjecture
  • The existence of odd perfect numbers
  • Fermat's Last Theorem
  • Four Color Problem

As opposed to this, certain problems cannot be expressed as a halting problem, such as the existence of infinitely many twin primes. —Preceding unsigned comment added by Lkruijsw (talkcontribs) 16:06, 16 October 2009 (UTC)

I think that has to do with where the conjecture fits in the arithmetic hierarchy:
Goldback conjecture is Π⁰₁
The non-existence of odd perfect numbers would be Π⁰₁
Fermat's Last Theorem is Π⁰₁
Four Color Theorem is Π⁰₁ (the coloring is a bounded quantifier)
The existence of infinitely many twin primes would be Π⁰₂
Arthur Rubin (talk) 19:28, 16 October 2009 (UTC)
Your comment is not clear. Do you propose something?
The current article lacks any illustration as I suggested.
Also, the sequence of understanding is (for me at least):
  • Someone plays with certain problems, like I mentioned.
  • He realizes that they are Halting problems.
  • He realizes that not all of them are Halting problems.
  • He realizes something like 'arithmetic hierarchy'.
It is not the other way around: starting with an arithmetic hierarchy, from nowhere, and then one year later finding the examples and fitting them into this classification.
So, from a didactic point of view, this sequence of human understanding must be followed.
BTW, it is Goldbach, not Goldback.
Lkruijsw (talk) 20:51, 16 October 2009 (UTC)
I'm sorry, perhaps I wasn't clear — I was attempting to explain which problems can be reduced to a halting problem. The halting problem itself is ("lightface") Σ⁰₁, as it can be written: "There exists a (finite) computation (which can be coded as a single number) starting with the specified initial condition, which terminates." Perhaps the fact that any Σ⁰₁ problem can be coded as a specific halting problem, and hence any Π⁰₁ problem can be coded as a specific "this computation does not halt" problem, should be noted in this article or in arithmetic hierarchy. — Arthur Rubin (talk) 10:01, 17 October 2009 (UTC)
I agree that the natural examples of complete sets at various levels of the arithmetical hierarchy should be mentioned in that article, and that some examples of natural Σ⁰₁ and Π⁰₁ sets should be mentioned here. On the other hand, we have to be precise anytime we say "reduced" in an article, since the sets that are Turing reducible to the halting problem are exactly the Δ⁰₂ sets. — Carl (CBM · talk) 14:27, 28 October 2009 (UTC)

We say that the halting problem is undecidable over Turing machines.

You should add the proposition that the halting problem is undecidable not only for Turing machines but also for Zeno machines; in general, it seems to be an inherent limitation of any computational system.


The halting problem is briefly discussed in a general context and the suggestion that it is an inevitable companion of any reasonable computational model is emphasised. It is suggested that claims to have “broken the Turing barrier” could be toned down and that the important and well-founded rôle of Turing computability in the mathematical sciences stands unchallenged.

* Petrus H. Potgieter, Zeno machines and hypercomputation, 2004, revised 2006, http://arxiv.org/abs/cs/0412022

—Preceding unsigned comment added by 138.250.193.98 (talkcontribs)

Despite what it says on arxiv, this has now been published. Here is the DOI: http://dx.doi.org/10.1016/j.tcs.2005.11.040. Verbal chat 09:25, 6 November 2009 (UTC)

Yes, I know it has been published. What are you trying to say? Why has this information not been included in the halting problem article? —Preceding unsigned comment added by 138.250.193.98 (talk) 09:33, 6 November 2009 (UTC)

It's a form of hypercomputation; I don't see why we should deviate from the standard description to include an unphysical model. Verbal chat 09:55, 6 November 2009 (UTC)

OK, but I think the article should state this fact as a general attribute of the halting problem, or at least say that there is a strong suggestion:

The halting problem is briefly discussed in a general context and the suggestion that it is an inevitable companion of any reasonable computational model is emphasised.

This should be stated at the beginning. 138.250.193.98 (talk) 10:03, 6 November 2009 (UTC)

Intuitive explanation and disproof

I commit to the following: If you give me a machine that will look into my mind and predict if I will kill myself in the next 1 minute or not, then I will use the machine on myself and it will say suicide or life. If it says suicide, I will not kill myself in the next 1 minute. If it says life, I will kill myself immediately. Therefore the machine will be less accurate about me than random guessing, therefore the machine is impossible, therefore I have risked nothing by committing my life to its answer, because it can never be created to answer one way or the other. I am a disproof of the "halting problem" and simultaneously a proof of "free will", because it's impossible for any machine to completely predict all my actions, including whether I will choose to halt or not. BenRayfield (talk) 05:45, 6 January 2010 (UTC)

Yeah, yeah. The whole point is that this is not just true of you as a person, it's true of computer programs too. What you just said is the proof that the Halting problem is undecidable.
If you claim to have a program "HALT" which tells you whether a program halts or not, write a program "SPITE" to print its own code into a variable R, calculate what HALT says about R (meaning, what HALT predicts about SPITE itself), and then do the opposite.
So SPITE proves that HALT doesn't work, in the same way that you prove that there is no good halting predictor for your own actions. So if this is your definition of "free will" (and I agree that it is the correct one), then computer programs have the same type of free will as people do. Likebox (talk) 21:19, 6 January 2010 (UTC)
You assume the universe isn't deterministic, for which I see no proof. —Preceding unsigned comment added by 216.183.86.242 (talk) 20:15, 14 April 2010 (UTC)

Why are the following two problems equivalent?

Problem 1: given a description of a program, decide whether the program will always finish running or will run forever with some input.

Problem 2: given a program and an input, decide whether the program will eventually halt when run with that input, or will run forever.

We usually refer to the second problem as the halting problem and prove that it is unsolvable. However, how do we prove that the first problem is also unsolvable? Why are these two problems equivalent, as stated in the first paragraph of the article?

Thanks. —Preceding unsigned comment added by Tmbu (talkcontribs) 09:24, 8 January 2010 (UTC)

The two problems are each one-one reducible to each other. Obviously problem 1 reduces to problem 2. To reduce problem 2 to problem 1, given an index e and an input i, use the s-m-n theorem to produce a different program k such that program k with no input executes the same code as program e with input i. Then apply the solution of problem 1 to k. — Carl (CBM · talk) 13:02, 8 January 2010 (UTC)
That's one way of putting it, in my opinion, an extremely unilluminating way. Here's another way of saying the same thing that might be clearer.
For contradiction, assume that you are given a computer program "HALT(R)" that takes as input code R and correctly tells you whether or not it halts. Consider the program SPITE, which does the following:
  1. prints its own code into the variable R
  2. calculates HALT(R) (figures out what HALT would say about R)
  3. if HALT returns "R halts", then SPITE goes into an infinite loop. If HALT returns "R doesn't halt", SPITE halts.
Since R is really the code for SPITE, SPITE is asking HALT "what do I do?" Then whatever the answer, SPITE does the opposite. This means that HALT cannot work correctly on SPITE.
This proof shows that any HALT doesn't work to predict whether a program will halt or not, regardless of input.
The essential difficulty in this proof is the innocuous-looking step 1. To show that a program can print out its own code is a nontrivial task. You can do it explicitly in any modern programming language, but it requires a recursive trick. This trick was formalized by Kleene well after the halting problem was stated by Turing.
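For what it's worth, here is roughly what the three steps look like in a modern language -- a hedged sketch of mine, not anyone's official proof, with halts() assumed to exist only so that the contradiction can be stated:

# Minimal sketch (mine) of SPITE, assuming for contradiction a total decider
# halts(src) that answers whether the Python program text src halts. The
# s/%r construction below is the standard quine trick alluded to for step 1.
def halts(src):
    raise NotImplementedError  # assumed total halting decider

s = 'def halts(src):\n    raise NotImplementedError  # assumed total halting decider\n\ns = %r\nR = s %% s      # step 1: R holds (essentially) this very program\nif halts(R):    # step 2: ask the decider about ourselves\n    while True:  # step 3: then do the opposite of its prediction\n        pass\n'
R = s % s       # step 1: R holds (essentially) this very program
if halts(R):    # step 2: ask the decider about ourselves
    while True:  # step 3: then do the opposite of its prediction
        pass

Running it just raises NotImplementedError at halts(R), of course; the point is only the shape of the self-reference.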
So how did Turing prove that halting is undecidable if he didn't know that a program can always be made to print out its own code?
Turing used a clever trick, which is also found in Gödel's 1931 incompleteness paper. What he said was: consider a program P which takes input I. Then he showed that there is no halting predictor which can predict whether a program P with input I halts.
He did it this way: assume you have a program HALT(P,I) which tells you whether P halts on input I, where the input is a code (or an integer, any code is also a large integer).
Consider now a program P which takes as input any code I, and evaluates HALT(I,I). Then given the result of HALT(I,I), it does the opposite: meaning if HALT(I,I) returns "halts", then P(I) runs forever, and if HALT(I,I) returns "doesn't halt", then P(I) halts.
Now consider P(P), the program P fed its own code as input. This program evaluates HALT(P,P) and does the opposite. But that's the same as checking whether it itself halts or not, and the proof is essentially equivalent to the SPITE business.
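The same argument in sketch form (mine), with the code fed in as input rather than printed by the program itself:

# Sketch (mine) of the two-argument version: assume a total decider
# halt(P, I) answering "does program P halt on input I?", where programs
# are represented as source strings.
def halt(P, I):
    raise NotImplementedError  # assumed only for the sake of contradiction

def diagonal(I):
    # Given a program text I, do the opposite of what halt predicts for I run on I.
    if halt(I, I):       # would I, fed its own code, halt?
        while True:      # predicted to halt -> loop forever
            pass
    return               # predicted to loop -> halt immediately

# If D is the source text of diagonal, then diagonal(D) evaluates halt(D, D)
# and contradicts the answer, so no such halt can exist.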
The way Turing avoided having the program print its own code was by feeding the required code to the program as input. In order to avoid doing this, you need to prove that any program can be modified to print its input for itself, and this trivial nonsense is called the s-m-n theorem and recursion theorem in the obtuse infantile language of recursion theorists. Likebox (talk) 19:08, 10 January 2010 (UTC)
I hadn't realized you were so completely wrong. The s-m-n theorem is not necessary for the reduction. The reduction (in Likebox's notation) is the following program k:
  1. print i on the tape
  2. execute program e
Arthur Rubin (talk) 05:15, 9 November 2010 (UTC)
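In concrete terms -- a sketch of my own, not Rubin's or Carl's exact construction -- the reduction just hard-codes the input:

# Sketch (mine) of the reduction above: given Python source prog that reads
# its input from stdin, and a fixed input i, build a zero-input program k
# that behaves exactly like prog run on i. A decider for "k halts" then
# decides "prog halts on input i".
def make_k(prog, i):
    return (
        "import io, sys\n"
        f"sys.stdin = io.StringIO({i!r})  # hard-code the input\n"
        + prog                             # then run the original program
    )

# Example: prog halts iff its input line is empty.
prog = "line = input()\nwhile line:\n    pass\n"
print(make_k(prog, "spam"))  # prints a standalone program that halts iff prog halts on "spam"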

Open question whether humans can solve the halting problem?

The article states: It is also an open empirical question whether any such physical processes are involved in the working of the human brain, thus whether humans can solve the halting problem. That phrase should be removed. I don't think there has EVER been any evidence (not even an informal experiment) that seemed to suggest that human brains are capable of hypercomputation, or that humans could solve the halting problem. Most humans can't understand simple programs, and I think that not even the best mental calculator ever could follow a Turing machine with > 100 states. --79.153.36.193 (talk) 22:25, 23 March 2010 (UTC)

It's been a while since I read it but as I recall Penrose's The Emperor's New Mind tackles this question in depth. Philosophers of mind think he's all wet since he offers no evidence for his conjectured quantum-mechanical mechanisms that he supposes operate in microtubules of cells. But he supposes that cells might be doing quantum-mechanically-based computations (or whatever), and there's no evidence to the contrary either. It's like believing in ghosts. We have no "evidence" that they exist, but we have no "evidence" that they don't exist, as well. So the sentence has to stay, because it is an open question in the philosophy of mind (not just Penrose, either. I can't name them right off because I'm far away from my books, but as I recall Colin McGinn tackles this in one of his books about "mysterianism"). Bill Wvbailey (talk) 00:39, 24 March 2010 (UTC)
As written, of course, it's just the wrong question. Given an arbitrary program of a trillion lines of C code, can a human being decide if it halts? Not in this lifetime anyway. This has nothing to do with whether the brain uses techniques that are "in principle" more powerful than a Turing machine — the human halting-decider simply won't have time.
You might say that that's a trivial objection that doesn't get at the heart of the matter, and I agree. The problem is that it's not so clear how to rephrase the question so that it does ask what we want it to.
But as Will says, there is indeed an active dispute about the question, even if it's not clear what the dispute actually means. That does indeed need to be acknowledged in the text. --Trovatore (talk) 00:47, 24 March 2010 (UTC)
I think it shouldn't be there in the article, except if you can find good academic sources discussing it. Halting problem is quite an academic topic, anyway, and as far as I can tell, the discussion about whether humans can solve it is nothing but abstract hand-waving by home-grown amateur philosophers who don't know very much about either the halting problem or the human brain. I might be wrong though. --SLi (talk) 15:26, 25 March 2010 (UTC)
I'm half a continent away from my books, so I can't offer up a quick "academic" quote. As Trovatore says, there's a problem with respect to what exactly is the question. With regards to incompleteness and related problems (e.g. undecidability), I know exactly where to find Gödel's opinion (see the full quote in Algorithm characterizations section about Gödel): that the Turing machine model -- which he finally landed on as the best choice for calculability/computability issues -- did not rule out the possibility that the capacity/capability of human reason exceeds that of a Turing machine (italics added):
"In a Postscriptum, dated 1964, to a paper presented to the Institute for Advanced Study in spring 1934, Gödel amplified his conviction that "formal systems" are those that can be mechanized: (cf p. 72 in Martin Davis ed. The Undecidable: "Postscriptum" to "On Undecidable Propositions of Formal Mathematical Systems" appearing on p. 39, loc. cit.)
"It has already been pointed out that, for every function of positive integers which is effectively calculable in the sense just defined, there exists an algorithm for the calculation of its value.

"Conversely it is true . . ." (p. 100, The Undecidable).

It would appear from this, and the following, that far as Gödel was concerned, the Turing machine was sufficient and the lambda calculus was "much less suitable." He goes on to make the point that, with regards to limitations on human reason, the jury is still out:
("Note that the question of whether there exist finite non-mechanical procedures ** not equivalent with any algorithm, has nothing whatsoever to do with the adequacy of the definition of "formal system" and of "mechanical procedure.") (p. 72, loc. cit.)
"(For theories and procedures in the more general sense indicated in footnote ** the situation may be different. Note that the results mentioned in the postcript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics.) (p. 73 loc. cit.):::::Footnote **: "I.e., such as involve the use of abstract terms on the basis of their meaning. See my paper in Dial. 12(1958), p. 280." (this footnote appears on p. 72, loc. cit).
Questions around incompleteness and the halting problem (undecidability) are connected to the use of "formalism in mathematics" (as exemplified by Turing machines and their extensions). If this very precise restriction is lifted -- e.g. the use of analog systems with arbitrary precision, human minds with both analog and random characteristics -- the rules change and so (perhaps) do the conclusions. Bill Wvbailey (talk) 00:31, 26 March 2010 (UTC)
The microtubule hypothesis in The Emperor's New Mind never got any significant scientific acceptance. It's a tiny minority view at best. It's reasonable to mention it in the article about the Church-Turing thesis but it doesn't belong in the halting problem article. The Church-Turing thesis itself is already mentioned and linked instead. 69.228.170.24 (talk) 00:23, 27 May 2010 (UTC)

Relationship with Gödel's incompleteness theorem

Section says:

Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i).

That looks like a handwave to me. How do we know that the halting assertion can be expressed as a first-order statement about natural numbers? (Yes it can, but there is no proof given here, and it's a bit subtle). Is this a Likebox section? I didn't bother checking the history. Maybe it should be rewritten. 69.228.170.24 (talk) 13:42, 26 May 2010 (UTC)

That section has been in the article since about 2001. I think it's supposed to be a little vague; it's just a sketch. I added a link to the T predicate just now. — Carl (CBM · talk) 13:49, 26 May 2010 (UTC)
Thanks, I guess that fixes the immediate problem, though the linked article about the T-predicate could also use some expansion (it doesn't really explain what's going on). 69.228.170.24 (talk) 23:11, 26 May 2010 (UTC)
I expanded that article some. If you have suggestions for which parts are unclear, I can try to work on them. It is very hard for me to tell on my own. You can just put them on that article's talk page or tag particular sentences as unclear. — Carl (CBM · talk) 00:38, 27 May 2010 (UTC)
Wow, thanks! I left a comment there. 69.228.170.24 (talk) 07:13, 27 May 2010 (UTC)

importance and consequences

The paragraph

Since the negative answer to the halting problem shows that there are problems that cannot be solved by a Turing machine, the Church–Turing thesis limits what can be accomplished by any machine that implements effective methods. However, not all theoretically possible machines are subject to the Church–Turing thesis (e.g. oracle machines are not). It is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine. It's also an open question whether any such process could usefully be harnessed in the form of a calculating machine (a hypercomputer) that could solve the halting problem for a Turing machine amongst other things. It is also an open empirical question whether any such physical processes are involved in the working of the human brain, thus whether humans can solve the halting problem.

is pretty lame (it understates the acceptance of the Church-Turing thesis). I may try to clean it up. 69.228.170.24 (talk) 00:33, 27 May 2010 (UTC)

I think it is just trying to pack a lot of information into a tiny amount of space. It may help to rework that whole "importance" section, and possibly split out a subsection or two. — Carl (CBM · talk) 00:40, 27 May 2010 (UTC)

reference format

The format of the references needs to be standardized (i.e. last name, then first) and they need to be put in order. Bubba73 (You talkin' to me?), 22:46, 19 June 2010 (UTC)

Busy Beaver functions and the Halting Problem

Would the article benefit by some discussion of the relation between Busy Beaver functions and the Halting problem? In particular, would it be useful to include a proof that the blank tape halting problem is undecidable, by using the noncomputability of, say, the Busy Beaver function S? Note that such a proof avoids diagonalization, and could go something like the following:

If the blank tape halting problem were decidable, then there would exist a program that, for every n, simulates all and only those n-state Turing machines that halt when they're started on a blank tape, keeping track of the number of steps each one takes to halt. The program could thus compute, for each n, the maximum number of steps taken among these n-state machines; i.e., it could compute the Busy Beaver function S. However, this contradicts the fact* that S is not a computable function. Therefore, the halting problem is undecidable. QED
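A sketch (mine) of that argument in program form; every helper here is a hypothetical stand-in, not a real library:

# Sketch (mine) of the argument above. all_machines(n), run_on_blank(M) and
# the assumed decider halts_on_blank(M) are hypothetical stand-ins.
def all_machines(n):
    raise NotImplementedError  # hypothetical: enumerate the n-state machines

def halts_on_blank(M):
    raise NotImplementedError  # the assumed blank-tape halting decider

def run_on_blank(M):
    raise NotImplementedError  # hypothetical: simulate M on a blank tape, return its step count

def S(n):
    # Maximum number of steps taken by any n-state machine that halts on a blank tape.
    best = 0
    for M in all_machines(n):              # finitely many n-state machines
        if halts_on_blank(M):              # skip the non-halters, so every
            best = max(best, run_on_blank(M))  # simulation performed here terminates
    return best

# Since S is known not to be computable, halts_on_blank cannot exist.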

*A simple proof that S is not computable is sketched in the Busy Beaver article, and a somewhat simpler one goes as follows ("machine" here means a 2-symbol Turing machine that uses unary to represent input and output):

If S were computable, then the function S(2x) would be computable; i.e., there would be a machine that computes S(2x) for all x. This machine would have some positive number of states (say n states), which could be labelled 2n+1, 2n+2, ..., 2n+n. Furthermore, the transition table of this machine could be appended to any 2n-state transition table containing the tuples (1,0,1,R,2)(2,0,1,R,3)...(2n,0,1,R,2n+1). The result would be a transition table defining a 3n-state machine that would compute S(2·2n) on an initially blank tape, taking at least S(4n) steps to do so. Thus, if S were computable, then S(3n) ≥ S(4n), contradicting the evident fact that S is a strictly increasing function. Therefore, S is not computable. QED
r.e.s. (talk) 19:06, 8 November 2010 (UTC)
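For reference, the contradiction in the blank-tape sketch above compresses to one line (my restatement, same notation):

% the 3n-state machine halts on a blank tape only after at least S(4n) steps, so
\[
    S(3n) \;\ge\; S(2 \cdot 2n) \;=\; S(4n) \;>\; S(3n),
\]
% a contradiction, since S is strictly increasing and 4n > 3n.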

As a recursion theorist, I would say that proof does use diagonalization, it's just written differently. But I have no objection at all to adding a summary section to this article about the busy beaver function, in fact I think it's a good idea. — Carl (CBM · talk) 20:25, 8 November 2010 (UTC)
I'm considering what content would be appropriate in such a separate section. Regarding proof sketches, I'm pretty sure that essentially the above HP-undecidability proof, starting from the noncomputability of S, appears in several sources. However, the simplified S-noncomputability proof given above is my own, and the more-complicated one in the BB article appears to be unsourced. (As to whether a proof uses diagonalization, here's a remark by Radó in the paper that introduced the Busy Beaver functions: "It may be of interest to note that we shall not use an enumeration of computable functions to show that our examples are non-computable functions. Thus, in this sense we do not use the diagonal process." Wouldn't you agree that this is true of the above proof as well?)
r.e.s. (talk) 13:47, 11 November 2010 (UTC)
I don't think any of the proofs we are talking about is novel in any important way, but I agree with the idea that we should try to stick to published proofs instead of making up more complicated ones. In particular, I agree we could use something more like Radó's original proof style in the BB article.
I'm not sure what Radó had in mind in his introductory remarks, but he certainly does use an enumeration of all Turing machines, e.g. on page 879 in section IV. That is exactly what I would call an "enumeration of the computable functions". I think what Radó means is just that he avoids assigning a natural number to each Turing machine, but that's only a matter of presentation style. We could use the Turing machines themselves as the indices, so that the machines enumerate themselves.
Although it's a wonderful paper, this choice of presentation style does have its costs. The focus on Turing machines in the paper makes the paper more readable for novices, but obscures the underlying recursion theory. It's not immediately obvious from the paper alone what is actually being proved about computable functions (independent of the model of computation).
As for the diagonalization in the proof you wrote above, I see it in these phrases:
"This machine would have some positive number of states (say n states) ..."
"the transition table of this machine could be appended ..."
Of course you could decide you don't think that's a diagonalization, but I would consider it one. You are starting with the assumption that you have a computable function, and then you are breaking the seal on the black box and looking inside to see how it's programmed. The exact details of your proof require not only that the function is computable, they also require knowing how the function is computed. In this sense you are diagonalizing against all the possible ways to compute the function. — Carl (CBM · talk) 14:36, 11 November 2010 (UTC)
A main issue is the appropriate sourcing of whatever proofs are sketched in the article. Possibly the proof I sketched above (and the one in the BB article) is sufficiently straightforward and non-novel to call it a variation of something already published (and to cite that something when we find it).
Radó's proof is more complicated than these, due to its concern with rates of growth. The above proof, and to a lesser extent the one sketched in the BB article, achieves significant simplification by ignoring rates of growth altogether. In relation to the HP, this simplified non-rate-of-growth method seems better than Radó's, because it's only the noncomputability of S that matters in this context. I understand your point about the focus on Turing machines (or any other specific model of computation); however, since BB functions are model-specific by definition, if a TM-model is used (which is standard for BB functions) then I think the method of proof sketched above is preferred to Radó's.
r.e.s. (talk) 18:00, 11 November 2010 (UTC)
I looked through a few books on my shelf and it was surprisingly hard to find proofs of this. Cooper's undergrad book has a proof, but it's also in terms of rates of growth. I'll keep looking to see if I can find a nice proof just of the uncomputability. — Carl (CBM · talk) 21:17, 11 November 2010 (UTC)
By searching the web, I found that the old 1974 textbook by Boolos & Jeffrey (Computability and Logic) devotes several pages to essentially this non-rate-of-growth type of proof (in the chapter "Uncomputability via the busy beaver problem"); also, they refer to it as giving the result "more directly" than the "method of diagonalization". The 2007 edition by Boolos, Burgess and Jeffrey likewise devotes several pages (in the chapter "Uncomputability"). In both cases, undecidability of the blank-tape halting problem is proved by using a variant of the Busy Beaver function Σ rather than S, however.
r.e.s. (talk) 16:34, 15 November 2010 (UTC)
I only have the latest edition of BB&J. I had looked at it before my last post, but it didn't seem to have the kind of proof you were looking for. I'll have to look at it again today. If the first edition works better, let's use that. For the diagonalization issue, I would be more comfortable if our article simply called the busy beaver proof an alternate proof rather than trying to explain how it's different in too much detail. That kind of description belongs in textbooks where the author can express herself more freely. — Carl (CBM · talk) 12:51, 16 November 2010 (UTC)
I looked at BB&J's proof(s). The first few paragraphs of the section 4.2* are a word-sketch of Booth's proof (pp. 40ff). His Busy Beaver proof is a mind-numbing 3 pages-- while BB&J says it's "even simpler to describe than s [scoring function]" it sure doesn't seem easier to prove. There's a technical issue that bothers me: BB&J's bb-proof begins off defining "productivity": "If the machine eventually halts, scanning the leftmost of an unbroken block of strokes on an otherwise blank tape, its productivity is said to be the length of that block." I thought, when scoring the busy beaver, the count was the total number of marks, not the length of a block of marks. Have I misunderstood? My point is: if there's a lot of technical busy-beaver particulars required to mount a decent halting-problem proof then it would seem more logical to refer the reader to Busy beaver where this is done in earnest; I'd prefer to see a simpler proof such as Minsky's 1967 which "suppressess" the diagonalization (the one where the machine does the exact opposite after its infallible "decider" routine makes its decision). BillWvbailey (talk) 14:20, 16 November 2010 (UTC)
Bill - careful, the relevant proof concerns BB&J's function p (not s, which doesn't involve starting with a blank tape). That discussion begins at the bottom of p.41. But BB&J do define these functions differently than the "standard", in at least two ways -- they're in terms of 4-tuples rather than the standard 5-tuple TM formulation, and unlike the standard Σ and S they require a single contiguous block of 1s when the TM halts. I believe that the simple and short proof that I sketched at the top of this section (and the more complicated one in the BB article) can be regarded as an adaptation of this method to the standard Busy Beaver function S.
r.e.s. (talk) 14:57, 16 November 2010 (UTC)
My post above wasn't so clear; by the look of it the Booth proof (below) is the same as BB&J's s-version, i.e. the TM starts with initial marks on the tape. I'd be happy with those, but I like your proof too -- one would surmise (apparently wrongly) that a proof starting with a blank tape should be conceptually easier to understand than one that needs marks. On the principle that if I can understand a proof, anyone can [my attention span for mathematical proofs being about 30 seconds] ... where did you find the old Boolos-Jeffrey proof? I went hunting on the 'net and didn't see any public-domain versions of the old editions. BillWvbailey (talk) 18:37, 16 November 2010 (UTC)
I'm reading previews in Google Books searches; e.g., here's a link to the chapter "Uncomputability via the busy beaver problem" in the 1974 edition.
r.e.s. (talk) 03:17, 17 November 2010 (UTC)
Well those little devils! They blanked the subsequent two pages (38 and 39) which is where the nut of the proof is located. But I did understand it up to there (10 minutes max -- the business of the 25 equivalent 1-state graphs being a bit murky because their typography confuses I with 1, or so I suppose. And there's a weird o.) I went back to BB&J 4th edition and see that it is not identically the same proof, and my mind blanked at their "(1) p(n+2j) >= p(p(n))" which appeared out of nowhere. This makes me want to see the proof in the B&J editions. (I've used some of these B&J editions at the college library up the road, so I know they're there.) This is not the first time I've been unhappy with the BB&J 4th edition. BillWvbailey (talk) 18:21, 17 November 2010 (UTC)
If you have access to an academic library this might be of interest -- otherwise it's $15 US to purchase or $3 to look at for 24 hrs: Rafael Morles-Bueno, "Technical opinion: non-computability is easy to understand", Association for Computing Machinery, Communications of the ACM, Vol. 38, Issue 8 (August 1995), p. 116, ISSN 0001-0782. The precis (the only part I have access to) says this: "It is suggested that a programming language version of the busy-beaver problem would be a better example of unsolvability than the halting problem generally...". Bill Wvbailey (talk) 14:23, 18 November 2010 (UTC)
I made a pdf of the pertinent chapters of B & J 1974:34ff and have found what appears to be a serious oversight in the BB&J proof (BB&J 2002:43). BB&J kept the old drawings and didn't coordinate them with the formulas and text. If we are to keep the old drawing, the proof (p. 43) should be written as follows:
"The first thing we note is that if there is such a machine, call it BB, and the number of its states is k [not j], then we have
"(1) p(n+2k) ≥ p(p(n))
"The result is an (n+2k)-state machine of productivity p(p(n))...." (p. 43)
[Now the drawing Fig 4-3 and the formula coincide. And now they agree with the first three editions of B&J -- I checked all three.]
The rest of the proof would have to be rewritten to fix this mess. The original version B&J ends up with the nice 0≥1 if BB exists, as opposed to the more confusing ending of BB&J with its introduction of variables m, i, k = i+2j and their ending m > k, etc. (What finally got me over the hurdle was drawing two "function boxes" where n goes into the BB-hopper of k states, p(n) comes out, then p(n) goes into an identical BB-hopper of k states, and p(p(n)) comes out.) I have the B&J chapters in pdf. Send me a request on my talk page if you want to see them. Bill Wvbailey (talk) 18:52, 18 November 2010 (UTC)

--- If you can show that a function is noncomputable, is there a straightforward path to proving the matter it represents "undecidable"? The reason I ask is that, while Booth's published version in his 1967 text rather quickly produces an example of a noncomputable function that seems similar to the one r.e.s. is sketching above -- "[While] the maximum number that can be computed by an x-state machine if x is the number represented by the initial tape expression, is a total function, but it is not computable" (p. 383) -- it seems to take him a long time on a complicated winding road to where he finally displays an undecidable predicate (p. 389). Is there a quick route? [In his {b, 1} simplified machines his "states" are the tuples in the finite state machine: "If the control element has n states, the transition diagram describing its operation will have n rows and two columns"]. Bill Wvbailey (talk) 15:51, 11 November 2010 (UTC)

Any noncomputable total function gives an undecidable predicate: a function f from N to N is computable if and only if the predicate "f(n) = m" is decidable given n and m. Proof: the forward direction is easy. For the reverse, to compute f(n), test whether f(n) = 0, then test whether f(n) = 1, etc., and you will eventually find the right value because f is total. — Carl (CBM · talk) 17:10, 11 November 2010 (UTC)
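In sketch form (mine), the reverse direction is just an unbounded search:

# Sketch (mine) of the reverse direction: given a hypothetical decider
# equals(n, m) for the predicate "f(n) = m", a total f is computable by
# searching for the matching m.
from itertools import count

def equals(n, m):
    raise NotImplementedError  # the assumed decider for "f(n) = m"

def f(n):
    for m in count():       # test m = 0, 1, 2, ...
        if equals(n, m):    # because f is total, some m matches,
            return m        # so the search terminates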

--- Would it be of any help for me to cc (as text into an article in my namespace so as to not clutter up this article) the entire subsection "A Noncomputable Function"? Here Booth (1967) produces his noncomputable F_n(x) = max over i of [f_{n,i}(x)] = f_{n,i-max}(x) [where f_{n,i}(x) is the ith total function computed by all n-state machines]? BTW, this section follows immediately after a subsection titled "Denumerability of Turing Machines". It's in this chapter that he mentions the Busy Beaver in his homework problem in particular "5. Show that the busy beaver problem is unsolvable by demonstrating that we cannot find a Turing machine that can calculate η(n) for all values of n [where for an n-state machine η(n) denotes the value of [the highest number of 1's on its tape when it halts]]" (p. 396). Booth in this chapter references, with respect to computability, the texts of Davis 1958, Hermes 1965, Kleene 1952, and Minsky 1967; with respect to "the problem of computable and uncomputable functions" he references Rado's 1962 and Turing's 1936-7 papers. BillWvbailey (talk) 19:46, 11 November 2010 (UTC)

How long is it? If it's just a few paragraphs, that could be OK. If it's several pages, for copyright reasons maybe we shouldn't post it all here. — Carl (CBM · talk) 21:17, 11 November 2010 (UTC)

It's about 1 page long (standard text-book size text, about 7 paragraphs of various lengths). I've already precis'd the start and the finish in the "proof is invalid section"; here's a precis-summary of the rest, with a few direct quotes:

Booth begins with the set τ of all Turing machines that can compute total functions f(x) = y. From this set we select all n-state machines,

"and we let fn,i(x) correspond to the total function by the ith machine of this set. Using this function we can define a new function:
Fn(x) =maxover i[fn,i(x)] = fn,i-max
corresponding to the maximum value that any fn,i(x) will have for a given value of x....

F_n(x) is a total function and it will HALT. Then Booth goes on to demonstrate that

F_n(x) < F_{n+k}(x), k > 1
"To see this assume that we have an n-state machine that computes
F_n(x_1) = y_max = f_{n,i-max}(x_1)
"for a particular value of x_1. Using this we can define an n+2-state machine that computes a number y' such that y' > y_max.
"To illustrate how this is done .... [etc: here he shows formally how to modify the transition table; his machine-model is a busy-beaver-like Post-Turing hybrid with L, R, H, + alphabet A={ 1 } plus the blank, and a Turing transition table of 2 columns per row]
"This new machine will form y_max, will then increase y_max by at least 1, and then halt with y' > y_max as the final tape expression.
"These results can be used to define the function
G(x) = F_x(x)
"This function which gives the maximum number that can be computed by an x-state machine if x is the number represented by the initial tape expression, is a total function, but it is not computable."

He now shows that G(x) is not computable by contradiction: assume that there is some machine T_G that could compute G. [etc: T_G will have n_G states. Then F_{n_G}(x) will be greater than G(x) for all x > n_G, and this implies that F_{n_G}(n_G+k) will be greater than or equal to F_{n_G+k}(n_G+k), which contradicts F_{n_G}(x) < F_{n_G+k}(x).] (Taylor L. Booth 1967 Sequential Machines and Automata Theory, John Wiley and Sons, Inc, NY, no ISBN, LCCCN: 67-25924:382-383).

Bill Wvbailey (talk) 22:07, 11 November 2010 (UTC)