Talk:Nonstandard calculus


Old thread


If someone named Robinson made "one of the major mathematical advances of the twentieth century" then perhaps one of the editors of this article would be kind enough to say who is this "Robinson" character and what was the "three hundred year old problem". —Preceding unsigned comment added by 160.33.66.118 (talk) 01:29, 7 November 2008 (UTC)Reply

Examples obviously needed and maybe should precede the theorems. Examples are d/dx x^n and integration of the same functions, plus product and chain rules. CSTAR 21:25, 8 May 2004 (UTC)


Mergefrom tag


Infinitesimal calculus and nonstandard calculus are entirely different subjects.--CSTAR 02:47, 10 Jun 2005 (UTC)

Could you please explain? Thanks, Taco Deposit | Talk-o to Taco 02:57, Jun 10, 2005 (UTC)
Yes. Infinitesimal calculus is a very broad subject encompassing many different (incompatible with each other, some even failing to be self-consistent) approaches to using infinitesimals as a foundation for analysis. Infinitesimal calculus has deep historical roots which parallel much of the development of modern mathematics. There are various modern (correct) approaches to infinitesimals: Jet bundles for instance, microlocal analysis to name a few. Non-standard calculus is a much narrower subject, based on nonstandard analysis as developed by A Robinson. It is an example of an infinitesimal calculus, however, it should not be conflated with it.--CSTAR 03:09, 10 Jun 2005 (UTC)
As a natural continuation of infinitesimal calculus, perhaps non-standard calculus should be merged there. Tkuvho (talk) 01:25, 25 June 2010 (UTC)Reply

Note: the main merge discussion is at: Talk:Infinitesimal calculus#Merge. -- Radagast3 (talk) 01:15, 3 July 2010 (UTC)Reply

Disappointing


The current version of the article is a little disappointing as it plunges into transfer operators and internal functions almost as soon as it starts. Somehow a novice is looking for something more elementary than a sequel to non-standard analysis. If I were more competent in the area I would write an introduction to non-standard calculus. Anyone interested? Katzmik (talk) 11:08, 28 August 2008 (UTC)Reply

Error in proof of intermediate value theorem


The proof of the IVT currently in the article seems to rely on the claim that the positive hyperintegers are well-ordered, which claim it justifies by the transfer principle. But the principle does not apply here (as it applies only to first-order formulae, and well-orderedness is second-order) and the claim is obviously wrong: if M is an infinite positive hyperinteger, then {M − n : n ∈ ℕ} is a nonempty set of positive hyperintegers without a least element. Can the proof be rescued? Algebraist 11:13, 2 September 2008 (UTC)

Thanks very much for your correction. I think the answer to your question is the remark that the set you define is not internal. I thought transfer should apply to arbitrary order. Hopefully CSTAR or Arthur Rubin are watching and will straighten us out. Katzmik (talk) 11:43, 2 September 2008 (UTC)Reply
I think the proof is correct; the (positive) hyperintegers are not well-ordered, as Algebraist quite properly noticed, but any non-empty internal set of positive hyperintegers has a first element. (I can prove this in the ultrafilter representation, but I'm not sure of the details of the transfer principle in Robinson's.) But it's been a while since I've done this. — Arthur Rubin (talk) 13:11, 2 September 2008 (UTC)
It is correct that any non-empty internal subset of the non-negative hyperintegers has a least element. This follows by transfer. For the IST approach see Nelson's 1977 paper. For the model-theoretic approach see Albeverio, Fenstad, Hoegh-Krohn, Lindstrom.--CSTAR (talk) 15:34, 2 September 2008 (UTC)
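For readers following the thread, here is one way the transfer argument sketched above can be spelled out (an editorial illustration, not part of the original discussion; the details depend on which framework for non-standard analysis is used). The first-order statement

\[
\forall A \in \mathcal{P}(\mathbb{N})\;\bigl(A \neq \varnothing \rightarrow \exists m \in A\;\forall n \in A\;(m \le n)\bigr)
\]

is true in the standard universe, so by transfer

\[
\forall A \in {}^{*}\mathcal{P}(\mathbb{N})\;\bigl(A \neq \varnothing \rightarrow \exists m \in A\;\forall n \in A\;(m \le n)\bigr)
\]

holds as well, and the elements of *P(ℕ) are precisely the internal subsets of *ℕ. The counterexample set {M − n : n ∈ ℕ} mentioned above is external, so it does not contradict this.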
This comment should certainly appear somewhere, how about hyperinteger? Katzmik (talk) 15:37, 2 September 2008 (UTC)Reply
Thanks, I am very relieved. Please improve the article by adding whatever clarifications may be appropriate. Katzmik (talk) 13:15, 2 September 2008 (UTC)Reply
Another matter: the article currently makes much of the shortness of this proof of the IVT compared to standard proofs. And indeed it is much shorter than the standard proof by interval bisection. But it's just as short as (and very similar to) the standard proof directly from the least-upper-bound axiom. Algebraist 14:28, 2 September 2008 (UTC)Reply
Looks like you are right. Let's find a standard theorem that requires nested intervals for its proof, and include it in this page. The advantages of NSA should be clearer then. Katzmik (talk) 14:34, 2 September 2008 (UTC)Reply
I seem to recall a theorem involving Hilbert spaces in which the non-standard proof was much shorter than the standard proof. I can't recall what it was, though. — Arthur Rubin (talk) 14:49, 2 September 2008 (UTC)Reply
Re: Your question I can't recall what it was. You may be thinking of the spectral Theorem for arbitrary self-adjoint operators. The non-standard proof is very elegant and short. --CSTAR (talk) 15:37, 2 September 2008 (UTC)Reply
Not really calculus, though.... — Arthur Rubin (talk) 14:53, 2 September 2008 (UTC)Reply

I think the proof of the fundamental theorem of calculus is a good candidate. It seems as though a non-standard proof may be shorter and more lucid. Katzmik (talk) 15:30, 2 September 2008 (UTC)Reply

spectral theorem for selfadjoint operators


Regarding comment by CSTAR: this sounds interesting. How does the non-standard proof go? Katzmik (talk) 12:03, 3 September 2008 (UTC)Reply

The details vary depending on the version of NSA used, and the comments that follow ignore them entirely.
Suppose T is a bounded self-adjoint operator on a Hilbert space H. Consider the operator *T on *H. There is an (internal) hyperfinite-dimensional space K containing all standard elements of *H. The compression
\[
T_K := P_K \, {}^{*}T \, P_K \big|_{K} \qquad (\text{where } P_K \text{ denotes the internal orthogonal projection of } {}^{*}H \text{ onto } K)
\]
is a self-adjoint operator on a hyperfinite-dimensional space, so the transferred version of the finite-dimensional spectral theorem applies. Using the Loeb measure construction, one can use this to show the existence of a projection-valued measure which represents the original operator T. See Lindstrom's article, p. 65, in
Nonstandard Analysis and its Applications, Nigel Cutland Ed., London Mathematical Society Student Texts, 10, 1988.
There are some details missing there for the complete spectral theorem, but they can easily be reconstructed.--CSTAR (talk) 17:34, 3 September 2008 (UTC)Reply
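An editorial sketch of how the argument continues (assuming the compression T_K and the internal projection P_K as above; this is an illustration, not a claim about the exact form in Lindstrom's article): by transfer of the finite-dimensional spectral theorem, the hyperfinite-dimensional operator T_K diagonalizes internally,

\[
T_K = \sum_{i=1}^{N} \lambda_i \,\langle e_i, \cdot\rangle\, e_i,
\]

with N a hyperfinite integer, hyperreal eigenvalues λ_i, and an internal orthonormal basis {e_i} of K. Taking standard parts of the resulting internal spectral measure, via the Loeb measure construction, one obtains the projection-valued measure representing the original operator T.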

calculus and non-standard calculus


This discussion is copied from WP math at Wikipedia_talk:WikiProject_Mathematics#calculus_and_non-standard_calculus

I am a standard mathematician, in the sense that I was not formally trained either in Robinson's hyperreals or any other variety of pseudo or surreals. At the same time, I find the infinitesimal approach to be a helpful educational tool (aside from the issue of its merit as a research tool). I have correspondingly added some links in the standard calculus pages such as uniform continuity to the non-standard page explaining helpful and simpler approaches to some of these standard results. I hope to forestall edit wars and reverts by opening the discussion of the issue here, it being understood that everyone will abide by the results of such a discussion. Katzmik (talk) 11:43, 1 September 2008 (UTC)Reply

I agree that non-standard analysis is an interesting area - it certainly explains why an informal argument using infinitesimals so often produces the same answer as a rigorous epsilon-delta limits argument. However, I am not so sure about the claims that proofs in non-standard calculus are simpler or clearer than standard proofs. They are only simpler if you start from the base-camp of the fundamentals of non-standard analysis - and it seems to me that it takes quite a high level of mathematical maturity to reach that base camp on foot. Of course, the teacher can "helicopter in" their students by saying "trust me, we can make all this rigorous if we have to" - but then you are essentially falling back on an informal argument, and not demonstrating a rigorous proof. Gandalf61 (talk) 12:31, 1 September 2008 (UTC)Reply
As your message is a little bit vague concerning the question of rigor, I would like to clarify that non-standard analysis is completely rigorous and does not rely on any axioms beyond standard ZFC that standard mathematics is built upon. Katzmik (talk) 12:48, 1 September 2008 (UTC)Reply
Yes, I understand that - I didn't mean to imply that non-standard analysis is not rigorous. I was trying to say that by the time you expand on the rigorous foundations that underlie a non-standard analysis proof of the intermediate value theorem, for example, then you no longer have the nice simple two-line proof that you thought you had. In other words, if you start from something close to first principles, I doubt that non-standard proofs are generally any simpler, shorter or easier to grasp than standard proofs. You asked for a discussion, so that is my contribution - let's see what others think. Gandalf61 (talk) 13:00, 1 September 2008 (UTC)
It is true that non-standard analysis requires foundational work. On the other hand, so does the construction of the real numbers. Most calculus courses nowadays teach neither equivalence classes of Cauchy sequences, nor Dedekind cuts. The question is, how far back to the axioms do you want to go when you teach today's undergraduates. At any rate, the fact that both Newton and Leibniz DID use infinitesimals says something in their favor when it comes to assessing their usefulness in actually explaining what goes on in these theorems. Katzmik (talk) 13:45, 1 September 2008 (UTC)Reply
The sense in which infinitesimals are used in the work of Newton and Leibniz is of course completely different from the way they are used in the work of Robinson and other nonstandard analysts. To obscure this difference is to do a disservice to Robinson's work, which has always been received by the mathematical community as rigorous and correct, whereas infinitesimals were viewed with great uneasiness for about 200 years until the important work of Cauchy and Weierstrass which did away with them.
The pedagogical value of infinitesimals is an interesting question -- probably it is too rich to be satisfactorily dealt with here. One should begin, I think, with making the context clear: separate discussions are probably required for (i) standard freshman calculus, (ii) honors calculus (e.g. from Spivak's book), (iii) undergraduate analysis, (iv) graduate level and beyond. At the level (i), my opinion is that the vast majority of students will not understand -- and will not want to understand -- any deeply theoretical discussion of limits, continuity, etc. If as an instructor you nevertheless want to sneak in at least a little taste of this theoretical material (as I usually do), you have a real pedagogical challenge. One of the prerequisites for success here is to yourself have a very firm and flexible (so to speak) understanding of the theory, so that you can spin it around in any direction and present small snapshots of it in a coherent and palatable way. This is not easy to do, and I have grown to believe that most authors of calculus textbooks are not good enough at this -- when they decide they want to cover "theory", they pedantically rehash canned definitions from the textbooks one level up (which they never explicitly reference, for some reason). For this reason I would not at the present time even consider using a "nonstandard calculus textbook" because my own understanding of nonstandard analysis is not deep enough, and I think the average instructor (even one who is a research mathematician) would feel the same way.
On the other hand, some physical intuition and language is certainly appropriate in places: I have no qualms about writing down a formula for, e.g. the "differential work" and then putting an integral sign in front of it to get the work. I just don't say "infinitesimal", and I try to lightly remind the students with the connection to Riemann sums (although I would be perfectly happy to replace this with the corresponding allusion to some simpler but equivalent integration theory).
Concerning the passage: "Consider for instance the function f(x) = 1/x with domain the positive real numbers. This function is continuous, but not uniformly continuous, since as x approaches 0, the changes in f(x) grow beyond any bound. A clearer explanation of this phenomenon may be found at non-standard calculus." Indeed a clearer explanation is warranted, but can certainly be provided in the context of the epsilon-delta definition. Any or all of the following would be nice: (i) Directly compute |f(x+delta)-f(x)| and observe that, for any fixed delta, this quantity tends to infinity as x tends to zero. (ii) Discuss instead the example f(x) = x^2 on the real line (also done by a similar, but even simpler, direct computation) and/or restrict to f(x) = 1/x on (0,1] and note that that the domain is bounded but the image is unbounded. I will try to make some of these changes.
My feeling is that best would be to include material and/or links on nonstandard calculus/analysis in separate sections. It could be a nice advertisement for the nonstandard theory to give some quick proofs of standard theorems (modulo the machinery of nonstandard analysis, of course). Plclark (talk) 14:48, 1 September 2008 (UTC)Reply
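For concreteness, the direct computation suggested in point (i) above can be written out as follows (an editorial sketch; δ > 0 is fixed and x > 0):

\[
\bigl|f(x+\delta) - f(x)\bigr| \;=\; \left|\frac{1}{x+\delta} - \frac{1}{x}\right| \;=\; \frac{\delta}{x(x+\delta)} \;\longrightarrow\; \infty \quad\text{as } x \to 0^{+},
\]

so no single δ works for every x in the domain, and f(x) = 1/x is not uniformly continuous on the positive reals.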
To respond to your point about Newton and Leibniz, I would like to mention that there are at least two reasons to carefully distinguish between their infinitesimals and Robinson's. The first is that he is rigorous. The second is the existence of alternative theories of infinitesimals. Other than that, it can be, and it has been, said that Robinson put Leibniz's infinitesimals on a rigorous footing. I am not sure what you are objecting to exactly in such a formulation. Katzmik (talk) 15:00, 1 September 2008 (UTC)Reply
I don't find anything objectionable in that formulation. My points are: first, that Newton and Leibniz didn't have a "theory" of infinitesimals in the modern mathematical sense: they had a kind of algorithmic yoga that made both their contemporaries and descendants uncomfortable. Second, they didn't prefer differentials to epsilons and deltas, since they didn't have the epsilon-delta definition. Plclark (talk) 15:22, 1 September 2008 (UTC)Reply
At level (i) (undergraduate calculus), I would like to mention that I have personally taught calculus based on Keisler's wonderful book, to the satisfaction of both the students and the TA (who had, of course, to re-learn everything he learned in college). Katzmik (talk) 15:03, 1 September 2008 (UTC)Reply
From a google search I gather you mean http://www.math.wisc.edu/~keisler/calc.html. I am not familiar with this book. Could you say something about how a course based on this book would be different -- for the students -- than a more traditional course? I notice that Keisler's preface claims that infinitesimals are more easily understood than limits -- did your students find that to be the case? How much of a freshman calculus class depends upon what's inside the black box named "limit" anyway? (These are not rhetorical questions; I'm interested to hear your response.) Plclark (talk) 15:22, 1 September 2008 (UTC)Reply
A course based on Keisler's book would be very different. For example, a considerable amount of time is spent in a standard course on various manipulations involving limits. A special term "an indeterminate expression" is introduced to describe things like 0/0. Various theorems are proved about manipulations of limits, sums, products, what happens when the numerator goes to zero, what happens when it goes to infinity. In Keisler's approach this is translated into a set of rules about manipulating hyperreals. For example, an infinitesimal divided by a non-zero standard real is necessarily infinitesimal. An infinite number divided by a finite number is infinite, etc. Once the student has mastered these rules, one can proceed to derivatives, continuity, etc. Whether or not one gets into epsilon-delta proofs in the standard course, the hyperreal approach is more convenient since one has at one's disposal a collection of ideal objects that simplify all definitions, statements, and proofs. To give an analogy, consider minimal surface theory. Minimal surfaces have certain properties not possessed by arbitrary surfaces. If one did not assume the existence of minimal surfaces and had to satisfy himself with statements about surfaces that are only approximately minimal in a suitable sense, obviously this would complicate the statements about them. Katzmik (talk) 12:06, 2 September 2008 (UTC)
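As a concrete illustration of the rule-based computations described above (an editorial sketch in roughly Keisler's style; st denotes the standard part and Δx a nonzero infinitesimal):

\[
\frac{dy}{dx} \;=\; \operatorname{st}\!\left(\frac{(x+\Delta x)^{2} - x^{2}}{\Delta x}\right) \;=\; \operatorname{st}\bigl(2x + \Delta x\bigr) \;=\; 2x,
\]

using only the rules that a nonzero infinitesimal may be cancelled from a quotient and that the standard part of a real number plus an infinitesimal is that real number.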
Okay, here's a question that occurs to me. The standard proof of the intermediate value theorem depends in an explicit way on the completeness of the reals, so it is not surprising that the theorem is not true for functions over a non-complete field such as the rationals. But one of your brighter students argues as follows: let's construct "hyperrational numbers" in an analogous way to the construction of hyperreals; then we apply the non-standard calculus proof of the intermediate value theorem to functions over these "hyperrational" numbers; since the intermediate value theorem is true for functions over "hyperrationals" then it must also be true for functions over the rationals. Where has your student gone wrong ? The non-standard proof of the intermediate value theorem does not seem to depend explicitly on completeness - so presumably their mistake is either in the assumption that "hyperrationals" can be constructed with analogous properties to hyperreals, or in the assumption that properties of functions over "hyperrationals" transfer back to properties of functions over rationals. How do you explain their mistake in terms that they can understand ? Gandalf61 (talk) 13:18, 2 September 2008 (UTC)Reply
Great question, hope I have the right answer (I mentioned already that I have no professional training in non-standard analysis, although I seem to have been able to defend myself successfully at the talk page of non-standard calculus recently). At any rate, π can be made to look like a hyperrational by taking the n-th continued fraction convergent with n an infinite hyperinteger. We see that the standard part of a hyperrational may be irrational. In fact, perhaps the shortest construction of the reals out of the rationals may be via hyperrationals. Katzmik (talk) 13:27, 2 September 2008 (UTC)
Indeed. Another way of looking at this is that if you apply the ultrapower construction to the rationals, then any sequence converging to an irrational will correspond to a finite hyperrational that is a non-infinitesimal distance from any rational; in other words a finite hyperrational without a rational standard part. Algebraist 13:49, 2 September 2008 (UTC)Reply
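A sketch of the continued-fraction route mentioned two comments above (editorial; p_n/q_n denotes the n-th convergent of the continued fraction of π):

\[
\left|\pi - \frac{p_n}{q_n}\right| \;<\; \frac{1}{q_n^{2}} \qquad \text{for all } n \in \mathbb{N},
\]

so by transfer the same inequality holds for every hyperinteger n; when n is infinite, 1/q_n^2 is infinitesimal, and the hyperrational p_n/q_n is infinitely close to π. Its standard part is therefore the irrational number π, illustrating how a finite hyperrational can fail to have a rational standard part.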
Okay, I see - so you can define hyperrationals and the IVT does apply to functions over hyperrationals but when you pull back the standard part of a hyperrational you might find you no longer have a rational number. Hmmm ... these non-standard proofs seem to have a few traps for the unwary. I still prefer the standard proofs - they might be longer, but their assumptions are visible up-front. Gandalf61 (talk) 14:21, 2 September 2008 (UTC)Reply
As far as uniform continuity is concerned, a rigorous explanation can certainly be written down to explain why x^2 is not. My point was that there is built-in logical complexity here, represented by the sheer number of the quantifiers, which usually baffles the average student. One argument in favor of infinitesimals is that all this foundational work is done once and for all in the construction of the hyperreals. The epsilon-delta arguments we are all so good at, from the non-standard viewpoint, are merely tedious repetitions of the same thing, combined with the fact that one repeatedly has to solve an inverse problem, which is notoriously difficult (can you hear the shape of a drum?). Katzmik (talk) 15:07, 1 September 2008 (UTC)
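For comparison, the two formulations of uniform continuity side by side (an editorial illustration; f is a real function on an interval I and *f its natural extension):

\[
\text{standard:}\quad \forall \varepsilon > 0\ \exists \delta > 0\ \forall x, y \in I\ \bigl(|x - y| < \delta \rightarrow |f(x) - f(y)| < \varepsilon\bigr);
\]
\[
\text{non-standard:}\quad \forall x, y \in {}^{*}I\ \bigl(x \approx y \rightarrow {}^{*}\!f(x) \approx {}^{*}\!f(y)\bigr).
\]

The alternation of quantifiers (∀∃∀) in the first formulation is what the comment above refers to as logical complexity.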
I hate to disagree with Katzmik, but personally I find the epsilon-delta definition of uniform continuity much easier to understand than the non-standard reals version. JRSpriggs (talk) 16:25, 1 September 2008 (UTC)Reply
I responded at your page. Katzmik (talk) 12:56, 2 September 2008 (UTC)Reply

Dirichlet function


These comments were copied from talk

Hi, Thanks for your stimulating questions as WP math. Hope you liked my answers. Incidentally, Keisler has a several-page document about hyperreals in the classroom that he wrote as a response to an attack on Robinson by Errett Bishop thirty years ago. Katzmik (talk) 13:51, 2 September 2008 (UTC)Reply

Yes, your answers are great, and I can see why you find non-standard calculus so interesting. Afraid I still think the good old standard epsilon-delta methods are clearer, though ! Cheers. Gandalf61 (talk) 14:24, 2 September 2008 (UTC)Reply
Great! Wiki's the place for you. Welcome to plurality :) Katzmik (talk) 14:37, 2 September 2008 (UTC)Reply
I thought this might interest you, and the WPM thread is huge enough already, so here goes: there are in fact two ways in which the non-standard proof of the IVT for the rationals can fail. The first occurs for functions such as f from [1,2] to {−1,1}, f(x) = −1 if x^2 < 2, f(x) = 1 otherwise. This is continuous on the rationals in the usual sense, but not in the non-standard sense. That is, the natural extension of f to the hyperrationals maps infinitesimally-close arguments to non-close values. Thus the non-standard proof doesn't get off the ground here. The other way is typified by g(x) = x^2 − 2 (again on [1,2] for the sake of argument). This is continuous in both the standard and non-standard senses, and the non-standard proof of the IVT almost works: it gives a hyperrational a such that g(a) is infinitesimal. However, a is not close to any rational, so the proof again fails. Algebraist 17:10, 2 September 2008 (UTC)
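A worked version of the second example (an editorial sketch, using the hyperrationals obtained from an ultrapower of the rationals): the hyperfinite-partition argument applied to g(x) = x^2 − 2 on [1,2] ∩ ℚ produces a hyperrational a with g(a) infinitesimal, so

\[
a^{2} \approx 2 \quad\Longrightarrow\quad \operatorname{st}(a) = \sqrt{2} \notin \mathbb{Q},
\]

and no rational c with g(c) = 0 can be extracted; the conclusion of the IVT indeed fails over the rationals even though its non-standard proof "almost works".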
Yes, that is interesting. So some properties of functions over the rationals can fail to translate to properties of equivalent functions over the hyperrationals, and other properties can fail to translate in the other direction. The non-standard proofs using hyperreals obviously avoid these non-trivial pitfalls, or maybe they depend on some axiom or principle or property of hyperreals that does not apply to hyperrationals. Makes me even more convinced that the non-standard proofs are not as short and simple as they appear at first glance. Gandalf61 (talk) 10:31, 3 September 2008 (UTC)
Perhaps I am wrong but I think Algebraist meant his remark not as a criticism of the non-standard proof of IVT, but rather as an interesting remark concerning the behavior of the hyperrationals. As you recall, one can actually construct the real line as the quotient of the finite hyperrationals by the ideal of infinitesimals. One explanation for why this is true is that one can construct the hyperreals by analogy with the reals. To construct the reals, we start with rational Cauchy sequences, and then introduce a suitable equivalence relation. To construct the hyperreals, we start with arbitrary rational sequences, and introduce a suitable equivalence relation using an ultrafilter (whose existence depends on the AC). Since the collection of Cauchy sequences is contained inside the collection of all sequences, it is not unreasonable to expect that the construction of the reals will be subsumed in the construction of the hyperreals starting with rational sequences. This is actually all there is to constructing the hyperreals if that's all you want. The additional complications come from the fact that one also wants the transfer principle, etc. Katzmik (talk) 11:49, 3 September 2008 (UTC)
Well, that's fine, but there are still several parts of non-standard calculus that I find very non-intuitive. One example is the non-standard definition of continuity, which seems to be rather different from its standard counterpart. We know that the Dirichlet function I_Q is nowhere continuous over the reals. But if we define an equivalent function over the finite hyperreals by
\[
{}^{*}I_{\mathbb{Q}}(x) = I_{\mathbb{Q}}(\operatorname{st}(x)),
\]
then I think this is continuous everywhere in its domain (using the non-standard definition of continuity now) because if x and y are infinitely close then they have the same standard part and so *I_Q(x) = *I_Q(y). Is this correct, or have I gone astray somewhere ? Gandalf61 (talk) 13:13, 3 September 2008 (UTC)

The short answer is that the function as you defined it is not an internal object because your definition uses the standard part function which is not an internal object. If you are interested I can try to elaborate further. Katzmik (talk) 13:26, 3 September 2008 (UTC)Reply

So does the non-standard definition of continuity only apply to internal objects ? I didn't see that restriction mentioned in non-standard calculus. Gandalf61 (talk) 13:43, 3 September 2008 (UTC)
True, non-standard calculus is only an appendix to non-standard analysis. Internal sets are discussed there. Incidentally, Keisler is a superb expositor. In addition to the calculus book, his website also provides a 200-page introduction to the theoretical underpinnings of the theory. Katzmik (talk) 13:50, 3 September 2008 (UTC)Reply
To Algebraist above (sorry about the delay):
"The first occurs for functions such as f from [1,2] to {0,1}, f(x)= -1 if x2 < 2, f(x)=1 otherwise. This is continuous on the rationals in the usual sense, but not in the non-standard sense."
is not quite correct. The non-standard formulation of f being continuous on X is that if y is in *X, and st(y) is in X, then st(*f(y)) = f(st(y)). Note carefully the clause requiring st(y) to be in X. — Arthur Rubin (talk) 14:09, 3 September 2008 (UTC)
This is very interesting. It may be worth including a discussion of such an example in the article itself. Katzmik (talk) 14:16, 3 September 2008 (UTC)Reply
Thanks for that. I suppose this is what comes of trying to learn a subject by glancing at Wikipedia and making it up as you go along. So the actual failure point in the proof here is that we get two arguments infinitesimally close together, at one of which f is negative and at the other positive, but we can't conclude that the values of f at these arguments are close to each other (and hence to zero) as they don't have standard part in our domain. Algebraist 14:22, 3 September 2008 (UTC)Reply
(To Gandalf, finally, after a few ecs) There are three obvious ways (that I can see; there could be more) of topologizing the hyperreals: by the order topology, by the topology in which a set is open iff it contains everything infinitesimally close to its members, and by the natural extension of the topology on the reals. These topologies are all horrible: the first is totally disconnected, the second is the product of a discrete space with a trivial space, the third can't distinguish between different positive infinities, and so on. As a result, any definition of continuity for functions from the hyperreals to the hyperreals is going to be pretty stupid. What does work is the non-standard definition of continuity of functions from the reals to the reals: such a function is continuous if its extension to the hyperreals is continuous (wrt the second of these topologies) at all finite points, and is uniformly continuous if the extension is also continuous at infinite points. I think the lesson might be that the hyperreals are a tool for studying the reals, not an object of study in their own right. Algebraist 14:24, 3 September 2008 (UTC)Reply
Wow - this just keeps on getting more and more tangled. So, having dug under the surface a bit, let's see what has come to light. Hyperreals have (at least) three different topologies, which are all "horrible". There is a non-standard definition of continuity that seems to be hard to pin down, and can only be applied to a restricted set of functions anyway. And to really understand what does and does not work in the land of hyperreals and why, you have to get to grips with internal set theory, ultrafilters and ultraproducts. Now I am happy to accept that this is all rigorous and consistent, and I can see that it is beautiful and fascinating, and I can even understand that it might provide some deep insights. But what it is definitely not is a simple and intuitive approach to undergraduate calculus or analysis ! Give me the old epsilon-delta proofs any day - you know where you are with them. I am out of here before my head explodes ! Gandalf61 (talk) 14:51, 3 September 2008 (UTC)
I, too, enjoy old-fashioned British humor :) I would like to mention that most undergraduates probably feel this way about "old epsilon-delta". The difference is that you have learned it already. You may recall that the learning process took more than two days in September :) At any rate, as far as the non-standard definition of continuity goes, it certainly does apply to all real functions, contrary to what you wrote :) Katzmik (talk) 14:57, 3 September 2008 (UTC)
<Sound of head exploding> But ... but ... but if the non-standard definition of continuity applies to "all real functions" then how do you apply it to the Dirichlet function ? You said above that the non-standard definition of continuity cannot be applied to my attempt to extend the Dirichlet function to a hyperreal domain because I used the standard part function when extending it. Is there some other way of extending the Dirichlet function to avoid this problem ? Or is the Dirichlet function somehow not a "real function" ? Gandalf61 (talk) 15:39, 3 September 2008 (UTC)Reply
There is a canonical way of extending a real function to a hyperreal function (this is obvious in the ultrapower construction; I don't know how the others work). Your hyperreal version of the Dirichlet function is not the canonical extension to the hyperreals. It is a theorem that a real function is epsilon-delta continuous if and only if its canonical extension is continuous (in the sense that infinitesimally close arguments give infinitesimally close values) at every hyperreal whose standard part is in the domain of the original function. Algebraist 17:14, 3 September 2008 (UTC)Reply
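In the ultrapower construction the canonical extension mentioned above can be written down explicitly (an editorial sketch; [a_1, a_2, a_3, ...] denotes the equivalence class of a sequence of reals modulo the chosen ultrafilter):

\[
{}^{*}\!f\bigl([a_1, a_2, a_3, \dots]\bigr) := \bigl[f(a_1), f(a_2), f(a_3), \dots\bigr].
\]

This is well defined because two sequences agreeing on an ultrafilter set have images agreeing on that same set, and it restricts to f on the standard reals.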
<Finds pieces of head. Reassembles head with duct tape. Puts left over pieces at bottom of tool box.> Okay, quite willing to believe that my amateur jerry-built extension of the Dirichlet function is not the canonical extension (should I say the canonical extension or a canonical extension ? No idea - let's come back to uniqueness some other time). So before I can use the non-standard proofs of IVT etc. I really ought to (a) understand how to build a canonical extension of any given real function and (b) understand how to prove the theorem you just mentioned showing that the two definitions of continuity are equivalent as long as we stick to canonical extensions. And, following Katzmik's post below, these canonical extensions might be nonconstructive functions (is that the right term ?). So - hands up - who still thinks this is simpler than epsilon-delta proofs ? Anyone ? Gandalf61 (talk) 19:10, 3 September 2008 (UTC)Reply
Interesting. Your comments at Bishop_''vs''_Keisler talk page would be welcome :) Katzmik (talk) 12:13, 4 September 2008 (UTC)Reply
Perhaps you agree then with Errett Bishop's critique of Keisler's book, see discussion at WP math. Katzmik (talk) 15:14, 3 September 2008 (UTC)Reply

Dirichlet function bis


To answer your question about the extension of the Dirichlet function, I will apply the extension principle which states that every real function f has a hyperreal extension f* (the professionals seem to prefer the notation *f which proves conclusively that I am not a hyperprofessional). In particular, there exists a hyperreal extension of the Dirichlet function. As you know, the axiom of choice is used in all these constructions, therefore certain things cannot be made very constructive, which is why Bishop checked out of this particular club (as well as Connes, apparently; see non-standard analysis and the interesting discussion at the talk page there). Katzmik (talk) 15:48, 3 September 2008 (UTC)

As far as duct tape goes, it may be helpful to illustrate the above as follows. Let us show that the natural extension of the Dirichlet function is not continuous at π. Consider the continued fraction approximation a_n of π. Now let the index n be an infinite hyperinteger. Then the hyperrational a_n is infinitely close to π. Thus the natural extension of the Dirichlet function takes different values at these two infinitely close points, and your head is whole :) Hope this helps. Katzmik (talk) 12:11, 4 September 2008 (UTC)
I think there's an even easier method. If you want to prove that this function is discontinuous in standard analysis, you would probably make use of the fact that between any two rationals there's an irrational. By the transfer principle, the same statement is valid for the hyperreals. So let r be a rational, and let another rational be given by r'=r+1/n, where n is an infinite hyperinteger. Between r and r' there is an irrational hyperreal number, so discontinuity follows directly from the definition.--76.167.77.165 (talk) 23:26, 15 March 2009 (UTC)Reply
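Spelled out slightly, the argument in the preceding comment runs as follows (an editorial sketch; D denotes the Dirichlet function, equal to 1 on the rationals and 0 elsewhere, r is a rational, and N is an infinite hyperinteger):

\[
r' = r + \tfrac{1}{N} \approx r, \qquad \text{and by transfer there is a hyperirrational } s \text{ with } r < s < r',
\]

so s ≈ r while *D(r) = 1 and *D(s) = 0; infinitely close arguments are sent to values that are not infinitely close, so D is discontinuous at r.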

Example


In the first example of Nonstandard Calculus, it should be an "equality" if delta x is “infinitesimal”, and an "approximate equality" if delta x is “sufficiently small”, using the Wikipedia definition of infinitesimal. Anyone would understand the point made, but one purpose of creating a Nonstandard Calculus is to get rid of a lack of rigor in symbol manipulation, so being a hair splitter is appropriate. (Of course, what is written is true, since it would be an approximate equality if delta x were zero, since zero = infinitesimal, but then there is no point being made.) EricDiesel (talk) 21:48, 8 September 2008 (UTC)

I am not sure I agree with you, but before I comment, could you please clarify the following. When you say approximate equality, are you referring to the symbol ≅ ? Katzmik (talk) 11:25, 9 September 2008 (UTC)
  • Yes, I was referring to the symbol "≅". Math encyclopedias are expected to give the flavor of a topic without all of the rigor. But for nonstandard analysis (calculus), the flavor is the rigor. (Plus the fact that you can divide by "zero".) First, let us add a "Let x be a real number" at the very beginning (for simplicity of the example), and change "infinitesimal" to "sufficiently small" so the sentence is true. Then modify to "By defining 'when delta-x is infinitesimal', this formal calculation can be completely justified by non-standard analysis, when delta-x is infinitesimal".
  • For some, infinitesimals don't exist, so "The idea is to define them into "existence", from the empty set up, so you can multiply and divide with them like numbers, formally." EricDiesel (talk) 01:09, 10 September 2008 (UTC)
How would it be useful to talk about numbers which are 'sufficiently small' in an article on nonstandard calculus? Phrases like 'sufficiently small' are hallmarks of epsilon-delta analysis, and part of the point of the nonstandard approach is to avoid them. The naive argument (as used by Newton, Leibnitz and such) involves infinitesimal increments, and modern rigorous nonstandard analysis formalizes this naive argument directly, avoiding Weierstrass-style locutions like 'sufficiently small'. The purpose of this small section is to point out how direct the formalization is. The only thing I dislike about it at present is that it uses a rather curious symbol (surely that means 'is isomorphic to'?) for approximate equality, and one different from that used in the rest of the article. Algebraist 01:19, 10 September 2008 (UTC)Reply
Hmm. Well put. EricDiesel (talk) 02:41, 10 September 2008 (UTC)Reply
I have clarified the "Example" section and renamed it to "Motivation". The "Example" heading gave the impression that this is an example of a non-standard calculus argument, whereas in fact it is a "naive calculation" that can be formalised by non-standard calculus (or indeed, taking a different route, by standard calculus). Gandalf61 (talk) 09:38, 10 September 2008 (UTC)
An encyclopedia article can be directed toward both a general user and a specialist by having general summaries and examples at the beginning, then getting more technical later. When I came across “nonstandard calculus” on a talk page elsewhere, I was curious as to what it was. I do probability and struggled through a nonstandard analysis book back when I was a teenager, so I am a very slight notch above a general reader. The Algebraist comment indicates that I did not understand what the symbols in the example referred to; this knowledge appears to be presumed of specialist readers who came here after learning the subject, not before. It may be impossible to write an article on the subject for a general user like myself, but if it is not, then the pre-technical example should be defined and more readable. EricDiesel (talk) 16:38, 10 September 2008 (UTC)
Perhaps the comments at standard part function and Bishop-Keisler controversy (namely, Bishop's quotes from Keisler) could be helpful. Katzmik (talk) 12:08, 11 September 2008 (UTC)Reply
Thnx, I did, but following links to catch up with definitions, just to read the motivating example, is a bit tricky, for me at least. I went to this article, after seeing it mentioned on the talk page discussing, of all things, Sarah Palin's church, thinking I would quickly get an idea of what nonstandard calculus was about. (I was a friend of Alonzo Church when I was a teen, and had trouble reading the nonstandard analysis book he suggested when I could not follow arguments in the most basic physics class, never getting their intuition that allowed dividing by zero-sized infinitesimals at the right time, an intuition I utterly lacked.) Am I correct in concluding from my scan of Wikipedia articles that nonstandard calculus is to be read after nonstandard analysis, a reversal from what one might have otherwise expected? If so, mention of this might be of help to the general reader, who would expect the opposite. EricDiesel (talk) 19:15, 13 September 2008 (UTC)
You are correct in surmising that non-standard calculus is a work in progress :) Katzmik (talk) 11:55, 14 September 2008 (UTC)Reply
Incidentally, Keisler's book is available online in pdf (see his homepage). He is an excellent expositor and his book is the best introduction to non-standard calculus. Furthermore, the book has an "epilogue" explaining some of the theoretical underpinnings of the subject. Once you master this material, you can make non-standard calculus what it should be! :) Katzmik (talk) 12:35, 14 September 2008 (UTC)

infinity category


Hi Arthur, Thanks for your edits. I see that you removed the "infinity" category at the bottom of the page. I thought Robinson's infinitesimals were a hands-on way of working with infinity. Why do you feel such inclusion is inappropriate? Katzmik (talk) 15:40, 17 September 2008 (UTC)Reply

I'm not sure what the intent of Category:Infinity is exactly, but I didn't think it fit. I still don't, but perhaps it could be in a logical subcategory. — Arthur Rubin (talk) 17:47, 17 September 2008 (UTC)Reply
Infinitesimal is in the category, and probably should not be; Category:Infinitesimals as a subcat might satisfy both of you. Septentrionalis PMAnderson 22:14, 17 September 2008 (UTC)Reply

Quantifier complexity


The discussion of non-uniform continuity ended, in an earlier version, with the following comment:

By way of comparison, in the standard approach one must show that there exists an ε such that for every δ there exist x and y such that |x − y| < δ while |ƒ(x) − ƒ(y)|>ε. This type of calculation is generally considered more difficult due to its quantifier complexity, see comment in Kevin Houston's book.

This comment has been removed by another editor, who has not responded to my request for a clarification. I would like to open a discussion as to the appropriateness of comparing the non-standard and the standard arguments in such a fashion. Katzmik (talk) 13:21, 30 October 2008 (UTC)Reply

I'd give Thenub a day or so to respond. I think the removed text was OK, although we really should include the page numbers for the Houston reference. I am curious what argument Fitzpatrick makes, but I don't have the book. Thenub, can you sketch it here? — Carl (CBM · talk) 13:30, 30 October 2008 (UTC)Reply
A proof is given at Uniform continuity#Properties, which might be the same one. Katzmik (talk) 14:14, 30 October 2008 (UTC)Reply

Sure, I can sketch it. You can define uniform continuity in terms of sequences: basically, for any pair of sequences (not necessarily convergent) x_n and y_n with x_n − y_n → 0, we have ƒ(x_n) − ƒ(y_n) → 0. To see that x^2 doesn't satisfy this, you can take the sequences x_n = n + 1/n and y_n = n; then (n + 1/n) − n = 1/n → 0, but (n + 1/n)^2 − n^2 = 2 + 1/n^2, which does not tend to 0. Sorry for my delay in providing a clarification; it was on its way, I just had a few other things to do this afternoon. Maybe I misread the way that the section ended; there could be some collision over what is meant by standard. I took the term standard to mean standard versus non-standard calculus, and I read "must" to mean that this is the only way to solve the problem. Which seemed a bit off to me, since some authors of standard calculus texts choose a definition that also allows for easy calculations as in the non-standard setting. I feel Fitzpatrick's proof is more in the spirit of the proof on this page than the proof at uniform continuity. Thenub314 (talk) 14:53, 30 October 2008 (UTC)
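The same example reads almost identically in the non-standard setting (an editorial sketch; H denotes a positive infinite hyperreal and *ƒ the natural extension of ƒ(x) = x^2):

\[
x = H + \tfrac{1}{H} \approx H = y, \qquad {}^{*}\!f(x) - {}^{*}\!f(y) = \left(H + \tfrac{1}{H}\right)^{2} - H^{2} = 2 + \tfrac{1}{H^{2}} \not\approx 0,
\]

so x^2 fails the non-standard criterion for uniform continuity at the infinite point H, just as the pair of sequences above witnesses the failure of the sequential criterion.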

In a way, this is a powerful challenge to the non-standard approach. After all, hyperreals are defined in terms of sequences, with a LOT of additional material from logic, whereas here one just uses sequences. Once one develops an intuition for the hyperreals and is comfortable with infinitesimals, hyperintegers, etc, the "points" of R* seem more elementary than "sequences" of R. However, this will not convince someone who has not yet developed such an intuition... I hope there is a better answer, will keep thinking. Katzmik (talk) 15:03, 30 October 2008 (UTC)Reply
I am not attempting to issue a challenge. Presumably, one needs to study non-standard calculus before getting further into non-standard analysis. And what question are you trying to answer? Thenub314 (talk) 12:50, 31 October 2008 (UTC)
Thenub, thanks. That all makes sense. I had been blinded by the formal definition and didn't think about the sequential alternative definition. — Carl (CBM · talk) 13:00, 31 October 2008 (UTC)Reply
The sequential definition does involve quantification over sequences, which puts it outside of first-order logic. I think its effectiveness in teaching undergraduates requires proof. Katzmik (talk) 11:25, 27 November 2008 (UTC)Reply
I am not sure what sort of proof would be appropriate. All I can say is that I took the example from an undergraduate text used at a university where I taught. Thenub314 (talk) 12:59, 16 February 2009 (UTC)Reply

Confusing phrase.


I am a bit confused by the phrase:

While the thrust of Robinson's approach is that one can dispense with the limit-theoretic approach using multiple quantifiers,...

I think of the thrust of an argument as its most powerful part. And I think of Robinson's approach as his approach to defining infinitesimals. So at this point I am a bit confused, because I don't think of Robinson's work as having much to do with multiple quantifiers. I think what is meant here is:

"The main advantage to teaching infinitesimal calculus at in introductory level is that one can dispense with the limit-theoretic approach using multiple quantifiers,..."

But I am not sure, so I thought I should ask before I changed anything. Thenub314 (talk) 14:45, 31 October 2008 (UTC)Reply

Limit of a sequence.


We claim at limit of a sequence that the definition in the non-standard setting is quantifier-free. But in the definition we have the phrase "for every hyperinteger n". Am I missing something? Thenub314 (talk) 14:51, 31 October 2008 (UTC)

Good point. — Arthur Rubin (talk) 18:09, 31 October 2008 (UTC)Reply
Looks like it has been taken care of by CSTAR. Thanks! Thenub314 (talk) 08:06, 1 November 2008 (UTC)Reply

Cauchy Integral?


In the basic theorems section we talk about Cauchy integrals and Cauchy sums. I am unfamiliar with these phrases and it seems a lot like we mean Riemann integral and Riemann sum. Thenub314 (talk) 14:57, 31 October 2008 (UTC)Reply

Intermediate value theorem


We prove this statement, but something seems missing in the proof to me. Why is ƒ(c)=0? And where did we use the continuity of ƒ?

How does this reduce the complexity of the following proof in the standard setting?

Let c = sup{x ∈ [a, b] : f(x) < 0}. This exists because the set is bounded above by b and is non-empty since f(a) < 0.

Finally what is the quantifier complexity of a proof as opposed to a statement? Thenub314 (talk) 08:20, 1 November 2008 (UTC)Reply

We have f(x_i) > 0, while f(x_{i−1}) ≤ 0. Since f is continuous and x_i is infinitesimally close to x_{i−1}, we must have that f(st(x_i)) is a real number infinitesimally close both to a positive and to a non-positive hyperreal, and so zero. This is no simpler than (and indeed practically indistinguishable from) the standard least-upper-bound proof. I do not know what is meant by saying it has lower quantifier complexity. Algebraist 09:04, 1 November 2008 (UTC)
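Putting the completion above together with the hyperfinite partition used in the article, the whole argument can be sketched as follows (editorial; assume f(a) < 0 < f(b), let N be an infinite hyperinteger and x_k = a + k(b − a)/N for 0 ≤ k ≤ N):

\[
i := \min\{\, k \le N : {}^{*}\!f(x_k) > 0 \,\} \text{ exists because the set is internal}, \qquad {}^{*}\!f(x_{i-1}) \le 0 < {}^{*}\!f(x_i), \quad x_{i-1} \approx x_i.
\]

Setting c = st(x_i), continuity of f gives f(c) ≈ *f(x_{i−1}) ≤ 0 and f(c) ≈ *f(x_i) > 0, and a real number infinitely close both to a non-positive and to a positive hyperreal must be 0; this is where continuity is used.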
This is how I thought it would go. In that case I will incorporate your conclusion into the proof, and remove the business about quantifiers. Though, if there is no reason to compare to the standard proof, do you think it is worth including this proof?
Although I agree that the non-standard proof may have little advantage over the standard one, in my opinion, it merits inclusion --CSTAR (talk) 18:12, 1 November 2008 (UTC)Reply
I am not opposed to putting it back. Do you think we should include it under "Basic Theorems"? Thenub314 (talk) 20:14, 1 November 2008 (UTC)Reply

Courant on infinitesimals and befogging


Courant wrote on page 81 of Differential and Integral Calculus, Vol. I, as follows:

We must, however, guard ourselves against thinking of dx as an "infinitely small quantity" or "infinitesimal", or of the integral as the "sum of an infinite number of infinitely small quantities". Such a conception would be devoid of any clear meaning; it is only a naive befogging of what we have previously carried out with precision.

Katzmik (talk) 13:44, 11 January 2009 (UTC)Reply

So? Algebraist 14:01, 11 January 2009 (UTC)Reply

Here is more on page 101:

We have no right to suppose that first Δx goes through something like a limiting process and reaches a value which is infinitesimally small but still not 0, so that Δx and Δy are replaced by "infinitely small quantities" or "infinitesimals" dx and dy, and that the quotient of these quantities is then formed. Such a conception of the derivative is incompatible with the clarity of ideas demanded in mathematics; in fact, it is entirely meaningless. For a great many simple-minded people it undoubtedly has a certain charm, the charm of mystery which is always associated with the word "infinite"; and in the early days of the differential calculus even Leibnitz [sic] himself was capable of combining these vague mystical ideas with a thoroughly clear understanding of the limiting process. It is true that this fog which hung round the foundations of the new science did not prevent Leibnitz [sic] or his great successors from finding the right path. But this does not release us from the duty of avoiding every such hazy idea in our building-up of the differential and integral calculus.

Katzmik (talk) 14:08, 11 January 2009 (UTC)Reply

Why are you posting this here? Algebraist 14:10, 11 January 2009 (UTC)Reply
It's troubling that the paragraph on "some mathematicians" prior to Robinson quotes six fragments from Courant and none other. It seems intended to explain the setting for rigorous non-standard work.
I have boldly written "naive and vague or meaningless" which better fits the Courant six-pack. There is no mention of 'mystical' or 'mysticism' elsewhere in the article either. --P64 (talk) 18:08, 21 April 2011 (UTC)
Various other luminaries have vilified infinitesimals before and after Courant. Take your pick: D'Alembert, Cantor, Errett Bishop. Tkuvho (talk) 20:48, 21 April 2011 (UTC)Reply
Perhaps someone else is able to pick one or two wisely and provide references. Still, no form of 'myst*' should appear in the lead without specific support.--P64 (talk) 00:29, 22 April 2011 (UTC)Reply
In response to your first comment, d'Alembert described infinitesimals as "quackery", and Cantor described them as an "abomination" and "cholera bacillus of mathematics". I don't really understand your second comment. Courant is the one who used the term "mystical". We don't have to agree with Courant to quote him. Isn't he a legitimate secondary source? Tkuvho (talk) 04:57, 22 April 2011 (UTC)

subsection "Robinson's argument"


In the current version of the page, the second section is entitled "Robinson's argument". The section is rather technical. Moreover, the construction presented here is only one possible construction of the hyperreals. For example, Keisler uses a different construction. All such constructions are excellent, but I wonder if it would be appropriate to present material that's more basic at this stage. A reader confronted with infinite lists of axioms may not be motivated to continue reading. I assume part of our purpose here is to serve the larger public. I think detailed logical constructions are just as inappropriate at an early stage of non-standard calculus as equivalence classes of Cauchy sequences would be at an early stage of calculus. Katzmik (talk) 14:46, 11 January 2009 (UTC)Reply

I tend to disagree. It was the description first given in my undergraduate course on the subject. The course was designed for non-math majors and was geared for many people who have not had calculus. The idea is intuitively simpler, if technically more difficult, than many other treatments of the subject. Thenub314 (talk) 15:31, 11 January 2009 (UTC)
That's interesting. Let's see what other editors think. Incidentally, I did not mean to imply that this construction is worse than any of the others; I explicitly stated they are all excellent. I had the feeling any details of a construction at this point are somewhat premature. Was this description given in the first week of classes in your undergraduate course?? Katzmik (talk) 15:39, 11 January 2009 (UTC)Reply
See Daqu's comments below for an illustration of what sort of misconception I have in mind. Katzmik (talk) 19:10, 14 January 2009 (UTC)

Lead is over using quotations


It seems to me the new lead section is overusing quotes a bit. I think it should be possible to paraphrase Courant. Thenub314 (talk) 16:05, 11 January 2009 (UTC)Reply

I have relegated Courant to a footnote. Paraphrase may be a better solution. Clearly, too, another source is necessary in order to complete the paragraph adequately. --P64 (talk) 18:50, 21 April 2011 (UTC)Reply

copied from a talk page


You write "Believe it or not, there is a solution to the problem -- non-standard calculus."

I don't recall making such a comment on any of the wikipedia pages. Perhaps I made it on one of the talk pages? I do not recall the context. Katzmik (talk) 11:42, 27 November 2008 (UTC)Reply

There is no problem! Cauchy put the definition of limit on a fully satisfactory foundation. Because this is based on the standard definition of the real numbers, it is widely accepted among mathematicians (like myself).

I am familiar with non-standard calculus (e.g., the book by the Henles). But it addresses a different problem (viz., Find an axiom system in which infinitesimals are well-defined), and so it is based on different axioms from ordinary analysis.Daqu (talk) 10:02, 3 November 2008 (UTC)Reply

I am very busy right now but if you read through a few paragraphs of Non-standard analysis it might help clarify things. Katzmik (talk) 11:42, 27 November 2008 (UTC)Reply
No, thank you. I was referring to this comment, which Wikipedia claims you placed on my User talk page:
"Hi, I just noticed your comment at the talk page of limit of a function concerning the difficulty of the definition. I think your point is well taken. Believe it or not, there is a radically simple solution to the problem; see non-standard calculus. Katzmik (talk) 11:46, 23 October 2008 (UTC)"
It is to this comment that my above comments apply.
(Also, may I suggest not interposing your comment in the middle of mine? Thanks.)Daqu (talk) 09:00, 11 January 2009 (UTC)Reply
I responded at your talk page. Katzmik (talk) 10:27, 11 January 2009 (UTC)Reply
You wrote: "In response to your comment at my talk page: there is a common misconception that non-standard analysis is based on a different system of axioms as compared to ordinary analysis. This is not the case. "
You're joking, right? I never said it was built on a different set theory. But it uses a number system with infinitesimals, whose existence is due to an axiom stating that they exist -- an axiom whose presence virtually defines what "non-standard analysis" means. (I didn't criticize non-standard calculus, but rather stated that it did not solve the "problem" of defining a limit, because in my view there is no problem.) Anyway, I prefer my infinitesimals to be in the context of Conway's "surreal numbers", which doesn't merely make infinitesimals axiomatic, it constructs them.Daqu (talk) 07:16, 12 January 2009 (UTC)Reply
No I am not joking, NSA constructs them in the context of ZFC without introducing any new axioms, and in this sense it is a "conservative theory" (see Criticism of non-standard analysis). Meanwhile, the surreals lack the transfer principle, which is what makes infinitesimals relevant to calculus. Katzmik (talk) 10:24, 12 January 2009 (UTC)Reply
Re whether "non-standard analysis is based on a different system of axioms as compared to ordinary analysis," this is certainly true if you compare an axiomatic development of the reals with an axiomatic development of the hyperreals. How could it not be that way? If you could do a complete axiomatic treatment of both systems, and the axioms were all the same, then the two systems would be isomorphic. Likewise if you do an axiomatic treatment of the reals, then there's an axiom that states that 1 exists, but 1 is never constructed explicitly.--76.167.77.165 (talk) 23:10, 15 March 2009 (UTC)Reply

Terminology


Is "pointwise cluster" standard terminology? Thenub314 (talk) 19:05, 14 January 2009 (UTC)Reply

I don't know about pointwise but cluster has the advantage over the other term in that it is self-explanatory (I was not familiar with the other word until you introduced me to it, whether in mathematics or outside). Katzmik (talk) 19:09, 14 January 2009 (UTC)Reply
I haven't heard this term before and my copy of Keisler's book doesn't mention it in the index. Maybe Katzmik meant monad. Also, what's the difference between ≈ and ≃? Finally, it seems to me there are some quantifiers missing in the latest edit on uniform continuity (e.g., what's y quantified over?). I think the definition was right before. --CSTAR (talk) 19:14, 14 January 2009 (UTC)
I mentioned below that Keisler routinely leaves out the quantifiers in this type of condition, when the domain is clear from context. Perhaps leaving out the domain of y is a bit artificial, but it makes the pointwise property much clearer; you are the best judge. As far as clusters are concerned, they have been used informally, though not by Keisler. Katzmik (talk) 19:23, 14 January 2009 (UTC)
I was also puzzled by the two different symbols used for the "infinitely close" equivalence relation, namely, the ≈ in the introduction and the ≃ in the discussion of uniform continuity. If they are distinct, ≃ should be defined or explained. If they are not, ≃ should be replaced by ≈. Akvilas (talk) 16:50, 14 February 2009 (UTC)
Keisler uses \approx, so I've changed all the \simeqs to that. Algebraist 16:58, 14 February 2009 (UTC)Reply

pair of hyperreals


Keisler would unhesitatingly give the definition in terms of a single quantifier as I did. I think talking about a pair of hyperreals obscures the "pointwise" nature of the definition. There were a lot of scandalized comments at the talk page of uniform continuity until this sank in, and they were presumably by the experts, whereas the article is supposed to be generally accessible. Katzmik (talk) 19:13, 14 January 2009 (UTC)

Here y cannot be interpreted as running over the standard reals (which is a source of concern for the latest edit), since for example if x is infinite the condition x ≈ y will never be satisfied. Katzmik (talk) 19:27, 14 January 2009 (UTC)Reply
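A concrete case may help here (taking f(x) = x² on the real line, purely for illustration). For an infinite hyperreal H, put y = H + 1/H, so that y ≈ H; then
\[ {}^*\!f(y) - {}^*\!f(H) = \Bigl(H + \tfrac{1}{H}\Bigr)^{2} - H^{2} = 2 + \tfrac{1}{H^{2}} \not\approx 0 , \]
so x² fails the criterion, and it fails only at infinite points. Since no standard real is infinitely close to an infinite hyperreal, restricting y to the standard reals would never detect this failure.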

Intro needs rewriting

Intro sections of wikipedia articles generally introduce the reader to the topic at hand. As of today the lead of this article is full of quotations which I imagine are very insightful to someone who already understands exactly what Non-standard calculus is. One major problem is all of the quotations, as here at wikipedia we tend to write articles in our own words. The second problem is the fact that the intro presupposes that the reader knows many things, like who Robinson is, why he is relevant, and what the distinction between non-standard and standard calculus is. I am no expert in the matter, and after reading the intro I think I have a fuzzy idea of what this article is about, but I don't feel qualified to give the intro a complete rewrite like it needs. Thanks. mislih 14:27, 23 April 2009 (UTC)Reply

See also two sections above, on Courant's prejudice and lead quotations.
I have tried to take one giant step toward a good rewrite. The lead still gives undue weight to non-standard analysis, that is to modern infinitesimals rather than to their meaning for the calculus. It also relies too heavily on other articles about infinitesimals and epsilontics. Some of the point should be explained here.
Beside "vague and meaningless", I suppose the lead and its supporting quotations (now relegated to a footnote) should clearly make some point about "heuristic" and/or "naive" work in contrast to something else. --P64 (talk) 18:55, 21 April 2011 (UTC)Reply
There is some problem for me, here and in neighboring articles, because "the calculus [of infinitesimals]" and "[the] infinitesimal calculus" have been names for differential and integral calculus generally, regardless of whether infinitesimals are used in the argument. (Our articles infinitesimal calculus and calculus both make that point.) That makes it tricky to write about the use or non-use of infinitesimals in the argument.--P64 (talk) 19:25, 21 April 2011 (UTC)Reply

Quantifier complexity again

Two sections above may be relevant, Quantifier complexity and Intermediate value theorem. In particular, someone has previously asked what the quantifier complexity of a proof is.

Does quantifier complexity refer simply to the number of nested quantifiers? Perhaps the maximum in a clause, maximum in a statement, maximum in a proof, maximum in a textbook, whatever is said to be complex? Whatever the answer, it will be useful to provide a wikilink for "quantifier complexity" rather than simply "quantifier". --P64 (talk) 18:12, 21 April 2011 (UTC)Reply

There are actually two issues here, both the number of quantifiers and also alternations of quantifiers. The latter are thought to be particularly confusing to students. Tkuvho (talk) 07:57, 22 April 2011 (UTC)Reply
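For a concrete sense of what is meant (using the familiar limit definition only as an example): the (ε, δ) formulation
\[ \forall \varepsilon > 0\; \exists \delta > 0\; \forall x\; \bigl( 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon \bigr) \]
has three nested quantifiers with two alternations (∀∃∀), whereas the corresponding hyperreal formulation
\[ \forall x\; \bigl( x \approx a \text{ and } x \neq a \implies {}^*\!f(x) \approx L \bigr) \]
has a single unalternated quantifier. Both counts, nesting depth and number of alternations, are candidates for a measure of quantifier complexity.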

Motivation

Refer to section Non-standard calculus#Motivation. Do standard and non-standard approaches agree on the use of either Δy or Δf(x) to represent the numerator of the first term? If so then one or two more terms at the beginning of the "manipulations" will be useful. (I suspect that y and Δy do not belong anywhere in this section, without explanation of what I am missing.) --P64 (talk) 00:25, 22 April 2011 (UTC)Reply

Well, Δy is better than Δf(x) which is more awkward. Tkuvho (talk) 07:58, 22 April 2011 (UTC)Reply
I have experimented with spacing in order to help "Compare selected revisions" show my revisions more effectively. For that purpose only, select the 19442-byte version.
The experiment was only a modest success. The comparison tool works well only when there is no change in the number and order of paragraphs or in the use of whitespace that is not displayed (by my browser). --P64 (talk) 16:49, 22 April 2011 (UTC)Reply

Error term

Berkeley talked about lots of errors but not about the error term that I could see. The error term may be an infinitesimal but not necessarily if there isn't a derivative. I don't believe the business about Berkeley referring to the error term should be added to this article. Dmcq (talk) 14:42, 30 May 2011 (UTC)Reply

The "error term" is just the standard mathematical expression for whichever term clutters up the formula you want to obtain at the end. If you find references to "error" objectionable, we could go with different terminology: Berkeley criticized dropping the terms at the end of the calculation that were previously assumed nonzero." Tkuvho (talk) 14:46, 30 May 2011 (UTC)Reply
He did talk a bit about the remainder and discarding ab when multiplying (A+a) by (B+b), but the main comment about that was "Nor will it avail to say that ab is a Quantity exceeding small: Since we are told that in rebus mathematicis errores quam minimi non sunt contemnendi" (in mathematical matters, errors however small are not to be neglected). Dmcq (talk) 15:50, 30 May 2011 (UTC)Reply
It's the same criticism. The term dt cannot be dropped even though it is exceedingly small. Tkuvho (talk) 16:13, 30 May 2011 (UTC)Reply
I agree with Dmcq, the business about error terms does not belong in this article. I particularly object to using the phrase "ghost of departed quantities" in relation to error terms. Since at that particular part of the critique he definitely was not discussing error terms. Thenub314 (talk) 17:16, 30 May 2011 (UTC)Reply
If he was not criticizing the disappearance of the higher order terms, what was he criticizing? As I mentioned, I was using the expression "error term" in its modern mathematical sense, equivalent to higher order terms. Tkuvho (talk) 18:50, 30 May 2011 (UTC)Reply
In the ghosts of departed quantities part he was criticizing treating two values of the x axis as different when determining fluxions and then treating them the same in the result. He said either they're different or they're the same. The ghost of a departed quantity was the infinitesimals used in the calculation of the fluxions. They had to be different to make the division meaningful and yet they had to be the same to be the tangent at a point. Dmcq (talk) 19:31, 30 May 2011 (UTC)Reply
Berkeley could be referenced but it should be to the right place, the bit about the remainder and compensating errors and the quote from Newton is the bit which is relevant to taking the standard part. The evanescent bit is relevant to where Δy is divided by Δx and they become infinitesimals. Dmcq (talk) 19:37, 30 May 2011 (UTC)Reply
Whether Δx is an infinitesimal or a vanishing increment is a separate issue. Today we know that both approaches can be implemented mathematically to our satisfaction. In either case, we have to get rid of the Δx at the end. The Δx is the difference between x and z mentioned by Berkeley earlier in the paragraph. The paradox of x being at the same time equal to and not equal to z is the same paradox as Δx being zero and nonzero at the same time. Tkuvho (talk) 05:12, 31 May 2011 (UTC)Reply
The main thing is that Berkeley was identifying two different problems in these two sections, not one. Dmcq (talk) 07:46, 31 May 2011 (UTC)Reply
OK. What are the two problems? Incidentally, we are talking about a single section, namely section XXXV. Tkuvho (talk) 09:32, 31 May 2011 (UTC)Reply
XXXV only deals with Δx not being zero when determining fluxions but having to be zero if it applies to a single point, i.e. Δx being an evanescent increment. It does not deal with the remainder term. Dmcq (talk) 17:30, 31 May 2011 (UTC)Reply
Paragraph XXXV specifically talks about dropping the higher order term at the end of the calculation, in the following terms: "Divide now zzz − xxx by z − x and the Quotient will be zz + zx + xx: and, supposing that z and x are equal, this same Quotient will be 3xx which in that case is the Ordinate, which therefore may be thus obtained independently of Fluxions and Infinitesimals. But herein is a direct Fallacy: for in the first place, it is supposed that the Abscisses z and x are unequal, without such supposition no one step could have been made; and in the second place, it is supposed they are equal; which is a manifest Inconsistency." Note that he uses the terms "fallacy" and "inconsistency", which are quite a bit stronger than paradox. Tkuvho (talk) 17:53, 31 May 2011 (UTC)Reply
And where does it refer to an error term or a remainder in this? What it refers to is the error of supposing they are different and then supposing they are the same. If he wanted to refer to an error term or remainder he would have and does so explicitly in other sections. Dmcq (talk) 20:32, 31 May 2011 (UTC)Reply
We can certainly avoid using the terms "error" or "remainder" if it bothers you. However, the mathematical point is identical: x and z are "the same" if and only if their difference is zero! It seems that we are quibbling about technicalities. Tkuvho (talk) 03:07, 3 June 2011 (UTC)Reply

Remainder term

edit

This thread is getting unwieldy, so I am starting a new one. I don't really follow what you are saying. Apparently Berkeley is making the same mathematical point here, which is what he sees as the logical inconsistency of the procedure used in the calculus, where delta-x=z-x is nonzero at the start of the calculation, but is set to zero at the end. What do you see as two different criticisms here, either philosophically or mathematically? It seems that we are splitting hairs here. Do you have a source that there are two separate criticisms of the logic of the dy/dx calculation? Tkuvho (talk) 03:42, 1 June 2011 (UTC)Reply

The criticism is the same in that he is complaining about starting with one hypothesis, getting a result and then changing the hypothesis. That does not mean that when he was referring to the ghosts of departed quantities he was referring to the error term or remainder in his discussion about compensating errors. If you look in Sherlock Holmes in Babylon: and other tales of mathematical history by Marlow Anderson and Robin Wilson, you'll find he discusses fluxions as proportional to nascent increments in one place, he discusses the derivation of the increment of a rectangle AB in another paragraph, and then he talks about the last and most fundamental of Berkeley's criticisms, about changing the hypothesis afterwards. There is not just one criticism, there are a few different ones. By the way, he also discusses that some mathematicians around then talked about infinitesimals as fluxions, which might explain a confusion I saw about whether the evanescent increments were fluxions; it is pretty clear Berkeley was referring to an actual velocity, which was Newton's ultimate ratio. Dmcq (talk) 20:44, 1 June 2011 (UTC)Reply
Anderson and Wilson are editors of this collection. Which essay are you referring to exactly and by whom? Both Jesseph and Sherry identify what they call the logical criticism and the metaphysical criticism. They are among the leading experts on Berkeley. They describe the logical criticism as dx being nonzero at the start and zero at the finish. Tkuvho (talk) 13:19, 2 June 2011 (UTC)Reply
I haven't read Jesseph in his entirety yet, but he definitely discusses other logical criticisms by Berkeley than just assuming something is non-zero and later assuming it is zero. For example, he starts by discussing Berkeley's logical criticism of the product rule, in which he says that Berkeley correctly points out that Newton is taking the wrong difference quotient. (The difference quotient Newton takes is akin to \frac{f(x + h/2) - f(x - h/2)}{h}.) There is more than just one logical criticism contained in The Analyst. So I would like to offer the following correction: "They describe the [a] logical criticism ..." Thenub314 (talk) 22:47, 2 June 2011 (UTC)Reply

every real number is standard

There is no need to mention "standard real number", as every real number is standard. In Nelson's approach, you have standard and non-standard real numbers. The notion of "non-standard real numbers" is a colloquial short-cut and is always in the plural. Tkuvho (talk) 03:04, 3 June 2011 (UTC)Reply

We have been through this before; I have given you examples of papers where people use phrases like "for a non-standard real number x". Taking it out further is just you enforcing your POV that it is a nonsensical statement. Thenub314 (talk) 03:59, 3 June 2011 (UTC)Reply
In Nelson's framework, you can have a nonstandard real number. This page is not working with Nelson's framework. I have already analyzed the references you provided and showed them to be unconvincing. Thus, Tao always uses the phrase in the plural, as I mentioned above. It may be appropriate on a blog page. Tkuvho (talk) 05:05, 3 June 2011 (UTC)Reply
I agree with Thenub314 on this issue. In Nelson's framework, we have non-standard real numbers. In other frameworks, "non-standard real number" is often shorthand for a member of the "non-standard real numbers" which is not a (standard) real number, which comes to the same thing. — Arthur Rubin (talk) 06:59, 3 June 2011 (UTC)Reply
The plural is used as an informal shorthand, as you say. I think the singular is rarely used. I suggest we follow a standard textbook such as Goldblatt, rather than informal shorthands on blog pages. Tkuvho (talk) 07:35, 3 June 2011 (UTC)Reply
A standard textbook on non-standard analysis? As I went to school before non-standard analysis became a subject for standard textbooks, I can't confirm textbooks, but the term is used in expository papers on non-standard analysis, such as in the American Mathematical Monthly. — Arthur Rubin (talk) 17:07, 3 June 2011 (UTC)Reply

recent unhelpful edits

Courant's comments are properly sourced and there is no reason to delete them. If necessary, secondary literature can be provided that makes note of Courant's criticism.

O'Donovan's claim that students may get the impression that f(x+e)=f(x) is unbelievable and reflects the fact that his piece was not properly refereed, as I already mentioned. To the extent that the background mathematical theories are strictly equivalent, I can't see what relative analysis can improve in this area.

Every real number is standard. There is no need therefore to say "standard real number".

I strongly encourage user Thenub to discuss any proposed changes at talk first. Tkuvho (talk) 05:03, 3 June 2011 (UTC)Reply

It is your opinion that every real number is standard. Other people, for emphasis, use phrases like "for a standard real number x". Again, this is you enforcing your POV. Please stop. Removing both O'Donovan's and Hrbacek's references because you feel they are unreliable seems absurd. (You're missing some primes in the statement you claim is unbelievable.) Now the acknowledgements section of the book claims the papers were peer reviewed. So I am assuming you're claiming the papers were not peer reviewed well enough?
Finally if there is ever a point where several editors request that I make changes to a talk page first I will, but while it is just you and me disagreeing I don't see that option as appropriate or equitable. Thenub314 (talk) 06:38, 3 June 2011 (UTC)Reply
It is well known that while most conference proceedings claim they are peer-reviewed, much of the time this is not done seriously. If you ask anyone who has been to a faculty meeting, she will tell you that publications in conference proceedings count far less than regular publications. I have seen a case where a promotion was denied because many of the candidate's papers were in conference proceedings. At any rate, I don't see how students would get the impression that f(x+e)=f(x) in Keisler's framework, any more than in Nelson's framework. Tkuvho (talk) 07:39, 3 June 2011 (UTC)Reply
OK, incremental it is. First, let me say not f, but f'! Think about it: it is the standard part sitting in front of the difference quotient. If x is standard (and f), then f'(x) will be a standard number. On the other hand, what is f'(x+e)? By definition it is
\[ f'(x+e) = \operatorname{st}\!\left( \frac{f(x+e+\Delta x) - f(x+e)}{\Delta x} \right) \]
for nonzero infinitesimal Δx, which will also be a standard number that will equal f'(x). So f'(x+e)=f'(x) in the ordinary sense, and it is not an infinitesimal change in the function f'. Yes, I know this argument is wrong, but he is claiming it is the kind of thing students get confused about. They need to know when they should/shouldn't be extending functions, applying transfer principles, etc. If a student is thinking about second derivatives it is easy to see how they might stumble onto this and get confused. So I believe this could be a pedagogical problem. Thenub314 (talk) 09:13, 3 June 2011 (UTC)Reply
As I think I mentioned elsewhere, the definition is supposed to define a real function f'. Then, by applying the extension principle described in detail in the previous chapter in Keisler's book, we extend f' to its extended domain. Tkuvho (talk) 12:53, 19 December 2011 (UTC)Reply
But the definition above, because it directly mentions "st", is not a candidate for the transfer/extension principle. Keisler does not, as far as I can tell, give any detail about which sentences can be transferred, just calling them "real statements", but things that include "st" certainly are not "real statements" (otherwise "x = st(x)" would transfer :) )
This is really the issue that Hrbacek is getting at. Keisler's definitions are only for standard inputs; Keisler is not interested so much in calculating things at non-standard inputs. On the other hand Hrbacek is saying that if the point is to replace the real line with the hyper-real line (in particular, if the physical line is actually a hyper-real line) then it does not make much sense to only compute derivatives at standard inputs. — Carl (CBM · talk) 14:51, 19 December 2011 (UTC)Reply
Let us try to clear up the mathematics. The definition above is only supposed to apply at standard points. Once the values of f' at standard points are calculated using this definition, we get a new real function. Let us denote it g(x)=f'(x) for clarity. The new real function g has a natural extension g*. Of course, the way to calculate g* at a non-standard point is not by applying the standard part definition; that can only produce a real value. These issues are discussed exhaustively in the previous chapter in Keisler. Now from Hrbacek's point of view, all this is very unnatural because he wants all numbers to be real; the fact that f' is only defined partially seems unsatisfactory to him. This, however, is a philosophical objection that has more to do with Hrbacek's foundational stance than with the merits of Keisler's book. I happen to think that the relevance of what Hrbacek wrote is marginal; at any rate, if we do include his comment, we should at the very least make sure to include enough information concerning the definitions that are available in Keisler, so as not to mislead the reader. Tkuvho (talk) 15:33, 19 December 2011 (UTC)Reply
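A small worked example of the two-step procedure just described (the choice f(x) = x³ is purely for concreteness): for real x and nonzero infinitesimal Δx,
\[ g(x) = f'(x) = \operatorname{st}\!\Bigl( \frac{(x + \Delta x)^3 - x^3}{\Delta x} \Bigr) = \operatorname{st}\bigl( 3x^2 + 3x\,\Delta x + \Delta x^2 \bigr) = 3x^2 , \]
and the natural extension then gives, at a non-standard point x + ε,
\[ {}^*\!g(x + \varepsilon) = 3(x + \varepsilon)^2 = 3x^2 + 6x\varepsilon + 3\varepsilon^2 , \]
which is infinitely close to 3x² but not equal to it; applying the standard-part formula directly at x + ε would instead return the real value 3x².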
Could you give a specific reference to the page where Keisler describes how to compute a derivative at a non-standard value of the input? The definition on page 45 does not do this. I only have access to the version of the book on Keisler's web site. — Carl (CBM · talk) 15:40, 19 December 2011 (UTC)Reply
Carl, I emphasized the point that he does not provide such a definition on page 45. The "extension principle" to the effect that every real function has a hyperreal extension is part of the foundational package assumed given in the hyperreal approach, just as a coherent system of the real numbers is assumed in the calculus (first-year calculus proves the intermediate value theorem, but does not construct the number system where this theorem is valid). Applying the extension principle to the function g=f', we get its values at non-standard points. Tkuvho (talk) 15:44, 19 December 2011 (UTC)Reply
But how do you compute them? That is Hrbacek's criticism: that this definition does not actually give any information about how to compute the derivative. That is exactly the point that Hrbacek is making, so just pointing out the definition on page 45 doesn't serve as any sort of rebuttal to Hrbacek's point. What might serve that role would be a statement by Keisler about why he is not interested in computing derivatives at non-standard points. Hrbacek actually devotes more space to continuity, so it might be more useful to look at Keisler's definition of that on page 125, which also only works for standard points c. — Carl (CBM · talk) 15:55, 19 December 2011 (UTC)Reply
Good point; I corrected the page number. "a statement by Keisler about why he is not interested in computing derivatives at non-standard points" is not something I have available right now, but it seems obvious that the main objects of interest in the calculus are real inputs, outputs, derivatives, and integrals. Does this really need to be sourced? We could include a self-evident comment to the effect that in Keisler's approach, in calculus one is primarily interested in real objects and therefore values at nonstandard points are of secondary interest. Note that this arises even before one gets to derivatives and integrals: the value of the function f itself at a nonstandard point is not really something you can write down explicitly in every case. This is not going to be any different whether you are Keisler, Nelson, or Hrbacek. Tkuvho (talk) 16:12, 19 December 2011 (UTC)Reply
(←) The interest in only standard inputs is exactly the issue Hrbacek is getting at. I don't think we can get by with "self-evident", particularly given how assertive you have been about marking the material on Hrbacek's comments as "dubious". If the idea is to have some sort of response from Keisler to Hrbacek's comments, the response needs to be as well sourced as Hrbacek's paper.
If the physical line actually has infinitesimals (and Keisler says in the first chapter that the hyperreal line is a reasonable model for the physical line) then it makes sense to worry about calculus at non-standard values. For example if we use calculus for Newtonian mechanics then we want a velocity at all points in time, not just standard ones. For functions such as x² we can certainly compute the value at any point just using the field operations, but as far as I can tell Keisler gives no method for computing the derivative at x + ε for ε infinitesimal. This is exactly what Hrbacek is criticizing, so if Keisler does have a justification that would be worth including. But we can't include as a rebuttal the very definitions that Hrbacek is criticizing. — Carl (CBM · talk) 16:23, 19 December 2011 (UTC)Reply
In principle I agree with what you are saying, but my objection all along has been that the current comment on Hrbacek ("On the other hand, Hrbacek writes that the definitions of continuity, derivative, and integration in non-standard analysis implicitly must be grounded in the ε-δ method in order to cover all values of the input, including nonstandard values. Thus, Hrbacek argues, the hope that non-standard calculus could be done without ε-δ methods can not be realized in full") does not make it that clear that the definition at standard points is available. It only alludes to this negatively. I don't view the page numbers as a rebuttal but rather as a clarification. Tkuvho (talk) 16:28, 19 December 2011 (UTC)Reply
Concerning Newtonian calculus: I think applications in physics are the best proof that we don't really care about non-standard points, certainly at the level of classical physics. Keisler merely points out on page 25 that the hyperreals are a useful model of a line in physical space. Both the hyperreal line and the real line are certain idealisations, and as Keisler points out "we have no way of knowing what the line in physical space really looks like". Perhaps an argument could be made that in the case of the Dirac delta function, one might be interested in values at nonstandard points; but making such a claim for Newtonian mechanics seems far-fetched. Hrbacek has a stronger argument on purely mathematical grounds than with regard to physical applications. At any rate, I am not denying the validity of his viewpoint, merely pointing out that the reader should be aware of the fact that its relevance to this page is limited. Tkuvho (talk) 16:49, 19 December 2011 (UTC)Reply
How is it that the relevance is limited? The comments are precisely about developing calculus using non-standard analysis, which I believe is the same as the subject of this page. Rather than giving a list of page numbers from some specific text, I think it might be better to phrase the comments about Hrbacek so they make clear the non-standard definition is valid at standard points. Perhaps:
On the other hand, Hrbacek writes that it is a basic and useful fact that continuity, the derivative, and the integral are defined at both standard and non-standard points. But any attempt to give a definition of these notions which is valid for both standard and non-standard inputs must be grounded in the ε-δ method. Thus, Hrbacek argues, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full.
What do you think? Thenub314 (talk) 17:39, 19 December 2011 (UTC)Reply
This seems like a step in the right direction. Try shorter sentences, and include a clause to the effect that "while Keisler does provide definitions of continuity, derivative, and integral at standard points using infinitesimals to the exclusion of epsilons, etc." Tkuvho (talk) 17:42, 19 December 2011 (UTC)Reply
Well this page is not about Keisler's book, there are several books on non-standard calculus and Hrbacek's article mentions a few of them. It seems out of place to discuss Keisler specifically in this context. Thenub314 (talk) 17:46, 19 December 2011 (UTC)Reply
Great! All of those books provide definitions at standard points without epsilons. Tkuvho (talk) 17:48, 19 December 2011 (UTC)Reply
This is exactly the issue Hrbacek is criticizing: giving definitions only for standard points. "Real objects" is not so good as a term, because if the physical line is hyperreal then infinitesimals are also "real objects" that are just as valid as non-infinitesimals as subjects of calculus. This seems to be at the root of Hrbacek's complaint: if infinitesimals are also real objects, what is the purpose of only studying the standard ones? Does Keisler say anything about this in his book? — Carl (CBM · talk) 18:29, 19 December 2011 (UTC)Reply
So this conversation seems to have stalled. Currently we do not mention Hrbacek's criticism anywhere within the article; are there objections to including it? Thenub314 (talk) 20:00, 28 December 2011 (UTC)Reply

*f vs f*

Looking at the non-standard analysis article, we are tending to put the stars on different sides of the function. Are there any feelings that we should standardize, and if so, to which? My personal feeling is that consistency is good if there is consensus for it, but I have no feelings one way or the other about which way is preferred. Thenub314 (talk) 17:30, 7 June 2011 (UTC)Reply

original research

What is original, in Thenub's opinion, about the uniform continuity section? Tkuvho (talk) 18:56, 6 December 2011 (UTC)Reply

Sorry for not replying earlier; I just noticed this comment. For example the third and fourth paragraphs, followed by the examples that are claimed to support the fourth paragraph. It is claimed the examples have something to do with the number of variables, but I do not see how. Thenub314 (talk) 21:34, 18 December 2011 (UTC)Reply
I am not sure if your question is mathematical or stylistic, but let me try this: the traditional definition of uniform continuity requires a pair of variables ranging through the domain of the function. Meanwhile, the hyperreal definition exploits a single variable, but one ranging through a larger domain, namely the natural extension of the domain of the original real function. Does that help? Tkuvho (talk) 12:51, 19 December 2011 (UTC)Reply
I did not have a question, but I will say that the above doesn't really help. In every formulation I have seen you need to say, in one way or another, that whenever x ≈ y then *f(x) ≈ *f(y). This seems to be a pair of variables ranging over the extended domain. Thenub314 (talk) 17:44, 19 December 2011 (UTC)Reply
"microcontinuity at every point of the extended domain" only uses one. You can phrase it differently but then you lose the impact. Note that the epsilontic definition cannot be phrased in terms of a single variable, unless you resort to second order logic. Tkuvho (talk) 17:47, 19 December 2011 (UTC)Reply
Well if we can reference the proof of the fact that this cannot be done in the real numbers without going to a second order system, then perhaps it is worth including. Thenub314 (talk) 23:06, 19 December 2011 (UTC)Reply
I am pretty sure it is in one of the recent articles. But the real issue is not whether it can or cannot be done; the situation on the ground is that it is never done. I think this can be taken for granted; any textbook will give you a definition exploiting a pair of variables, and many will elaborate that unfortunately this notion cannot be reduced to limits as ordinary continuity can. Tkuvho (talk) 10:48, 20 December 2011 (UTC)Reply
I disagree. I am not aware of any reference that suggests that calculus should be done strictly within first order logic; otherwise statements like "there exists a function whose derivative is not continuous" become quite difficult. So the relevance of the number of variables vs first/second order logic seems to be our argument, rather than something that is usually discussed. If it can be sourced it should be; if not, it should be taken out. Thenub314 (talk) 15:06, 20 December 2011 (UTC)Reply
I came across at least one reference that mentions this distinction between first order and second order logic is not relevant because the notion of a function already requires second order logic. Rather than add this reference, it seems sensible to me to simply remove the discussion here; any objections? Thenub314 (talk) 21:13, 28 December 2011 (UTC)Reply
Your definition of uniform continuity using sequences begs the question, since to define what it means for a sequence to converge you will need epsilon-delta anyway. Your comment about functions is a misconception. "second-order" in this case refers to quantification over sets. A first-order formula involving a specific function is still a first-order formula. Tkuvho (talk) 08:45, 29 December 2011 (UTC)Reply
The statement that you seem to dispute: "the hyperreal definition is stated in terms of a single variable ranging through the domain, whereas the standard (ε,δ)-definition is formulated in terms of a pair of variables" is in fact incontestable since it specifically refers to the epsilon, delta definition, not to a sequential definition. Tkuvho (talk) 08:48, 29 December 2011 (UTC)Reply
I think you misunderstand me completely. It is the first part of the sentence I object to as not being meaningful. Here are some reasons: I spent some time checking references; none of them give a definition of uniform continuity in terms of "one variable". These references include:
  • Cutland "Nonstandard Analysis and its Applications"
  • Keisler "Foundations of Infinitesimal Caculus"
  • Pinto "Infinitesimal Methods for Mathematical Analysis"
  • Robert "Nonstandard Analysis"
  • Goldblatt "Lectures on Hyperreals"
  • Gordon, Kusraev, Kutateladze "Infinitesimal analysis"
  • Vakil "Real analysis through modern infinitesimals"
  • Väth "Non-standard Analysis"
So the claim that the definition of continuity in non-standard calculus is stated in terms of a single variable is tenuous at best. The only reference I found even approximating the micro-continuity version given above is Gordon et al., who state it as part of a theorem and, in the same sentence, explain what this means in terms of an expression using two variables.
Now, a second aspect to my objection is that even if I were to grant that the statement was factual and meaningful, it still seems to be synthesis. Wikipedia is the only place I am familiar with that makes this claim. As far as I am aware there is no source which compares the number of variables used in the two definitions.
As a separate topic, the discussion of first vs. second order logic is a second place I feel this section could be improved. You're correct that I made a mistake, but to be more precise he says: "In this sense the logical complexity of the [sequential definition of a limit] is higher than that of the [delta-epsilon definition]. This fact is irrelevant in analysis because the notion of function is also of 2nd order." I suggest that unless we have a reference that says the order is relevant, we simply remove the discussion here rather than citing this reference, as it will all be getting a bit off topic. Thenub314 (talk) 23:58, 30 December 2011 (UTC)Reply
In addition to the Gordon reference which is certainly fine and sufficient, the one-variable definition is discussed in detail in the recent article on Cauchy in Perspectives on Science. Feel free to add a footnote. Tkuvho (talk) 21:01, 31 December 2011 (UTC)Reply
The Gordon reference doesn't support the statement "the hyperreal definition is stated in terms of a single variable ranging through the domain"; he doesn't state what you're suggesting as a definition, and in the statement of the theorem he explicitly uses two variables. Which Perspectives paper are you referring to? If it is Cauchy's Continuum, it doesn't discuss anything related to the hyperreal numbers that I see. Perhaps I missed something in a footnote, but if that were the case you probably wouldn't have said "detailed"; could you be more specific? Thenub314 (talk) 23:13, 31 December 2011 (UTC)Reply
Perhaps you should switch to a different browser. I just did a search on the article and found at least 7 occurrences of "hyperreal". Tkuvho (talk) 08:08, 1 January 2012 (UTC)Reply
You're correct; I didn't continue on to read the appendix, which does mention it several times. On the other hand, if my browser search is to be believed, he doesn't mention uniform continuity in the appendix at all. I still do not see the detailed discussion about the definition of uniform continuity in terms of one variable in Non-standard calculus that you claim is in this paper. Thenub314 (talk) 17:08, 1 January 2012 (UTC)Reply
Looking over the recent literature, I see that an even better reference is Borovik et al., section 2.4. You'll find everything you need there. Tkuvho (talk) 17:22, 1 January 2012 (UTC)Reply
Again this is about what Cauchy was doing, not about what is done in non-standard calculus. Thenub314 (talk) 17:56, 1 January 2012 (UTC)Reply
My understanding is that the discussion here applies generally to a B-continuum. That term includes the hyperreals. The difference between an A-continuum and a B-continuum is explained in the appendix. Tkuvho (talk) 18:01, 1 January 2012 (UTC)Reply

Last Paragraph in Motivation Section

I have issues with the new last paragraph in the motivation section. It is hard not to see this as a direct response to edits at Elementary Calculus and (ε, δ)-definition of limit, in which I included a reference which says something to the effect that "the hope that non-standard calculus could be done without ε-δ methods can not be realized in full." Now I understand Tkuvho disputes my reading of the resource, but adding unsourced statements to the contrary only serves to escalate tensions. Thenub314 (talk) 21:47, 18 December 2011 (UTC)Reply

I tried to source it in the book. Tkuvho (talk) 12:49, 19 December 2011 (UTC)Reply

madison and stroyan

The following insightful comment by Madison and Keith Stroyan was recently deleted from the page:

Infinitesimals provide a method for calculating limits ... (Epsilon and delta methods require the answer in advance).

Tkuvho (talk) 13:58, 2 January 2012 (UTC)Reply
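To illustrate the point of the quotation (a routine example, f(x) = x², chosen only for concreteness): for nonzero infinitesimal Δx,
\[ \frac{(x + \Delta x)^2 - x^2}{\Delta x} = 2x + \Delta x , \qquad \operatorname{st}(2x + \Delta x) = 2x , \]
so the value of the limit, 2x, is produced by the computation itself, whereas an ε-δ verification starts from the candidate value 2x and confirms it.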