Talk:Ridders' method

More sources?

I'd be very interested to learn about more sources for this method. It's not in any of the other numerical analysis books I looked in (that's why I weakened the claim that it is as good as Brent's method by attributing it explicitly to the Numerical Recipes people). Additionally, it seems that the method in Ridders' paper is not the same as the method in Numerical Recipes: the two fit the unknown function to different functional forms. -- Jitse Niesen (talk) 12:27, 25 April 2008 (UTC)

I found this: Wolfram MathWorld, which lists these additional references:
  • Ostrowski, A. M. Ch. 12 in Solutions of Equations and Systems of Equations, 2nd ed. New York: Academic Press, 1966.
  • Ralston, A. and Rabinowitz, P. §8.3 in A First Course in Numerical Analysis, 2nd ed. New York: McGraw-Hill, 1978.
There are also web pages about software packages which provide mathematical information (but these might be based on Numerical Recipes?).
A Google Scholar search gives 19 hits for "Ridder's method", including for example Morsink et al., whose Google Scholar snippet says "Another improvement is the use of Ridder's method (see, eg, Press et. al. 1992) for locating the exact point along a sequence of ...", suggesting that at least one scientist has used the method as described in Numerical Recipes.
Coppertwig (talk) 12:00, 5 May 2008 (UTC)
You're absolutely right, Jitse! The version in Numerical Recipes (NR) seems to me to be just plain wrong. It claims to be Ridders' method and cites the Ridders paper, but apparently describes a totally different theory. If my calculations are right, the NR updating formula gives a number which is not only different from Ridders' hyperbolic or exponential formula, but which is also sometimes unreasonable and not very useful as an approximation, and which can fall outside the interval bracketed by the previous values, which NR claims cannot happen. There's a 3rd edition of NR that came out in 2007; it would be interesting to know whether they've updated it.
So, what should we do? Rewrite this article based on the original article by Ridders? I did some web searches but didn't find anything pointing out this error. Coppertwig (talk) 21:58, 26 June 2008 (UTC)
OK, I've rewritten the article based on Ridders' article. I'm not sure whether that's all that needs to be done. Coppertwig (talk) 02:11, 28 June 2008 (UTC)

The discrepancy arose because Jitse Niesen added a reference to the wrong paper by Ridders. I've changed it to the correct one, i.e. the one referenced by Numerical Recipes, which is on pages 979-980, not 669-670. I'll revert the article to the version that described that algorithm, i.e. the same algorithm as in Numerical Recipes. Qwfp (talk) 14:32, 29 October 2011 (UTC)

Current method incorrect?

The currently given method,

x_4 = x_3 + (x_3 - x_1)\,\frac{\operatorname{sign}[f(x_1) - f(x_2)]\, f(x_3)}{\sqrt{f(x_3)^2 - f(x_1)\, f(x_2)}}

seems incorrect to me. I had trouble using the above expression in my Fortran implementation, with x4 frequently falling outside the interval [x1, x2]. Obtaining a root approximation within these bounds and closer to the root than the midpoint x3 is essential to achieving a robust improvement over the "brute force" bisection method. Hence I compared it to Ridders' original work and found that he gives two subtly different expressions (using here the same notation as in the wiki article, where x1 and x2 bracket the root and x3 = (x1 + x2)/2 is their midpoint):

x_4 = x_3 + (x_3 - x_1)\,\frac{\operatorname{sign}[f(x_1)]\, f(x_3)}{\sqrt{f(x_3)^2 - f(x_1)\, f(x_2)}}

or, without the sign function:

x_4 = x_3 + (x_3 - x_1)\,\frac{f(x_3)/f(x_1)}{\sqrt{\left[f(x_3)/f(x_1)\right]^2 - f(x_2)/f(x_1)}}

so it seems the Numerical Recipes version is indeed incorrect. Using either of the above, one no longer obtains any x4 outside [x1, x2]. Of course, one then has to determine on which of the three intervals between the old bounds the root is located and set the bounds for the next iteration accordingly. 86.95.197.9 (talk) 00:41, 19 February 2012 (UTC)
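
For what it's worth, here is a minimal sketch of a single update step using the sign-function form quoted above, in Python rather than Fortran; the function name ridders_step and the test function cos(x) - x are illustrative assumptions, not taken from Ridders or Numerical Recipes. Under the bracketing assumption f(x1)·f(x2) < 0, the factor |f(x3)|/sqrt(f(x3)^2 - f(x1) f(x2)) is strictly less than 1, so the step is shorter than half the bracket and x4 cannot leave (x1, x2); if an implementation does produce values outside the interval, the bracketing invariant has probably been lost somewhere.

```python
import math

def ridders_step(f, x1, x2):
    """One Ridders update, assuming f(x1) and f(x2) bracket a root (illustrative sketch)."""
    f1, f2 = f(x1), f(x2)
    assert f1 * f2 < 0, "x1 and x2 must bracket the root"
    x3 = 0.5 * (x1 + x2)                  # midpoint, i.e. the bisection estimate
    f3 = f(x3)
    s = math.sqrt(f3 * f3 - f1 * f2)      # > |f(x3)| because f1 * f2 < 0
    x4 = x3 + (x3 - x1) * math.copysign(1.0, f1 - f2) * f3 / s
    return x3, x4

# Example with an arbitrary test function: the root of cos(x) - x lies in (0, 1).
x3, x4 = ridders_step(lambda x: math.cos(x) - x, 0.0, 1.0)
print(x3, x4)   # x4 stays inside (0, 1) and is already close to the root ~0.7391
```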

The first two expressions you give should be equivalent, given that x1 and x2 are required to bracket the root, i.e. if f(x1) < 0 then f(x2) > 0 and vice versa. This means that sign[f(x1) – f(x2)] must be the same as sign[f(x1)]. (So why Numerical Recipes prefers the former, slightly more complex expression to the latter is beyond me — possibly it allows them to trace who has copied them?) Qwfp (talk) 14:15, 19 February 2012 (UTC)
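
A quick numerical spot-check of this equivalence, using arbitrary illustrative values rather than anything from the sources: whenever f(x1) and f(x2) have opposite signs, sign[f(x1) - f(x2)] agrees with sign[f(x1)], so the two forms of the update factor coincide.

```python
import itertools
import math

# Whenever f1 and f2 have opposite signs, sign(f1 - f2) equals sign(f1).
sign = lambda v: math.copysign(1.0, v)
for f1, f2 in itertools.product([-2.0, -0.5, 0.5, 2.0], repeat=2):
    if f1 * f2 < 0:
        assert sign(f1 - f2) == sign(f1)
print("sign[f(x1) - f(x2)] == sign[f(x1)] for every bracketing pair tested")
```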
I suppose that is true, but I also see no merit in the superfluous arithmetic. In fact, upon looking more closely, the Numerical Recipes code also includes a check that sqrt(f(x3)^2 − f(x1) f(x2)) is nonzero, which is always true (unless of course x3 is exactly the root ;-). It also relies on a "highly unlikely value" to simplify logic, which I think is a bit dirty and can easily be avoided. And it uses a for-loop with an arbitrary limit (60), which is not guaranteed to be sufficient, since the number of iterations needed depends on the initial interval length and the desired accuracy. Apart from those peculiarities it appears to be a valid implementation. However, both webpages Mathematical background and Ridders Zero Finder linked to above are in error, because they only consider x4 as a new interval limit. This is ill-advised, since Ridders' method relies on providing, on average, narrower intervals than the bisection method. Ignoring x3 is dangerous because it deteriorates performance, but this may go undetected without testing, especially when a high iteration limit is set. 86.95.197.9 (talk) 00:09, 20 February 2012 (UTC)
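
Here is a minimal sketch of a complete iteration along the lines suggested above, again in Python and illustrative only (it is not the Numerical Recipes routine): the bracket update considers x3 as well as x4 as a new endpoint, and since the bracket then at least halves every iteration, the iteration cap can be derived from the initial interval length and the tolerance instead of being fixed at 60. The function name ridders, the tolerance default, and the test function are assumptions made for the example.

```python
import math

def ridders(f, a, b, tol=1e-12):
    """Root of f on [a, b] by Ridders' method -- an illustrative sketch, not the NR routine.

    The bracket update considers the midpoint x3 as well as x4, so the bracket
    at least halves every iteration; the iteration cap is then derived from the
    initial interval length and the tolerance instead of being fixed at 60.
    """
    x1, x2 = (a, b) if a < b else (b, a)
    f1, f2 = f(x1), f(x2)
    if f1 == 0.0:
        return x1
    if f2 == 0.0:
        return x2
    assert f1 * f2 < 0, "the endpoints must bracket a root"
    max_iter = max(1, math.ceil(math.log2((x2 - x1) / tol)) + 1)
    for _ in range(max_iter):
        x3 = 0.5 * (x1 + x2)
        f3 = f(x3)
        if f3 == 0.0:
            return x3
        s = math.sqrt(f3 * f3 - f1 * f2)          # > 0 because f1 * f2 < 0
        x4 = x3 + (x3 - x1) * math.copysign(1.0, f1 - f2) * f3 / s
        f4 = f(x4)
        if f4 == 0.0:
            return x4
        # Keep the tightest of the three sub-intervals cut out by x3 and x4
        # that still shows a sign change; dropping x3 here is exactly the
        # performance loss criticised above.
        lo, hi = (x3, x4) if x3 < x4 else (x4, x3)
        flo, fhi = (f3, f4) if x3 < x4 else (f4, f3)
        if flo * fhi < 0.0:
            x1, f1, x2, f2 = lo, flo, hi, fhi
        elif f1 * flo < 0.0:
            x2, f2 = lo, flo
        else:
            x1, f1 = hi, fhi
        if x2 - x1 <= tol:
            break
    return 0.5 * (x1 + x2)

# Example with an arbitrary test function.
print(ridders(lambda x: math.cos(x) - x, 0.0, 1.0))   # ~0.7390851332151607
```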
I just implemented this independently, based on the wiki formula, and recovered the same answer as Ridders presents in his first example, for what it's worth.

Order of convergence

Ridders' original paper is a little incomplete on the calculation of the order of convergence. Taking his result one step further, I find p^3 = p^2 + p + 1, from which the only real solution is p ≈ 1.83928675521416. So it is not quite second order, but exactly the same order of convergence as Muller's method, though the implementation seems much simpler. Does anybody know which is faster? — Preceding unsigned comment added by 68.232.116.79 (talk) 17:47, 17 June 2013 (UTC)
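
A quick numerical check of the quoted value, assuming the equation meant above is p^3 = p^2 + p + 1, i.e. the cubic whose real root also gives Muller's method its order; the snippet below is only an illustration, with Newton's iteration used purely to solve that cubic.

```python
# Assuming the characteristic equation is p**3 = p**2 + p + 1, a few Newton
# steps from a rough starting guess recover the value quoted above.
p = 2.0
for _ in range(50):
    p -= (p**3 - p**2 - p - 1) / (3*p**2 - 2*p - 1)
print(p)   # 1.8392867552141612
```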