Talk:Shanks transformation

The "Generalized" Shanks Transformation is the Original

This article presents the "Shanks Transformation" as basically the Aitken delta-squared process, with a "generalized" one being an afterthought on the page.

However, the transformation Shanks defined in his original paper is what this article calls the "generalized" one. Shanks stated explicitly from the outset that it includes the Aitken delta-squared process as one particular instance, but also subsumes the entire Padé table and goes beyond it.

The literature typically uses the term "Shanks transformation" for the generalized version, often the "once-iterated, nth-order" version. In particular, the mth term of the nth-order Shanks transformation, applied to the Taylor series of an analytic function, gives the Padé approximant [m/n]. Wynn's epsilon algorithm was later shown to be an efficient way to compute this.
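For concreteness (notation varies a bit across the literature), the kth-order transformation of a sequence $A_n$ is usually written as a ratio of $(k+1)\times(k+1)$ determinants, with $\Delta A_m = A_{m+1} - A_m$:

$$
e_k(A_n) = \frac{\begin{vmatrix}
A_{n-k} & \cdots & A_{n-1} & A_n \\
\Delta A_{n-k} & \cdots & \Delta A_{n-1} & \Delta A_n \\
\vdots & & \vdots & \vdots \\
\Delta A_{n-1} & \cdots & \Delta A_{n+k-2} & \Delta A_{n+k-1}
\end{vmatrix}}{\begin{vmatrix}
1 & \cdots & 1 & 1 \\
\Delta A_{n-k} & \cdots & \Delta A_{n-1} & \Delta A_n \\
\vdots & & \vdots & \vdots \\
\Delta A_{n-1} & \cdots & \Delta A_{n+k-2} & \Delta A_{n+k-1}
\end{vmatrix}},
$$

which for k = 1 collapses to Aitken's delta-squared formula.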

This article really needs to be rewritten, as it presents the entire transformation as equivalent to the delta-squared process, merely applied conceptually to partial sums rather than to general sequences, which is not what it is. — Preceding unsigned comment added by Battaglia01 (talk • contribs) 15:01, 14 July 2019 (UTC)

Generalized Shanks transformation

The generalized Shanks transformation, which deals with several transients, still has to be included; see e.g. Bender & Orszag (1999), pp. 389–392. It is a true extension compared with Aitken's delta-squared process. -- Crowsnest (talk) 21:57, 17 February 2009 (UTC)
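For reference, and paraphrasing the Bender & Orszag setup rather than quoting it: the kth-order transform is constructed to be exact on sequences containing k geometric transients,

$$
A_n = A + \sum_{i=1}^{k} a_i q_i^n, \qquad q_i \text{ distinct}, \quad q_i \neq 1,
$$

for which $e_k(A_n) = A$ exactly; Aitken's delta-squared process is the single-transient case $k = 1$.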

I added a bit on this to the article. -- Crowsnest (talk) 22:53, 17 February 2009 (UTC)


I don't see any difference between the Aitken process and the Shanks transformation: they produce identical results numerically and use exactly the same formulas. Noting that the Shanks transform operates on partial sums of sequences while Aitken operates on sequences is nonsensical, because a partial sum is also a sequence. Witek —Preceding unsigned comment added by 91.213.255.7 (talk) 00:29, 11 December 2009 (UTC)
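For what it's worth, in the first-order case the two are literally the same expression; writing $A_n$ for the terms (partial sums or otherwise),

$$
e_1(A_n) = \frac{A_{n+1} A_{n-1} - A_n^2}{A_{n+1} - 2 A_n + A_{n-1}}
         = A_{n+1} - \frac{(A_{n+1} - A_n)^2}{A_{n+1} - 2 A_n + A_{n-1}},
$$

which is exactly Aitken's delta-squared formula. Any genuine difference only shows up in the higher-order transforms.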

I'm also rather curious as to why there's text saying there's a difference, since one can just as easily apply the Shanks transformation (including generalized versions) to sequences that are not partial sums. For example, power iteration diagonalization algorithms produce a sequence of vectors converging to the eigenvector of the largest eigenvalue, and if k+1 eigenvectors appear as components of the starting vector, doing the kth-order Shanks transformation on each element of the vectors gives the exact eigenvector of the largest eigenvalue. Lower-order ones help too, especially when there's an eigenvalue very close to the largest. There's no partial summation involved in it. Ndickson (talk) 02:50, 12 September 2010 (UTC)
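A minimal sketch of the first-order case of the idea above, in Python; the matrix, starting vector, and normalization are illustrative assumptions, and with this numerical normalization one sees accelerated convergence rather than an exact answer:

```python
# Illustrative sketch only: elementwise first-order Shanks (= Aitken delta-squared)
# applied to power-iteration iterates. Matrix, start vector and normalization are
# arbitrary choices for the example, not taken from the article or a reference.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])       # small symmetric test matrix
x = np.array([1.0, 1.0])         # start vector with components along both eigenvectors

iterates = []
for _ in range(12):
    x = A @ x
    x = x / x[0]                 # normalize so the first component stays 1
    iterates.append(x)

def shanks1(s):
    """First-order Shanks transform (identical to Aitken's delta-squared)."""
    s = np.asarray(s)
    num = s[2:] * s[:-2] - s[1:-1] ** 2
    den = s[2:] - 2.0 * s[1:-1] + s[:-2]
    return num / den

# Accelerate the second component of the iterates (the first is fixed at 1).
second = np.array([v[1] for v in iterates])
accelerated = shanks1(second)

# Reference value: the corresponding component ratio of the dominant eigenvector.
w, V = np.linalg.eigh(A)
v1 = V[:, np.argmax(w)]
exact = v1[1] / v1[0]

print("plain power iteration error :", abs(second[-1] - exact))
print("after first-order Shanks    :", abs(accelerated[-1] - exact))
```

The same elementwise transform extends to higher orders; the sketch is only meant to illustrate that no partial summation is involved.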
I fully agree with that. This is just Aitken's method applied to partial sums. Applying a well-known method to some special case should not warrant the introduction of a new name for it. It could be mentioned in the article on Aitken's method as something like 'Application of Aitken's delta-squared process to partial sums is sometimes also called the Shanks transformation'. The only thing that seems special here is the generalization to higher order. Ezander (talk) 13:43, 7 February 2011 (UTC)
See e.g. this reference on the differences. Probably the differences between Aitken's process and the (generalised) Shanks transformation are not described well in the article, but this can be improved. -- Crowsnest (talk) 13:59, 7 February 2011 (UTC)