This article is rated Stub-class on Wikipedia's content assessment scale.
Confusing paragraph
edit"According to Moore's law, each new technology generation doubles number of transistors. This increases their speed by 40%[citation needed]. On the other hand, Pollack's rule implies that microarchitecture advances improve the performance by another 40%[citation needed]."
The first sentence especially is very confusing. Is this meant to imply that the clock speed increases by 40%? Isn't Pollack's rule otherwise applied twice here? — Preceding unsigned comment added by 90.186.63.67 (talk) 01:28, 14 December 2018 (UTC)
History
Addressing the question about circularity from user 90.186.63.67, and providing some background. This background, unfortunately, has me as the primary source, although Fred and others may vouch for it. There may be historical posts and mentions in places like the original net.arch and comp.arch USENET groups, but I'm too lazy to go and find them right now.
I may be the source of "Pollack's Law". Fred Pollack was my first manager at Intel. I mentioned to him the following reasons why I believed it unreasonable to hope for asymptotically linear speedup:
In my pre-Intel research into ILP, I had encountered results like Tjaden and Flynn (?), who said that "performance increases as the square root of the number of branches looked ahead". This square-root factor was all over the place.
There was another interesting paper which argued that the maximum performance you could theoretically hope for from N processors packed into K-dimensional space is related to the radius of a K-dimensional hypersphere. Assume each processor has volume 1 (easiest to picture as a unit cube or unit sphere; packing details are not terribly relevant to O(...) magnitudes). Assume that any single computation must eventually collect information from all processors, possibly by transitive forwarding, and that such information propagates at a fixed speed, like the speed of light. Simplistically, performance then scales as N^((K-1)/K) in K dimensions, which becomes the square root in two dimensions.
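Spelling out that scaling argument as I read it (a minimal sketch; proportionality constants and packing details are glossed over):

```latex
% Sketch of the hypersphere scaling argument from the paragraph above.
% Assumes unit-volume processors and a fixed signal propagation speed.
\begin{align*}
  R &\propto N^{1/K}
    && \text{radius of a $K$-ball containing $N$ unit volumes} \\
  T &\propto R \propto N^{1/K}
    && \text{time to gather information from all processors} \\
  \text{Perf} &\propto \frac{N}{T} \propto N^{(K-1)/K}
    && \text{work completed per unit time} \\
  K = 2 &\;\Rightarrow\; \text{Perf} \propto \sqrt{N}
    && \text{the square-root case on a planar chip}
\end{align*}
```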
Fred put this hypothesis into a slide, and presented it several times within Intel. I believe that Shekhar Borkar presented this at a public forum, although I do not remember at this point whether it was at ASPLOS or ISSCC or something else.
Anyway, this leads to several answers to the question from 90.186.63.67:
Pollack's Law doesn't necessarily make any statement about clock speed.
E.g. if you increase from M to N processors, all at the same clock speed, Pollack's Law expects sqrt(N/M) speedup. This is independent of the shrinkage in device dimensions, unless the system with the larger physical distance between individual nodes is already constrained by light-speed limits.
E.g. if you have a fixed surface area (e.g. a chip) and you decrease device dimensions by a scaling factor S, e.g. S = sqrt(2) to double the number of devices per chip, then Pollack's Law expects a speedup of sqrt(2). I.e. the speedup is proportional to the 1D scale factor, not the 2D area scale factor, whereas the number of processors is proportional to the 2D factor (see the numeric sketch after this comment).
This is related to clock frequency in a synchronous architecture, but the statement or hypothesis also applies to asynchronous architectures. — Preceding unsigned comment added by A.Glew (talk • contribs) 19:24, 14 October 2020 (UTC)
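A minimal numeric sketch of the two examples above. The function name pollack_speedup is mine, purely for illustration; the rule itself is just the square-root scaling described in this thread:

```python
import math

def pollack_speedup(n_new, n_old=1.0):
    """Pollack's rule: single-thread performance scales roughly as the
    square root of the resources (transistors/area) devoted to a core."""
    return math.sqrt(n_new / n_old)

# Example 1: growing from M to N processors at a fixed clock speed.
M, N = 2, 8
print(f"{M} -> {N} processors: speedup ~ {pollack_speedup(N, M):.2f}x")
# sqrt(8/2) = 2.00x, independent of any device shrink.

# Example 2: fixed die area, linear dimensions shrunk by S = sqrt(2),
# so the device count doubles (area per device falls by S**2 = 2).
S = math.sqrt(2)
devices_factor = S ** 2  # 2x devices per chip
print(f"shrink by S={S:.3f}: speedup ~ {pollack_speedup(devices_factor):.2f}x")
# sqrt(2) ~ 1.41x: proportional to the 1D scale factor S,
# while the device count grows with the 2D factor S**2.
```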