Talk:Parallel algorithm
This article is rated Start-class on Wikipedia's content assessment scale.
Parallel programming
I typed in '/wiki/Parallel_programming' and got 'Parallel computing', but this is the page I wanted. —Preceding unsigned comment added by 72.148.222.140 (talk • contribs) 05:30, 14 March 2006
Embarrassingly parallel π algorithms
There are entirely ("embarrassingly") parallel algorithms for computing digits of π. For example, see D. H. Bailey, P. B. Borwein and S. Plouffe, "On The Rapid Computation of Various Polylogarithmic Constants", manuscript, 1996, which is [http://citeseer.ist.psu.edu/bailey96rapid.html available via Citeseer]. Also, just because an algorithm appears to have linear data dependencies doesn't mean that it can't be effectively parallelized. For details, see G. Blelloch, "Vector Models for Data-Parallel Computing." —Preceding unsigned comment added by Hilbertastronaut (talk • contribs) 16:53, 12 October 2007 (UTC)
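To make the independence concrete, here is a minimal Python sketch of BBP-style hexadecimal digit extraction (the function names and the bare Pool.map dispatch are my own illustration, not the paper's): every digit position is computed from scratch, with no data flowing between positions, so the work is embarrassingly parallel.

from multiprocessing import Pool

def _series(j, n):
    # Fractional part of sum_{k>=0} 16^(n-k) / (8k + j); modular
    # exponentiation keeps the k <= n terms small.
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    # A short tail of k > n terms suffices at double precision.
    for k in range(n + 1, n + 20):
        s += 16.0 ** (n - k) / (8 * k + j)
    return s % 1.0

def bbp_hex_digit(n):
    # Hex digit of pi at 0-based position n after the point, computed
    # without knowing any earlier digit (Bailey-Borwein-Plouffe formula).
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

if __name__ == "__main__":
    with Pool() as pool:
        # Digit positions are mutually independent: just map over them.
        print("".join(pool.map(bbp_hex_digit, range(10))))  # 243f6a8885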
Parallel algorithms are valuable
I completely rewrote the "Parallel algorithms are valuable" paragraph because it seemed a bit confused. Here's what it had said:
- Parallel algorithms are valuable because it is faster to perform large computing tasks via a parallel algorithm than it is via a serial (non-parallel) algorithm, because of the way modern processors work. It is far more difficult to construct a computer with a single fast processor than one with many slow processors with the same throughput. There are also certain theoretical limits to the potential speed of serial processors. Every parallel algorithm has a serial part and so parallel algorithms have a saturation point (see Amdahl's law). After that point adding more processors does not yield any more throughput but only increases the overhead and cost.
I'd already rewritten the first sentence to be more specific:
- Parallel algorithms are valuable because of substantial improvements in multiprocessing systems and the rise of multi-core processors.
Then I realized that the rest of the material had two serious problems. First, without the appropriate context, the second sentence is exactly backwards. It's typically more difficult to build a multiprocessor system than a uniprocessor system with a given throughput – unless, of course, the uniprocessor itself can't be built. Second, the rest of the paragraph conflated several ideas:
- Theoretical limits to "serial" processor (actually uniprocessor) speed
- Amdahl's law (relationship between serial & parallel components and potential speedup)
- An undefined "saturation point" (presumably the maximum speedup, but no such term is in the article)
- Discussion of overhead (something not included in Amdahl's law, but part of the Karp-Flatt metric)
The result is that it starts out talking about the limitations of serial processing, and winds up using a theory of the limitations of parallel processing to make its point.
After I'd pointlessly tried to straighten out the discussions of parallel processing limitations, I finally just cleaned up the original point: parallel algorithms are valuable. But here's the parallel-problems material I wrote:
- However, parallel algorithms have similar limitations because they always contain some sequential components. Furthermore, overhead in parallel decomposition and collection invokes the law of diminishing returns, putting practical limits on the degree of parallelism for a given algorithm and implementation environment. Amdahl's law, Gustafson's law, and the Karp-Flatt metric attempt to describe the relationships between the components of parallel algorithms and their potential and/or realized benefits.
I'd made it a footnote to the end of the paragraph, because it wasn't immediately obvious to me how to make it flow well with the rest of the article. If someone finds it useful (and accurate, I hope!), feel free to add it back in. ~ Jeff Q (talk) 12:26, 29 August 2009 (UTC)
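For reference, the standard statements of the three measures mentioned above, in the usual notation (a plain restatement, not wording from the article): N is the processor count, p the parallel fraction of the work, s = 1 - p the serial fraction, and ψ the measured speedup on N processors.

S_{\text{Amdahl}}(N) = \frac{1}{s + p/N}

S_{\text{Gustafson}}(N) = N - s\,(N - 1)

e_{\text{Karp-Flatt}} = \frac{1/\psi - 1/N}{1 - 1/N}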
inherently serial?
I think that is meant to be inherently sequential. —Preceding unsigned comment added by 119.224.35.68 (talk) 22:40, 31 December 2009 (UTC)
The sieve of Eratosthenes parallelisation
This article states that the sieve of Eratosthenes is inherently serial. That is not true. The sieve of Eratosthenes consists of two loops:
- the outer loop iterates over already found prime numbers - this loop is serial, because it uses knowledge of composite numbers from previous iterations to save work;
- the inner loop iterates over the whole prescribed number set to mark multiples of a given prime number (from the outer loop) as composite - this can be done in parallel (a sketch follows below). — Preceding unsigned comment added by 83.25.127.16 (talk) 08:24, 18 April 2015 (UTC)
- Indeed, removed it from the page. A parallel version can be found here. QVVERTYVS (hm?) 20:42, 15 May 2015 (UTC)
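For the record, a minimal Python sketch of the decomposition described above (parallel_sieve and the chunking scheme are my own illustration, not the linked version; a real shared-memory implementation would let workers mark a shared bit array directly instead of returning indices):

from math import isqrt
from multiprocessing import Pool

def _mark(args):
    # Multiples of p inside the half-open range [lo, hi).
    p, lo, hi = args
    start = max(p * p, ((lo + p - 1) // p) * p)
    return list(range(start, hi, p))

def parallel_sieve(n, workers=4):
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    with Pool(workers) as pool:
        # Outer loop: sequential, since each iteration relies on the
        # composites already crossed off to skip non-primes.
        for p in range(2, isqrt(n) + 1):
            if not is_prime[p]:
                continue
            # Inner loop: the range [p*p, n] splits into independent
            # chunks whose multiples of p are found in parallel.
            step = (n - p * p) // workers + 1
            chunks = [(p, p * p + i * step,
                       min(p * p + (i + 1) * step, n + 1))
                      for i in range(workers)]
            for marked in pool.map(_mark, chunks):
                for m in marked:
                    is_prime[m] = 0
    return [i for i in range(2, n + 1) if is_prime[i]]

if __name__ == "__main__":
    print(parallel_sieve(100))  # [2, 3, 5, 7, ..., 97]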
Confusing Lead
edit"In computer science, a parallel algorithm, as opposed to a traditional serial algorithm, is an algorithm which can do multiple operations in a given time.". Can't all algorithms can do multiple operations in a "given time"? Parallel algorithms are defined by allowing operations to be performed at the *same* time are they not? From the reference: "... it is usually necessary to design an algorithm that specifies multiple operations on each step, i.e., a parallel algorithm." Single step is not synonymous with "given time", I think "simultaneously"/"same time" would be more explicit Billwoo2011 (talk) 23:49, 22 January 2021 (UTC)