Talk:Prim's algorithm

Usage in 3D printing

The algorithm is used within the 3D printing slicer Cura to generate tree supports. Could a note about this application (along, potentially, with others) be added? (Source of use in Cura: https://github.com/Ultimaker/CuraEngine/pull/655/commits/5004290e594332d23dcb94cc776144b5804b72e8 ) — Preceding unsigned comment added by Gsykesvoyage (talkcontribs) 14:33, 22 October 2020 (UTC)


Developed by Vojtech Jarnik?

Why is it stated that the algorithm was originally developed by Vojtech Jarnik? The cited paper actually contains a description of Boruvka's algorithm (initially all vertices are fragments, and at each step a minimum edge for each fragment is added to the MST, until only one fragment remains). Reminiscenza (talk) 19:14, 30 May 2015 (UTC)

In the cited paper, the relevant part is on page 60. Note that in step 1, he chooses any one vertex, and in step k, he chooses one vertex from the vertices 1-k and one vertex from (k+1)-n. The German translation explains that same algorithm. The PDF I checked is this one: http://dml.cz/bitstream/handle/10338.dmlcz/500726/Jarnik_01-0000-31_1.pdf Petr Hudeček (talk) 15:10, 11 June 2015 (UTC)

complexity by binary heap

Is "(V+E)log(E)" really correct? Thinking a heap H of edges which are directly connecting to the growing spanning tree. All edges can be added to and removed from the heap H. (Every edge can be adjacent to the growing spanning tree. This way edges can be added to the heap. One edge can be added to the spanning tree, or become loop-making one and get disqualified.This way they can be removed so that edges heap becomes {}.) I suspect that (V+E)log(E)-> E log(E) argument is missing something. I suspect (E+E) log(E) -> E log(E) is an exhaustive argument. --Timothy Redt (talk) 01:08, 22 July 2012 (UTC)Reply

C++ code for Prim's using adjacency matrix A

A[i][j] is the distance from node i to node j. Sentinels NONE and INF are used to avoid complex logic. The code is written in "computer olympiad style", using static allocation rather than STL containers or malloc'd memory. Jettlogic 15:35, 11 October 2006 (UTC)

 #include <algorithm>
 using namespace std;
 #define MAXN 5
 const int NONE = MAXN;  // sentinel vertex index
 const int INF = 1000;   // sentinel "no edge" / "not yet reached" weight
 // A[i][j] is the edge weight between nodes i and j (INF if there is no edge).
 // On return, parent[v] is v's parent in the MST (NONE for the root).
 void mst_prim(int N, int A[MAXN][MAXN], int parent[MAXN+1]) {
     bool intree[MAXN+1];
     int distance[MAXN+1];  // distance[v]: cheapest known edge from v to the tree
     fill(distance, distance+MAXN+1, INF);
     fill(parent, parent+MAXN+1, NONE);
     fill(intree, intree+MAXN+1, false);
     int u = 0;  // start the tree at vertex 0
     while(u != NONE) {
         intree[u] = true;
         // relax the edges out of u; next u is the closest not-in-tree vertex
         int nu = NONE;  // distance[NONE] stays INF, so any real candidate wins
         for(int v = 0; v < N; v++)
             if(!intree[v]) {
                 if(A[u][v] < distance[v]) {
                     distance[v] = A[u][v];
                     parent[v] = u;
                 }
                 if(distance[v] < distance[nu])
                     nu = v;
             }
         u = nu;  // becomes NONE once every vertex is in the tree
     }
 }
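
For illustration, here is a hypothetical driver that could be appended to the code above (the graph and its weights are made up): a triangle on vertices 0, 1, 2 with weights 1, 2, 3, whose MST consists of the two cheapest edges.

 int main() {
     int A[MAXN][MAXN];
     for (int i = 0; i < MAXN; i++)
         fill(A[i], A[i] + MAXN, INF);   // no edges yet
     A[0][1] = A[1][0] = 1;
     A[1][2] = A[2][1] = 2;
     A[0][2] = A[2][0] = 3;
     int parent[MAXN+1];
     mst_prim(3, A, parent);
     // expected: parent[0] == NONE (root), parent[1] == 0, parent[2] == 1
     return 0;
 }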

Colored lines in graph

Are there red and green lines on this graph? I am colourblind and cannot tell. A more appropriate choice of colours would be red and blue.

Ok, I made some new graphs:
  • The current image (no change)
  • Thicker and thinner lines
  • Dotted and dashed lines
  • Opacity changes
Which type would work best? Shen 16:44, 17 February 2006 (UTC)
I made some new images using thinner and thicker lines; it should be clearer now. Shen 22:06, 19 February 2006 (UTC)
I'd urge the use of colours vs. different sorts of lines here, since "graph colouring" is a very common term in the literature of computer science, and is a de facto standard way of denoting groups of edges or nodes, in discussion even more than pictorially. Explaining that the colouring is represented by dashes and whatnot would be convoluted. However using colours accessible to the colourblind is always a good idea. Also Shen, I don't think the current choice of colours is a good one (though I am not colour blind and therefore cannot tell if they are differentiable). See this article. DaveWF 07:37, 14 February 2007 (UTC)Reply

Suggestion of a better explanation

I propose to replace the item list at the beginning of the page with this:

  1. Choose a vertex of the graph G as the initial tree E
  2. Create a set of all edges in G connecting a vertex of E to a vertex outside of E
  3. Out of this set, choose an edge with the lowest weight and add it, with the respective vertex, to E
  4. Repeat 2-3 until all vertices of G are in E

E is the minimum spanning tree.


I agree with this one. The step "remove from the set an edge with minimum weight that connects a vertex in the tree with a vertex not in the tree" is imho not O(log E). --Pukeye 09:04, 28 August 2006 (UTC)Reply

It is if you use a heap.

Don't Forget!

I agree that a better pseudo-algorithm is needed here. I also think that you NEED to mention that an edge should only be used if it results in a cycle, which is obvious to some but really should be spelt out.

Don't you mean if it *doesn't* result in a cycle? - Solberg 06:57, 7 July 2006 (UTC)SolbergReply
He must've. You only add an edge if it's of minimum cost and doesn't cause a cycle. DaveWF 07:42, 14 February 2007 (UTC)Reply

Something isn't right

From Cormen et al., the pseudocode for this algorithm is: (Used only for discussion, do not use in article)

MST-PRIM(G, w, r)
01  for each u ∈ V [G]
02    do key[u] ← ∞
03      π[u] ← NIL
04  key[r] ← 0
05  Q ← V [G]
06  while Q ≠ Ø
07    do u ← EXTRACT-MIN(Q)
08      for each v ∈ Adj[u]
09        do if v ∈ Q and w(u, v) < key[v]
10          then π[v] ← u
11            key[v] ← w(u, v)

From this it appears that the explanation above (and in the article) is wrong; it also appears to me that the graphs are wrong.

The important difference: For each iteration of the while loop (lines 6-11), the minimum accessible vertex (and not edge) is used.

For this vertex u, each adjacent vertex v which hasn't been explored is considered, and if the edge crossing the cut from u to v is the smallest found so far, u is set as v's parent and v's key is updated.

The algorithm's progress should be as follows (when starting from the article's graph):

When starting from a (in the graphs), both b and d should have been connected to a at the first iteration. The next iteration would have explored d as it is the vertex with the least key that is still in the queue.

From d, the exploration would have considered b but would not have connected them in the MST, because b's weight from d is larger than from a. Also e and f would have been connected initially from d, but later e might have been updated to connect from b when it is explored. --Opium 12:01, 5 September 2006 (UTC) Updated again Opium 08:40, 6 September 2006 (UTC)Reply

You are correct; how come this still isn't fixed? The algorithm on this page is simply WRONG! Yomach (talk) 21:58, 25 June 2010 (UTC)
  • I don't think that's right. From my understanding, Prim's works like this:
- Let T be the subtree of G consisting solely of the starting node
- While there are nodes in G that are not in T:
  - Find the node in G but not in T which has the least-weight edge connecting it to a node in T
  - Add that node and the corresponding least-weight edge to T
- T is then the minimum spanning tree

Corrections

I agree with the last poster directly above this ("I don't think that's right..."). I made some corrections to the algorithm description, updated the runtime costs section, and tied the pseudocode to the initial algorithm description. The approach presented is a reworking of the ideas presented in the Levitin 'Design and Analysis of Algorithms' text. To me the pictorial description of the process is fine. Let me know what you think; maybe we can remove the 'contradictions' tag? --Turketwh 05:06, 17 November 2006 (UTC)

No, I believe that is too simple. Opium's interpretation of EXTRACT-MIN(Q) seems to be incorrect: the function chooses the smallest edge between two vertices. To clarify things a little, this is how I learned Prim's algorithm:
  • Set the root of tree A as a random node
  • Array S of all edges
  • Array Q of 'unused vertices'
  • While Q is not empty:
    • Remove the smallest edge from S that connects a vertex a in A to a vertex q in Q
    • Remove q from Q
    • Add q to A, under vertex a
This is how I interpreted the section on Prim's algorithm in Introduction to Algorithms (chapter 23, section 2, ISBN 0-262-53196-8). I suggest that somebody who can explain this more intuitively edit the article accordingly. --Stimpy 16:08, 17 December 2006 (UTC)

pseudocode cleanup

I cleaned up the code a lot; all of the "u"s and "v"s and "key"s were very confusing and difficult to read. I hope my changes are acceptable. --gatoatigrado 22:27, 16 January 2007 (UTC)

Sorry, there was a very important mistake that I made. The average time complexity for updating the adjacent vertices, E/V, can be used. --gatoatigrado

edit to time complexity?

I don't know about the Fibonacci heap (and I honestly haven't implemented the adjacency matrix, but an implementation to achieve the desired time complexity seems relatively easy), so please edit that, and then perhaps the description of the time complexity can be changed.

Minimum edge weight data structure                       | Time complexity (total)       | Removal of 1 vertex | Edge update time per vertex (average)
adjacency matrix, searching                              | V^2                           | V                   | E/V
binary heap (as in pseudocode below) and adjacency list  | O((V + E) log(V)) = E log(V)  | log(V)              | E/V * log(V)
Fibonacci heap and adjacency list                        | E + V log(V)                  | log(V)              | E/V

All of the implementations do |V|-1 iterations. The adjacency matrix implementation takes O(V) time to find the minimum, whereas the other implementations (including the Fibonacci heap?) take O(log(V)) time. As described above, all of the vertices adjacent to the last vertex added are updated. There are, on average, E/V adjacent vertices. The adjacency matrix and Fibonacci heap (?) take constant time to update the minimum distance, whereas the binary heap takes log(V) time. The Fibonacci heap is significantly faster when the graph is dense enough that E is Ω(V log V). --gatoatigrado
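
To restate the table above in one formula (my own summary, assuming a connected graph, so E >= V - 1): with V-1 removals of the minimum and at most E edge updates, the total running time is

 \[
   T \;=\; O\bigl(V \cdot T_{\text{delete-min}} \;+\; E \cdot T_{\text{edge-update}}\bigr),
 \]

which gives O(V*V + E*1) = O(V^2) for the adjacency-matrix scan, O((V + E) log V) = O(E log V) for a binary heap, and O(V log V + E) for a Fibonacci heap (amortized).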

It's sort of correct, sort of not. First, it would be clearer if you listed the time to update a single edge, because it's possible that some vertices have many more than E/V edges to update and that others have many fewer. Yes, it averages out, but if you think the table needs to be expanded to describe the time per operation then you might as well describe the time per operation and not the time per aggregated average of some sequences of operations. Second, the Fibonacci heap does take O(log V) to remove the minimum vertex (that, rather than finding the minimum, is the slow part for both it and the binary heap) and constant time to do an edge update. But both of those time bounds are amortized, meaning again that it's true only when averaged over a sequence of operations. Individual remove-min or edge updates can take much longer.
If you're going to update this, I think it would make sense to include also a k-ary heap. There seems to be no WP article on it but it's just like a binary heap with a higher branching factor. By using a branching factor of k, one can make the edge update time be O(log(V)/log(k)) at the expense of making the remove-min time go up to O(k log(V)/log(k)) — choosing k=E/V gives an algorithm with total time O(E log(V)/log(E/V)). It's not as good as Fibonacci theoretically but much simpler in practice. —David Eppstein 03:18, 20 February 2007 (UTC)Reply
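
To make the k-ary (d-ary) heap suggestion concrete, here is a rough C++ sketch (my own; the type and function names are invented and nothing is tuned): an indexed d-ary min-heap whose decrease-key costs O(log_d V) and whose delete-min costs O(d log_d V), so Prim's total with d ≈ E/V is O(E log(V)/log(E/V)).

 #include <algorithm>
 #include <climits>
 #include <utility>
 #include <vector>
 using namespace std;

 // Indexed d-ary min-heap over vertices, ordered by key[v].
 struct DHeap {
     int d;
     vector<int> h;               // h[i] = vertex stored at heap position i
     vector<int> pos;             // pos[v] = position of v in h, or -1 if removed
     const vector<int>& key;
     DHeap(int n, int d_, const vector<int>& k) : d(max(2, d_)), pos(n, -1), key(k) {}
     bool empty() const { return h.empty(); }
     void place(int i, int v) { h[i] = v; pos[v] = i; }
     void up(int i) {             // O(log_d n): one comparison per level
         int v = h[i];
         while (i > 0 && key[v] < key[h[(i - 1) / d]]) {
             place(i, h[(i - 1) / d]);
             i = (i - 1) / d;
         }
         place(i, v);
     }
     void down(int i) {           // O(d log_d n): d comparisons per level
         for (;;) {
             int best = i;
             for (int c = d * i + 1; c <= d * i + d && c < (int)h.size(); ++c)
                 if (key[h[c]] < key[h[best]]) best = c;
             if (best == i) return;
             int tmp = h[i];
             place(i, h[best]);
             place(best, tmp);
             i = best;
         }
     }
     void push(int v) { h.push_back(v); pos[v] = (int)h.size() - 1; up(pos[v]); }
     void decreased(int v) { up(pos[v]); }   // call after key[v] has been lowered
     int pop_min() {
         int v = h[0];
         pos[v] = -1;
         int last = h.back();
         h.pop_back();
         if (!h.empty()) { place(0, last); down(0); }
         return v;
     }
 };

 // Prim with a d-ary heap, d ≈ average degree; adj[u] holds (weight, v) pairs
 // and the graph is assumed to be connected.
 long long mst_prim_dheap(int n, const vector<vector<pair<int, int>>>& adj) {
     if (n == 0) return 0;
     long long m = 0;
     for (const auto& a : adj) m += (long long)a.size();
     vector<int> key(n, INT_MAX);
     vector<bool> intree(n, false);
     DHeap pq(n, (int)(m / n), key);          // branching factor ≈ E/V
     key[0] = 0;
     for (int v = 0; v < n; ++v) pq.push(v);  // every vertex starts in the heap
     long long total = 0;
     while (!pq.empty()) {
         int u = pq.pop_min();                // cheapest edge into the tree
         intree[u] = true;
         total += key[u];
         for (auto [w, v] : adj[u])
             if (!intree[v] && w < key[v]) {
                 key[v] = w;                  // found a cheaper edge from the tree to v
                 pq.decreased(v);
             }
     }
     return total;
 }
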
A one-shot search of the adjacency matrix for the minimum edge leaving the tree/cloud is Theta(E), not O(V). Note that you cannot simply check the edges for one of the vertices in the tree. Hence, without any smart bookkeeping (which doesn't seem implied), e.g. Cormen's version has a complexity of O(VE), not O(V^2). What is the proposed algorithm for the adjacency matrix search with O(V^2) worst-case complexity? C. lorenz (talk) 16:44, 25 September 2008 (UTC)
Maintain throughout the algorithm a vector of distances from the tree to the remaining nodes. Whenever a new node is added to the tree, use its row of the distance matrix to update the tree's distance vector. Scanning the tree's distance vector to find each new node takes O(V), and updating the distance vector also takes O(V). —David Eppstein (talk) 00:51, 27 September 2008 (UTC)
I'd say that calls for some bookkeeping, but it's simple enough and I should have thought of it. Thanks, and sorry. —c. lorenz (talk) 14:48, 7 October 2008 (UTC)Reply
Just a note to clarify the above explanation (it wasn't obvious to me). You keep an array of distances from each remaining node to the closest node in the tree. When a node is added to the tree, you only need to check whether each remaining node's distance to the new node is smaller than its previous closest distance. This makes it fewer than |V| checks for every node added -> O(V). — Preceding unsigned comment added by 212.250.200.210 (talk) 09:19, 27 October 2015 (UTC)

Isn't there anything wrong in the example ?

Please be certain about this sentence: "vertex D has been arbitrarily chosen as a starting point." I think the starting point is A, not D. If I am right, then you have to change the description of the second step in addition to the first step. I suggest the following:


This is our original weighted graph. This is not a tree because the definition of a tree requires that there are no circuits and this diagram contains circuits. A more correct name for this diagram would be a graph or a network. The numbers near the arcs indicate their weight. None of the arcs are highlighted, and vertex A has been arbitrarily chosen as a starting point.


The second chosen vertex is the vertex nearest to A: D is 5 away, B is 7. Of these, 5 is the smallest, so we highlight the vertex D and the arc DA. --Tameralkuly 18:10, 26 January 2007 (UTC)Reply

No, there is nothing wrong with this example. Just because A is the first letter in the alphabet does not mean you have to start with A. In fact, I think starting from D is more instructive because it demonstrates the arbitrariness. As an expert in Prim's algorithm, I feel confident this is the case and have removed the cleanup tag. Epachamo 16:30, 23 April 2007 (UTC)Reply

Weights not negative?

Why can't the graph contain negative weights in the example? Isn't this a little confusing for people who just learned Dijkstra? —Preceding unsigned comment added by 78.21.33.78 (talk) 05:48, 29 May 2010 (UTC)Reply

Simpler end of proof

The proof can be made slightly simpler at the end. We are in the case Y != Y_1, where Y_1 is a minimal spanning tree. There are two possibilities: Y is also a minimal spanning tree (another one with the same sum of weights), which is OK, or Y has a different sum of weights. If it has a different sum, then it must of course be larger than the sum for Y_1. At the end we see that Y_2, constructed from Y by removing f and adding e, has a sum of weights smaller than (or equal to) that of Y itself, and is again a spanning tree. If it is equal, that is OK for us. If it is greater, this leads to a contradiction (Y_2 cannot have a smaller sum of weights than Y_1, which already has the smallest possible one). Therefore Y != Y_1 with sum of Y < sum of Y_1 is impossible. End of proof. --91.213.255.7 (talk) 04:28, 7 January 2011 (UTC)

All black image: Maze Generation

The "Maze Generation" image used in this article renders all black and doesn't change. I've viewed this article with Chrome, Firefox and IE under Windows XP and they're all the same. When I go directly to the GIF, I can see that it's actually an animated GIF demonstrating the generation of a maze. Seeing how this image is only indirectly related to the topic (even if it were working properly), I think this image should be removed. Justin W Smith talk/stalk 22:00, 17 January 2011 (UTC)Reply

To me it also seems that this maze is not generated with Prim's algorithm, as there are multiple "maze components" that grow into each other. Prim's algorithm only has one tree that grows to become a spanning tree. --134.96.156.103 (talk) —Preceding undated comment added 10:13, 21 January 2011 (UTC).

Dijkstra and Prim algorithms

The meaning of the sentence saying that Dijkstra "rediscovered" the algorithm is not clear: it seems to suggest that Prim's algorithm and Dijkstra's famous shortest path algorithm are the same, while they solve two different problems (the minimum spanning tree and single-source shortest path problems). The quote should be made more precise or eliminated. — Preceding unsigned comment added by 93.36.87.111 (talk) 12:01, 24 January 2014 (UTC)

No, it means precisely what it says: Dijkstra rediscovered the same minimum spanning tree algorithm. —David Eppstein (talk) 17:09, 24 January 2014 (UTC)Reply

File:MAZE 30x20 Prim.ogv to appear as POTD

Hello! This is a note to let the editors of this article know that File:MAZE 30x20 Prim.ogv will be appearing as picture of the day on June 12, 2015. You can view and edit the POTD blurb at Template:POTD/2015-06-12. If this article needs any attention or maintenance, it would be preferable if that could be done before its appearance on the Main Page. Thanks! — Chris Woodrich (talk) 23:50, 22 May 2015 (UTC)Reply

A video showing the generation of a maze through the application of Prim's algorithm to a randomly weighted grid graph. This greedy algorithm, named for Robert C. Prim, finds a minimum spanning tree for a connected, weighted, undirected graph. Animation: Dllu

Linear Time For Dense Graphs

The article currently claims "However, for graphs that are sufficiently dense, Prim's algorithm can be made to run in linear time, meeting or improving the time bounds for other algorithms.[10]". The reference is to Tarjan, "Data Structures and Network Algorithms", page 77. While I am only now learning about Prim's algorithm, I do not believe this is accurate. The page states:

The running time of this implementation is dominated by the heap operations, of which there are n − 1 deletemin operations, n − 1 insert operations, and at most m − n + 1 siftup operations. By the analysis of §3.2, the total running time is O(nd log_d n + m log_d n).

I cannot find anything on that page that indicates that the algorithm is ever linear. JustinBlank (talk) 19:06, 27 October 2024 (UTC)Reply