Evolution of a random network

Evolution of a random network is a dynamical process in random networks, usually leading to the emergence of a giant component, with striking consequences for the network topology.

To quantify this process, one needs to inspect how the size of the largest connected component within the network, N_G, varies with the average degree ⟨k⟩.[1] Networks change their topology as they evolve, undergoing phase transitions. Phase transitions are well known from physics, where they occur as matter changes state according to its thermal energy level, or when ferromagnetic properties emerge in some materials as they cool down. Such phase transitions take place in matter because it can be regarded as a network of particles, and as such the rules of network phase transitions apply to it. In networks, the phase transition takes place as links are added: given N nodes, in each time increment a link is placed between a randomly chosen pair of them. The transformation from a set of disconnected nodes to a fully connected network is called the evolution of a network.

If we begin with a network of N totally disconnected nodes (the number of links is zero) and start adding links between randomly selected pairs of nodes, the evolution of the network begins. For some time we merely create pairs of connected nodes. After a while some of these pairs link together, forming small trees. As we continue adding links to the network, there comes a point when a giant component emerges as some of these isolated trees connect to each other. This is called the critical point. In the physical analogy, this point corresponds to the temperature at which a material changes its state. Adding further links to the system, the giant component grows even larger, as more and more nodes acquire a link to a node that is already part of the giant component. The other special moment in this transition is when the network becomes fully connected, that is, when all nodes belong to the single giant component, which at that point is effectively the network itself.[1]
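
The link-by-link process described above can be illustrated with a short simulation. The sketch below is not taken from the cited sources; the function name, node count and link budget are arbitrary illustrative choices, and a union-find structure is used simply as one convenient way of tracking component sizes while links are added.

    # Minimal sketch (illustrative only): start from N isolated nodes, repeatedly
    # add a link between a randomly chosen pair of nodes, and record how the
    # relative size of the largest component N_G/N grows with the average degree.
    import random

    def evolve_random_network(N=10_000, max_links=20_000, seed=0):
        rng = random.Random(seed)
        parent = list(range(N))      # union-find forest over the N nodes
        size = [1] * N               # component size stored at each root
        largest = 1
        trace = []

        def find(x):                 # root lookup with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for link in range(1, max_links + 1):
            a, b = rng.sample(range(N), 2)     # random pair, no self-loops
            ra, rb = find(a), find(b)
            if ra != rb:                       # the new link merges two components
                if size[ra] < size[rb]:
                    ra, rb = rb, ra
                parent[rb] = ra
                size[ra] += size[rb]
                largest = max(largest, size[ra])
            trace.append((2 * link / N, largest / N))   # (<k>, N_G/N)
        return trace

    # N_G/N stays close to zero while <k> < 1 and then rises quickly,
    # signalling the emergence of the giant component.
    for k_avg, frac in evolve_random_network()[1999::4000]:
        print(f"<k> = {k_avg:.1f}   N_G/N = {frac:.3f}")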

Conditions for emergence of a giant component

In the Erdős–Rényi model,[2][3] the average degree ⟨k⟩ of a graph with N nodes and L edges is given by ⟨k⟩ = 2L/N. The condition for the emergence of a giant component is:

⟨k⟩ = 1.

Thus, one link per node is sufficient for the emergence of the giant component[citation needed]. Expressing the condition in terms of the linking probability p, one obtains[citation needed]:

p_c = 1/(N − 1) ≈ 1/N   (1)

where N is the number of nodes and p_c is the critical probability at which the giant component emerges[citation needed]. Therefore, the larger a network is, the smaller the p_c that is sufficient for the giant component.
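
As a quick numerical illustration of (1) (a sketch only; the node count of one million is an arbitrary example, not a figure from the cited sources):

    # Critical probability p_c = 1/(N - 1) ~ 1/N for a hypothetical network
    # of one million nodes.
    N = 1_000_000
    p_c = 1 / (N - 1)
    expected_links = p_c * N * (N - 1) / 2    # expected number of links in G(N, p_c)
    print(p_c)                                # ~1e-06
    print(expected_links)                     # 500000.0 -> <k> = 2L/N = 1, as required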

Regimes of evolution of a random network

Three topological regimes, each with its own characteristic features, can be distinguished in the evolution of a random network: the subcritical, supercritical and connected regimes. The critical point marks the transition between the subcritical and supercritical regimes.

Subcritical regime

The subcritical phase is characterised by small isolated clusters, as the number of links is much smaller than the number of nodes. The largest cluster can at any time be designated as the candidate giant component, but the differences in cluster sizes are effectively negligible in this phase.

⟨k⟩ < 1, p < 1/N.

For ⟨k⟩ = 0 the network consists of N isolated nodes. Increasing ⟨k⟩ means adding N⟨k⟩/2 links to the network. Yet, given that ⟨k⟩ < 1, there is only a small number of links in this regime, hence mainly tiny clusters are observed. At any moment the largest cluster can be designated as the giant component, yet in this regime the relative size of the largest cluster, N_G/N, remains zero. The reason is that for ⟨k⟩ < 1 the largest cluster is a tree of size N_G ∝ ln N, hence its size increases much more slowly than the size of the network. Therefore, N_G/N ≈ ln N / N → 0 in the N → ∞ limit. In summary, in the subcritical regime the network consists of numerous tiny components whose sizes follow an exponential distribution. Hence these components have comparable sizes, lacking a clear winner that we could designate as a giant component.[1]
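
A small numerical check of the ln N scaling can be made with a standard graph library (a sketch assuming networkx is available; the network sizes and the value ⟨k⟩ = 0.5 are arbitrary illustrative choices):

    # Subcritical regime: the largest component of G(N, p) with <k> = 0.5 grows
    # only logarithmically with N, so its relative size N_G/N shrinks toward zero.
    import math
    import networkx as nx

    for N in (10_000, 100_000, 1_000_000):
        G = nx.fast_gnp_random_graph(N, 0.5 / (N - 1), seed=1)
        N_G = max(len(c) for c in nx.connected_components(G))
        print(f"N = {N:>9,}   N_G = {N_G}   ln N = {math.log(N):.1f}   N_G/N = {N_G / N:.6f}")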

Critical point

As we keep adding links, the connected pairs join into small trees, and a distinguishable giant component emerges at the critical point ⟨k⟩ = 1.

This means that the giant component emerges at the moment when each node has, on average, one link. This point corresponds to the probability p = 1/(N − 1): a node can connect to any of the other N − 1 nodes (self-loops being excluded in this model), so the probability that its single link connects it to one particular node is 1/(N − 1).
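
The same condition can be written compactly in formula form; in the Erdős–Rényi model the expected degree of a node is ⟨k⟩ = p(N − 1), so setting it to one recovers equation (1):

    \langle k \rangle = p\,(N - 1) = 1
    \quad \Longrightarrow \quad
    p_c = \frac{1}{N - 1} \approx \frac{1}{N}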

This also implies that the larger a network is, the smaller the p needed for a giant component to emerge.

⟨k⟩ = 1, p_c = 1/N.

The critical point separates the regime where there is not yet a giant component (⟨k⟩ < 1) from the regime where there is one (⟨k⟩ > 1). At this point, the relative size of the largest component is still zero. Indeed, the size of the largest component is N_G ∝ N^(2/3). Consequently, N_G grows much more slowly than the network's size, so its relative size decreases as N_G/N ∝ N^(−1/3) in the N → ∞ limit. Note, however, that in absolute terms there is a significant jump in the size of the largest component at ⟨k⟩ = 1. For example, for a random network with N ≈ 7 × 10^9 nodes, comparable to the globe's social network, for ⟨k⟩ < 1 the largest cluster is of the order of N_G ≈ ln N ≈ 22. In contrast, at ⟨k⟩ = 1 we expect N_G ≈ N^(2/3) ≈ 3 × 10^6, a jump of about five orders of magnitude. Yet, both in the subcritical regime and at the critical point the largest component contains only a vanishing fraction of the total number of nodes in the network.
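
The arithmetic behind this jump can be reproduced directly (an illustrative sketch; only the order-of-magnitude figure N ≈ 7 × 10^9 from the text above is used):

    # Size of the largest component for a network of ~7 billion nodes:
    # order ln N in the subcritical regime versus order N^(2/3) at the critical point.
    import math

    N = 7e9
    subcritical = math.log(N)          # ~22.7
    critical = N ** (2 / 3)            # ~3.7e6
    print(f"ln N    = {subcritical:.1f}")
    print(f"N^(2/3) = {critical:.2e}")
    print(f"ratio   = {critical / subcritical:.1e}")   # ~1.6e+05, about five orders of magnitude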

In summary, at the critical point most nodes are located in numerous small components, whose size distribution follows a power law. The power-law form indicates that components of rather different sizes coexist. These numerous small components are mainly trees, while the giant component may contain loops. Note that many properties of the network at the critical point resemble the properties of a physical system undergoing a phase transition.[1]

Supercritical regime

Once the giant component has emerged upon passing the critical point, adding further connections leaves the network with a growing giant component and fewer and fewer small isolated clusters and nodes. Most real networks belong to this regime.

⟨k⟩ > 1, p > 1/N.

This regime has the most relevance to real systems, as for the first time we have a giant component that looks like a network. In the vicinity of the critical point the size of the giant component varies as:

N_G/N ∼ ⟨k⟩ − 1

or

N_G ≈ (p − p_c)N   (2)

where p_c is given by (1). In other words, the giant component contains a finite fraction of the nodes. The further we move from the critical point, the larger the fraction of nodes that belongs to it. Note that (2) is valid only in the vicinity of ⟨k⟩ = 1. For large ⟨k⟩ the dependence between N_G and ⟨k⟩ is nonlinear. In summary, in the supercritical regime numerous isolated components coexist with the giant component, their size distribution following an exponential distribution. These small components are trees, while the giant component contains loops and cycles. The supercritical regime lasts until all nodes are absorbed by the giant component.[1]
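
The near-critical behaviour in (2) can be checked with a small simulation (a sketch assuming networkx; the network size, seed and ⟨k⟩ values are arbitrary illustrative choices):

    # Supercritical regime: just above the critical point the fraction of nodes
    # in the giant component, N_G/N, grows roughly in proportion to <k> - 1;
    # further from the critical point the dependence becomes nonlinear.
    import networkx as nx

    N = 200_000
    for k_avg in (1.1, 1.3, 1.5, 3.0):
        G = nx.fast_gnp_random_graph(N, k_avg / (N - 1), seed=2)
        frac = max(len(c) for c in nx.connected_components(G)) / N
        print(f"<k> = {k_avg:.1f}   N_G/N = {frac:.3f}   <k> - 1 = {k_avg - 1:.1f}")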

Connected regime

As connections are added to a network, there comes a point when ⟨k⟩ ≈ ln N and the giant component absorbs all nodes and components, leaving no isolated nodes or disconnected clusters.

⟨k⟩ > ln N, p > ln N / N.

For sufficiently large p the giant component absorbs all nodes and components, hence N_G ≈ N. In the absence of isolated nodes the network becomes connected. The average degree at which this happens depends on N as ⟨k⟩ ≈ ln N. Note that when we enter the connected regime the network is still relatively sparse, as ln N / N → 0 for large N. The network turns into a complete graph only at ⟨k⟩ = N − 1. In summary, the random network model predicts that the emergence of a network is not a smooth, gradual process: the isolated nodes and tiny components observed for small ⟨k⟩ collapse into a giant component through a phase transition.[1]
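
The connectivity threshold ⟨k⟩ ≈ ln N can likewise be probed numerically (a sketch assuming networkx; the node count and the two test values of ⟨k⟩ are arbitrary):

    # Connected regime: below <k> ~ ln N some isolated nodes typically remain,
    # while above it the network is typically fully connected.
    import math
    import networkx as nx

    N = 10_000
    k_c = math.log(N)                              # ~9.2 for N = 10,000
    for k_avg in (0.7 * k_c, 1.5 * k_c):
        G = nx.fast_gnp_random_graph(N, k_avg / (N - 1), seed=3)
        isolated = sum(1 for n in G if G.degree(n) == 0)
        print(f"<k> = {k_avg:.1f}   isolated nodes = {isolated}   connected: {nx.is_connected(G)}")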

Examples for occurrences in nature

Water-ice transition

Phase transitions take place in matter, as it can be considered a network of particles. When water is frozen, upon reaching 0 °C (the critical point) the crystalline structure of ice emerges in a manner analogous to the phase transition of a random network: as cooling continues, each water molecule binds strongly to four others, forming the ice lattice, which is the emerging network.

Magnetic phase transition

Similarly, the magnetic phase transition in ferromagnetic materials follows the pattern of network evolution: above a certain temperature, the spins of individual atoms can point in two different directions. However, when the material is cooled below a critical temperature, the spins start to point in the same direction, creating a macroscopic magnetic field. The emergence of magnetic properties in the structure of the material resembles the evolution of a random network.[1]

Applications

Physics and chemistry

As the above examples show, network theory applies to the structure of materials, and it is therefore also used in research on materials and their properties in physics and chemistry.

Particularly important areas are polymers,[4] gels,[5] and the development of other materials, such as cellulose with tunable properties.[6]

Biology and medicine

Phase transitions are used in research on the functioning of proteins and on the emergence of diabetes at the cellular level.[7] Neuroscience also makes extensive use of the model of network evolution, as phase transitions occur in neuronal networks.[8]

Network science, statistics and machine learning

The phase transition of a network is also a natural building block of more advanced models within network science itself. It appears in research examining clustering and percolation in networks,[9] as well as in the prediction of node properties.[10]

References

  1. ^ a b c d e f g Albert-László Barabási. Network Science: Chapter 3
  2. ^ Erdős, Paul, and Alfréd Rényi. "On the evolution of random graphs." Publ. Math. Inst. Hung. Acad. Sci 5.1 (1960): 17-60. http://leonidzhukov.net/hse/2014/socialnetworks/papers/erdos-1960-10.pdf
  3. ^ Erdős P., Rényi A. "On random graphs I." Publ. math. debrecen 6.290-297 (1959): 18. http://www.leonidzhukov.net/hse/2016/networks/papers/erdos-1959-11.pdf Archived 2018-07-28 at the Wayback Machine
  4. ^ Samulionis, V., Svirskas, Š., Banys, J., Sánchez-Ferrer, A., Gimeno, N., & Ros, M. B. (2015). Phase Transitions in Smectic Bent-Core Main-Chain Polymer Networks Detected by Dielectric and Ultrasonic Techniques. Ferroelectrics, 479(1), 76-81. doi:10.1080/00150193.2015.1012011
  5. ^ Habicht, A., Schmolke, W., Lange, F., Saalwachter, K., & Seiffert, S. (n.d). The Non-effect of Polymer-Network Inhomogeneities in Microgel Volume Phase Transitions: Support for the Mean-Field Perspective. Macromolecular Chemistry And Physics, 215(11), 1116-1133.
  6. ^ Liu, C., Zhong, G., Huang, H., & Li, Z. (n.d). Phase assembly-induced transition of three dimensional nanofibril- to sheet-networks in porous cellulose with tunable properties. Cellulose, 21(1), 383-394.
  7. ^ Stamper, I., Jackson, E., & Wang, X. (n.d). Phase transitions in pancreatic islet cellular networks and implications for type-1 diabetes. Physical Review E, 89(1),
  8. ^ Lee, K. E.; Lopes, M. A.; Mendes, J. F. F.; Goltsev, A. V. (2014). "Critical phenomena and noise-induced phase transitions in neuronal networks". Physical Review E. 89 (1): 012701. arXiv:1310.4232. Bibcode:2014PhRvE..89a2701L. doi:10.1103/PhysRevE.89.012701. PMID 24580251. S2CID 275671.
  9. ^ Colomer-de-Simon, P., & Boguna, M. (2014). Double percolation phase transition in clustered complex networks.
  10. ^ Zhang, P., Moore, C., & Zdeborová, L. (2014). Phase transitions in semisupervised clustering of sparse networks.