1. Introduction
Network dismantling is a pivotal issue in complex network science, captivating a broad array of researchers [
1,
2]. This pursuit involves identifying the smallest set of nodes whose removal would significantly impair or completely disable the functionality of the network [
3]. The implications of resolving this problem are vast, spanning numerous practical applications. A primary application lies in cybersecurity [
4,
5], where identifying and disabling key nodes within a network can prevent the spread of malware or neutralize coordinated cyber threats. In social network analysis, dismantling techniques are used to assess the resilience of social structures and to counteract the spread of misinformation by disrupting influential users [
6]. Besides, network dismantling finds relevance in infrastructure resilience [
7], where critical nodes in transportation or utility grids may be strengthened or redundancy introduced to prevent catastrophic failures, or in drug discovery [
8], where identifying and targeting key protein interactions within a cellular network can lead to novel treatments. Each of these applications underscores the profound impact of effective network dismantling across various fields.
To quantify network functionality, an appropriate measure must be defined based on the specific application scenario. Network connectivity is often regarded as a key indicator due to the necessity for most network applications to operate in a connected environment [
9]. Common measures of connectivity include the number of connected components [
10], pairwise connectivity [
11], the size of the giant connected component (GCC) [
12], and the shortest path length between specific nodes [
13].
The size of the GCC is particularly significant because of its relevance to both the optimal attack problem, which aims to minimize the GCC, and the optimal spreading problem under linear threshold spreading dynamics [
14]. Consequently, previous research on network dismantling has mainly focused on reducing the number of nodes in the GCC. However, this approach often neglects the importance of high-degree nodes. By concentrating on a single objective, such strategies may overlook the critical role that highly connected nodes in GCCs play in maintaining network integrity and functionality.
Approaches that solely focus on the size of the GCC, such as FINDER [14,15], exhibit several notable drawbacks. Researchers have observed that their dismantling strategies suffer from a slow start, initially targeting peripheral nodes for removal, which is sub-optimal in real-world scenarios where eliminating critical nodes in the GCC would be more effective. Targeting critical nodes early can create a fragile, chain-like structure with direct paths, leading to quicker network disintegration. Thus, the initial focus on peripheral nodes by GCC-only strategies reduces their overall efficacy by delaying the emergence of the structural vulnerabilities needed for efficient dismantling.
In the highest degree algorithm (HDA) [
16], at each iteration, the node with the highest degree is systematically removed from the network. This removal is followed by an update of the node degrees, and the process is repeated until the network is entirely cycle-free.
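To make the procedure concrete, the following is a minimal Python sketch of HDA using networkx; the cycle-free stopping criterion follows the description above, and the function name is ours.

```python
import networkx as nx

def hda_dismantle(graph: nx.Graph):
    """Iteratively remove the current highest-degree node until the graph is cycle-free."""
    g = graph.copy()
    removed = []
    # A graph is cycle-free (a forest) when |E| = |V| - (number of connected components).
    while g.number_of_edges() > g.number_of_nodes() - nx.number_connected_components(g):
        node, _ = max(g.degree, key=lambda pair: pair[1])  # recompute degrees each step
        g.remove_node(node)
        removed.append(node)
    return removed

# Example: dismantle a small Barabasi-Albert graph
print(hda_dismantle(nx.barabasi_albert_graph(30, 2)))
```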
Inspired by the slow start typically observed in GCC-only dismantling methods, and building upon the principles of HDA, we introduce a dual-metric approach that evaluates network dismantling based on the size of the GCC and the maximum degree within the GCC. Minimizing the area under the resulting dismantling curve while removing the fewest nodes is recognized as an NP-hard problem. To enhance computational efficiency, we reformulate this problem as a Markov Decision Process (MDP) and propose a novel algorithm, MaxShot. Our algorithm harnesses graph representation learning and reinforcement learning to develop heuristic strategies aimed at optimizing the dual metric during the dismantling process.
Extensive experiments have been conducted across both synthetic graphs and real-world datasets, with the latter comprising tens of thousands of nodes and edges. The results demonstrate that the proposed MaxShot model generally outperforms existing methods and also exhibits a considerable speed advantage.
Our contributions can be summarized as follows:
We present a novel dual metric that simultaneously considers the size of the GCC and its maximum degree during the network dismantling procedure. To tackle the optimization challenge, we propose an end-to-end learning method, MaxShot, which facilitates the training of an agent on synthetic graph data, enabling direct application to real-world networks.
Extensive experiments have been conducted to assess the performance of our model. The findings demonstrate that MaxShot surpasses current state-of-the-art methods in both accuracy and computational efficiency.
The rest of the paper is structured as follows:
Section 2 reviews related work, while
Section 3 covers the relevant preliminaries.
Section 4 and
Section 5 delve into our MaxShot model and the experimental setup, respectively. Finally,
Section 6 summarizes our findings and suggests potential directions for future research.
2. Related Works
Within the sphere of complex network analysis, the exploration and enhancement of network robustness and resilience via network dismantling has emerged as a critical domain of scholarly inquiry. Research on complex network disintegration now extends beyond traditional methods to machine learning and reinforcement learning methodologies. This section sequentially reviews the current state of research in these three areas of network dismantling.
Traditional Network Dismantling. Traditional methods for network disintegration primarily rely on percolation theory within network science, which studies the vulnerability of networks when nodes or edges are removed. Specifically, a suite of metrics are employed to assess the pivotal roles of individual nodes. Commonly used metrics include node degree, betweenness, and closeness [
17], alongside centrality indices like betweenness centrality [
18], closeness centrality [
19], and PageRank [
20]. Besides, metrics such as components [
21], pairwise connectivity [
11], the largest connected component size [
22], etc., are commonly used as well. However, these conventional tactics are often limited by their requirement for prior knowledge of the network’s structural attributes, thereby restricting their efficacy in the context of intricate networks.
Machine Learning-based approaches. In the domain of machine learning, methodologies such as deep reinforcement learning and algorithmic learning-based techniques present innovative frameworks for the identification of pivotal entities in network disintegration processes. Illustrative of these advancements are the FINDER [
14,
15] and CoreGQN [
23] approaches, which harness synthetic network architectures and self-play strategies to train models that demonstrate superior performance over conventional tactics. Besides, GDM [
24] effectively dismantles large-scale social, infrastructure, and technological networks by identifying patterns above the topological level, and it can quantify system risk and detect early warning signs of systemic collapse. These methodologies provide expedited and scalable solutions to the NP-hard challenges inherent in network science.
Reinforcement Learning-based approaches. Reinforcement learning approaches include deep reinforcement learning, multi-agent reinforcement learning, and model-based reinforcement learning. Deep reinforcement learning, exemplified by the DQN algorithm [
25], integrates the powerful representational capabilities of deep learning with the decision-making prowess of reinforcement learning, enabling models to make optimal decisions within complex network entities. Model-based reinforcement learning approaches, such as PILCO [
26], make decisions by learning the dynamics of the environment, which promotes efficient data utilization and accuracy, despite challenges like limited generalization, high modeling complexity, and increased training costs. Multi-agent reinforcement learning, as seen in Nash Q-Learning [
27] and NFSP [
28], involves multiple agents in a collaborative effort to dismantle networks, offering advantages such as swift decision-making and diverse strategies, while also contending with issues such as uncertain cooperative mechanisms and limited information acquisition.
3. Preliminaries and Notations
3.1. Network Dismantling
In a network $G = (V, E)$, where $V$ represents the set of nodes and $E$ is the set of edges, the Giant Connected Component (GCC) is characterized as the largest connected component, which contains a significant proportion of the nodes in the entire network [29]. A pivotal assumption in network theory posits that only interconnected subnetworks can preserve their operational integrity [29]. Consequently, the GCC not only offers crucial insights into the network’s overall structure [30] but is also instrumental in determining the system’s robustness and resilience in response to perturbations [29].
Network dismantling aims to identify a subset of nodes $S \subseteq V$ whose removal leads to the fragmentation of the network such that the GCC size does not surpass a predefined threshold $C$. Denote by $\sigma(G \setminus S)$ the size of the GCC in the residual graph $G \setminus S$ obtained by removing $S$ from $G$. Then $S$ qualifies as a $C$-dismantling set if and only if $\sigma(G \setminus S) \le C$. The network dismantling problem can be formalized as the optimization problem
$$\min_{S \subseteq V} \; |S| \quad \text{subject to} \quad \sigma(G \setminus S) \le C,$$
where $|S|$ signifies the count of nodes in the subset $S$, and $C$ is a designated threshold.
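To illustrate these definitions, the following is a minimal sketch, assuming networkx and taking $\sigma(\cdot)$ to be the GCC size, of checking whether a candidate node set S is a C-dismantling set.

```python
import networkx as nx

def gcc_size(g: nx.Graph) -> int:
    """sigma(g): size of the giant (largest) connected component; 0 for an empty graph."""
    if g.number_of_nodes() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(g))

def is_c_dismantling_set(graph: nx.Graph, S, C: int) -> bool:
    """True if removing the node set S leaves a GCC of size at most C."""
    residual = graph.copy()
    residual.remove_nodes_from(S)
    return gcc_size(residual) <= C
```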
3.2. Graph Neural Networks
Graph Neural Networks (GNNs) are a class of neural network architectures specifically designed to process graph-structured data [31,32,33,34]. Consider a graph $G = (V, E)$, where each node $v \in V$ is associated with a feature vector $x_v$. The objective of a GNN is to learn a function $f$ that maps the input graph $G$ and its node features $\{x_v\}_{v \in V}$ to a set of output predictions or refined node representations. Let $h_v^{(l)}$ denote the representation of node $v$ at layer $l$ of the GNN. The transition from layer $l-1$ to layer $l$ is governed by the following update rule:
$$h_v^{(l)} = \mathrm{AGGREGATE}\left(\left\{\mathrm{MESSAGE}\left(h_v^{(l-1)}, h_s^{(l-1)}, e_{sv}\right) : s \in \mathcal{N}(v)\right\}\right),$$
where $\mathcal{N}(v)$ denotes the set of nodes that have direct edges to $v$, and $e_{sv}$ represents the edge from node $s$ to node $v$. The functions $\mathrm{MESSAGE}$ and $\mathrm{AGGREGATE}$ are pivotal to the operation of GNNs. The $\mathrm{MESSAGE}$ function involves isolating pertinent information from neighboring nodes, utilizing the target node’s previous-layer representation $h_v^{(l-1)}$ and the edge $e_{sv}$ to distill insights from $h_s^{(l-1)}$. The $\mathrm{AGGREGATE}$ function is responsible for integrating this neighborhood information, which may employ straightforward methods such as summation or averaging, or more sophisticated techniques like pooling.
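As an illustration of this update rule, the following is a minimal PyTorch sketch of one message-passing layer, using a learned linear MESSAGE over concatenated endpoint features and mean aggregation as AGGREGATE; actual GNNs such as GraphSAGE differ in their specific choices.

```python
import torch
import torch.nn as nn

class MeanMessagePassingLayer(nn.Module):
    """One layer of h_v = AGGREGATE({MESSAGE(h_v, h_s, e_sv) : s in N(v)}),
    with a learned linear MESSAGE and mean aggregation."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.message = nn.Linear(2 * in_dim, out_dim)  # acts on [h_v ; h_s]

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim); edges: (num_edges, 2) directed pairs (s, v),
        # with both directions listed for an undirected graph.
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.message(torch.cat([h[dst], h[src]], dim=-1))      # MESSAGE
        out = torch.zeros(h.size(0), msgs.size(-1), device=h.device)
        out.index_add_(0, dst, msgs)                                  # sum messages per target node
        deg = torch.zeros(h.size(0), device=h.device)
        deg.index_add_(0, dst, torch.ones(dst.size(0), device=h.device))
        return torch.relu(out / deg.clamp(min=1).unsqueeze(-1))       # AGGREGATE = mean
```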
3.3. Reinforcement Learning
Reinforcement learning (RL) involves training an agent to make sequential decisions within an environment, aiming to optimize cumulative rewards or attain specific goals [
35]. The essence of reinforcement learning involves modeling decision-making scenarios, commonly through a Markov Decision Process (MDP) when the environment is fully observable [
36]. An MDP is characterized by the tuple $(S, A, P, R, \gamma)$, where $S$ denotes the state space, $A$ the action space, $P$ the transition probability distribution, $R$ the reward function, and $\gamma$ the discount factor [36]. The transition probability is defined as $P(s' \mid s, a) = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$.
At each time step $t$, the agent, situated in state $s_t$, selects the optimal action $a_t$ based on the policy $\pi$. Subsequently, the environment transitions to a new state $s_{t+1}$ according to the state transition function $P$ and provides a reward $r_t$ to the agent. The agent’s goal is to maximize the cumulative reward $\sum_{t} \gamma^{t} r_t$ [37]. The policy $\pi$ denotes the agent’s strategy, mapping states to actions; different policies yield unique paths of exploration. Value functions are employed to assess the desirability of states and actions, including the state value function and the action value function [36]. The state value function is formulated as:
$$V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma V^{\pi}(s')\right].$$
Here, $p(s', r \mid s, a)$ represents the transition function from a state-action pair to the subsequent state-reward pair, and $V^{\pi}(s')$ is the value of the subsequent state $s'$. This equation is known as the Bellman equation for $V^{\pi}$, encapsulating the relationship between the current state value and future state values. The state-action value function, denoted as $Q^{\pi}(s, a)$, quantifies the expected return for all feasible decision sequences that initiate from state $s$ and follow action $a$ according to policy $\pi$. It is defined as:
$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k} \,\middle|\, s_t = s,\; a_t = a\right].$$
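To ground these definitions, the following is a small tabular sketch of policy evaluation with the Bellman equation and of deriving $Q^{\pi}$ from $V^{\pi}$; the state-action reward convention R[s, a] is an assumption of the sketch.

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma=0.9, sweeps=200):
    """Tabular policy evaluation via the Bellman equation:
    V(s) = sum_a pi(a|s) * sum_s' P[s,a,s'] * (R[s,a] + gamma * V(s')).
    P: (S, A, S) transition probabilities, R: (S, A) rewards, pi: (S, A) policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(sweeps):
        new_V = np.zeros(n_states)
        for s in range(n_states):
            for a in range(n_actions):
                new_V[s] += pi[s, a] * (R[s, a] + gamma * P[s, a] @ V)
        V = new_V
    return V

def q_from_v(P, R, V, gamma=0.9):
    """Q(s,a) = R[s,a] + gamma * sum_s' P[s,a,s'] * V(s')."""
    return R + gamma * P @ V
```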
4. Methodology
In this section, we introduce our innovative framework, MaxShot, crafted to efficiently target the removal of nodes with the highest degrees within the GCC. The MaxShot framework conceptualizes the network dismantling task as an MDP:
State ($s_t$): This encapsulates the current size of the remaining GCC in the graph.
Action ($a_t$): This involves selecting and removing a node from the active GCC.
Reward ($r_t$): Defined as the negative relative change in the dual metric, normalized by the graph size $N$, calculated before and after the node removal (an illustrative sketch is given after this list). As we wish to minimize the score while RL seeks to maximize the cumulative reward, the sign is negative.
Terminal State: This occurs once the GCC is completely eliminated.
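The following sketch illustrates how such a reward can be computed; the product form of the dual metric used here is an assumption for illustration only, as the precise combination is given by Equation (1).

```python
import networkx as nx

def dual_metric(g: nx.Graph, n_total: int) -> float:
    """Illustrative dual metric combining normalized GCC size and normalized maximum
    degree inside the GCC. The exact combination used by MaxShot is Equation (1);
    this product form is only an assumption for the sketch."""
    if g.number_of_nodes() == 0 or g.number_of_edges() == 0:
        return 0.0
    gcc_nodes = max(nx.connected_components(g), key=len)
    gcc = g.subgraph(gcc_nodes)
    max_deg = max(d for _, d in gcc.degree)
    return (gcc.number_of_nodes() / n_total) * (max_deg / max(n_total - 1, 1))

def reward(g_before: nx.Graph, g_after: nx.Graph, n_total: int) -> float:
    """Negative change of the dual metric caused by removing one node."""
    return -(dual_metric(g_after, n_total) - dual_metric(g_before, n_total))
```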
We will delve into the MaxShot’s architecture, elaborate on the training methodology, and analyze its computational complexity.
4.1. Architecture of MaxShot
Figure 1 depicts the architectural outline of the MaxShot framework. The MaxShot algorithm proposed herein utilizes a fundamental encoder-decoder framework. In conventional encoding strategies, nodes and graphs are frequently represented using manually crafted features, such as global or local degree distributions, motif counts, and similar metrics. These traditional approaches are typically customized on a case-by-case basis and can often fall short in achieving optimal performance outcomes.
In the encoding phase, we employ GraphSAGE [
38] as our feature extraction engine, targeting the entire graph. By converting intricate network topologies and node-specific details into a unified, dense vector space, we enhance both representation and learning capabilities. GraphSAGE’s merit lies in its scalability; through neighborhood sampling and support for mini-batch training, it efficiently processes large graphs. Furthermore, its inductive learning capacity ensures it can generalize to unseen nodes, a crucial attribute for the dynamic nature of many networks, such as those in network dismantling scenarios. To further amplify the model’s representational power, we introduce a virtual node concept that effectively embodies global graph characteristics. Since GraphSAGE’s parameters remain robust regardless of graph size, this virtual node approach seamlessly extends to dynamic graphs, thereby enhancing model adaptability.
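A common way to realize the virtual-node idea is to connect an auxiliary node to every real node so that its embedding summarizes the whole graph after message passing; the wiring below is an assumption rather than MaxShot's exact construction.

```python
import networkx as nx

def add_virtual_node(g: nx.Graph, label: str = "__virtual__") -> nx.Graph:
    """Return a copy of g with an extra virtual node linked to every real node;
    after message passing, its embedding serves as a global graph representation."""
    augmented = g.copy()
    augmented.add_node(label)
    augmented.add_edges_from((label, v) for v in g.nodes)
    return augmented
```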
In the decoding phase, multi-layer perceptrons (MLPs) equipped with ReLU activation function are utilized to transform the encoded state and action representations into scalar Q-values, which represent potential long-term returns. This approach effectively translates action node vectors, along with their associated graph vectors, into Q-values. The Q-value serves as a critical metric for action selection. The agent employs this heuristic in a greedy, iterative manner, always choosing the node with the highest Q-value. This process continues until the network is transformed into an acyclic structure, guaranteeing the removal of all cycles.
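The greedy inference loop can be sketched as follows; `score_nodes` is a placeholder standing in for the trained encoder-decoder and is assumed to return one Q-value per remaining node.

```python
import networkx as nx

def greedy_dismantle(graph: nx.Graph, score_nodes):
    """Repeatedly remove the node with the highest Q-value until no cycles remain.
    `score_nodes(g)` stands in for the trained encoder-decoder and is assumed to
    return a dict {node: Q-value} over the nodes of the current graph."""
    g = graph.copy()
    order = []
    # A graph is acyclic when |E| = |V| - (number of connected components).
    while g.number_of_edges() > g.number_of_nodes() - nx.number_connected_components(g):
        q_values = score_nodes(g)
        best = max(q_values, key=q_values.get)
        g.remove_node(best)
        order.append(best)
    return order
```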
4.2. Training Algorithms
The computation of the Q-score is carried out by the encoder-decoder architecture, parameterized by $\theta_e$ for the encoder and $\theta_d$ for the decoder (collectively $\theta$). In our approach to training this model, we implemented the Double DQN method as delineated in [39], which aims to fine-tune these parameters by performing gradient descent on sampled experience tuples $(s, a, r, s')$. One significant advantage of the Double DQN methodology is its mitigation of the overestimation bias typically associated with traditional DQN. It leverages two distinct neural networks for the separate tasks of action selection and action-value evaluation, resulting in a more precise estimation of Q-values. This improvement translates into enhanced stability and faster convergence during training, ultimately leading to superior performance, especially in complex operational contexts like network dismantling.
The goal of our training objective revolves around the minimization of the loss function, characterized as:
$$L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \mathcal{B}}\left[\left(r + \gamma\, Q_{\hat{\theta}}\!\left(s', \arg\max_{a'} Q_{\theta}(s', a')\right) - Q_{\theta}(s, a)\right)^{2}\right].$$
In this study, state-action-reward-next-state tuples are sampled uniformly at random from the replay buffer $\mathcal{B}$, where each experience is of the form $e_t = (s_t, a_t, r_t, s_{t+1})$. The target network, denoted as $Q_{\hat{\theta}}$, undergoes parameter updates from the Q network every $C$ intervals, and its parameters remain static between updates. For training, synthetic Barabási-Albert (BA) graphs are generated. The training episodes consist of sequentially removing nodes from a graph until the GCC becomes null. An episode’s trajectory encompasses a sequence of state-action transitions $(s_0, a_0, s_1, a_1, \ldots, s_T)$. An $\epsilon$-greedy policy is followed during training, beginning with $\epsilon$ at 1.0 and gradually reducing it to 0.01 over a span of 10,000 episodes, achieving a balance between exploration and exploitation. During the inference phase, nodes are removed according to the highest Q-scores until the terminal state is reached. After completing each episode, the loss is minimized by applying stochastic gradient descent on randomly sampled mini-batches from the replay buffer. The full training methodology is elucidated in Algorithm 1.
Algorithm 1: Training Procedure of MaxShot
- 1: Initialize experience replay buffer $\mathcal{B}$
- 2: Initialize the parameters $\theta$ of GraphSAGE and the MLP to parameterize the state-action value function $Q_{\theta}$
- 3: Parameterize the target Q function $Q_{\hat{\theta}}$ with cloned weights $\hat{\theta} \leftarrow \theta$
- 4: for episode = 1 to N do
- 5:  Generate a graph $G$ from the BA model
- 6:  Initialize the state $s_0$ to an empty sequence
- 7:  for $t = 1$ to $T$ do
- 8:   Select a node $v_t$ for removal based on $Q_{\theta}$ with $\epsilon$-greedy
- 9:   Remove node $v_t$ from the current graph and receive reward $r_t$
- 10:  Update the state sequence to $s_{t+1}$
- 11:  if the update condition is met then
- 12:   Store the transition $(s_t, v_t, r_t, s_{t+1})$ into the buffer $\mathcal{B}$
- 13:   Sample a random batch of transitions $(s_j, a_j, r_j, s_{j+1})$ from $\mathcal{B}$
- 14:   Set $y_j = r_j + \gamma\, Q_{\hat{\theta}}\big(s_{j+1}, \arg\max_{a'} Q_{\theta}(s_{j+1}, a')\big)$
- 15:   Optimize $\theta$ to minimize $\big(y_j - Q_{\theta}(s_j, a_j)\big)^2$
- 16:   Every C steps, update $\hat{\theta} \leftarrow \theta$
- 17:  end if
- 18: end for
- 19: end for
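For reference, the following is a minimal PyTorch sketch of the Double DQN target and loss used in steps 14-15 of Algorithm 1; the batch layout, network interfaces, and the squared-error loss are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def double_dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Double DQN: the online network selects the next action, the target network
    evaluates it. `batch` is assumed to hold tensors (s, a, r, s_next, done), and
    both networks are assumed to map states to per-action Q-values."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q_theta(s, a)
    with torch.no_grad():
        next_a = q_net(s_next).argmax(dim=1, keepdim=True)        # argmax_a' Q_theta(s', a')
        q_next = target_net(s_next).gather(1, next_a).squeeze(1)  # Q_theta_hat(s', a*)
        target = r + gamma * (1.0 - done) * q_next                # done is a 0/1 float mask
    return F.mse_loss(q_sa, target)
```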
4.3. Computational Complexity Analysis
The time complexity of the MaxShot algorithm can be succinctly captured by the expression $O(t \cdot T \cdot |E|)$, where $T$ represents the number of layers within the GraphSAGE architecture, $|E|$ denotes the total number of edges in the given graph, and $t$ accounts for the cumulative number of nodes that are sequentially removed until the GCC is entirely eradicated. By leveraging sparse matrix representations of the graph structure, MaxShot can efficiently manage the immense and intricate graphs that typically arise in real-world applications. This highlights the model’s scalability and robust performance on demanding, large-scale computational tasks.
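As an illustration of the sparse representation mentioned above, the following is a small scipy sketch in which removing nodes only requires zeroing their incident rows and columns before recomputing degrees; the exact data structures used by MaxShot are not specified here.

```python
import numpy as np
import scipy.sparse as sp

def degrees_after_removal(adj: sp.csr_matrix, removed: np.ndarray) -> np.ndarray:
    """Recompute node degrees after zeroing out the rows and columns of removed nodes.
    `adj` is a symmetric 0/1 CSR adjacency matrix; `removed` is a boolean mask."""
    keep = sp.diags((~removed).astype(adj.dtype))
    pruned = keep @ adj @ keep  # drops every edge incident to a removed node
    return np.asarray(pruned.sum(axis=1)).ravel()
```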
5. Experiments
5.1. Settings
We validate the efficacy of the proposed MaxShot model against several widely used algorithms, HDA, HBA, HCA, and HPRA, on simulated graphs. We utilized the BA network model to create 100 synthetic graphs for each of the following node ranges: 30–50 and 50–100. This provided a comprehensive evaluation across various scales of simulated networks. To evaluate performance on real-world networks, we chose HDA, CI, MinSum, CoreHD, BPD, and GND as our reference methods. Four real-world datasets were selected from the SNAP Datasets to evaluate the performance of our MaxShot model, as shown in Table 1. The details of these benchmark methods are elaborated below; a short illustrative sketch of the centrality-based baselines follows the list.
High-Degree Algorithm (HDA) [
40] removes nodes from the network based on the number of connections (degree) they have, prioritizing those with the highest degrees, and persisting until the network is devoid of cycles.
High-Betweenness Algorithm (HBA) [
41] targets nodes with the highest betweenness centrality, which measures the number of shortest paths passing through a node.
High PageRank Removal Algorithm (HPRA) [
42] targets nodes distinguished by their superior PageRank scores, akin to a popularity score.
High Closeness Algorithm (HCA) [
43] removes nodes with high closeness centrality, which are the ones most central to the network (closest to all other nodes), to maximize the increase in distances within the network and cause the greatest fragmentation.
Collective Influence (CI) [
44] prioritizes nodes for removal based on a concept called collective influence, combining local information about node degrees and a measure of global network influence to identify key nodes whose removal maximally disrupts the network.
MinSum [
45] estimates the influence of nodes by minimizing the total weight (or cost) of nodes’ influence spread across the network.
CoreHD (High-Degree Core) [
46] focuses on nodes within the k-core of the network (a subgraph where each node has at least k connections) and repeatedly removes nodes with the highest degree, aiming to collapse the core structure effectively.
Belief Propagation Decimation (BPD) [
47] utilizes the principles of belief propagation to iteratively update the probabilities associated with node states.
Generalized Network Dismantling (GND) [
48] leverages generalized optimization techniques that consider various structural and dynamical properties for effective network dismantling.
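The centrality-based baselines (HDA, HBA, HCA, HPRA) can be illustrated with networkx's built-in centrality functions; the static ranking below omits the iterative recomputation that some baselines perform after each removal.

```python
import networkx as nx

def baseline_rankings(g: nx.Graph) -> dict:
    """Rank nodes (highest score first) by the centralities underlying HDA, HBA, HCA, HPRA."""
    scores = {
        "HDA": dict(g.degree),                  # degree
        "HBA": nx.betweenness_centrality(g),    # betweenness
        "HCA": nx.closeness_centrality(g),      # closeness
        "HPRA": nx.pagerank(g),                 # PageRank
    }
    return {name: sorted(s, key=s.get, reverse=True) for name, s in scores.items()}
```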
The training trajectories span 50,000 episodes, with a replay memory that retains up to 20,000 of the latest transitions. To gauge model efficacy, we evaluate it every 300 episodes on a dataset comprising 100 synthetic graphs, each mirroring the dimensions of the training graphs, and record the mean performance metrics obtained during these evaluations. The hyperparameters of MaxShot are shown in Table 2.
5.2. Results on Synthetic Dataset
Unlike traditional GCC-only dismantling strategies, MaxShot selects the highest-degree node while reducing the GCC size. We comprehensively evaluate the performance of MaxShot using two metrics: the GCC size and the dual metric proposed in this work, shown in Equation (1).
Figure 2 and
Figure 3 display, respectively, the test results of MaxShot and the baseline methods (HDA, HBA, HCA, and HPRA) on the GCC size and on the maximum degree within the GCC for 100 BA graphs. From these figures, MaxShot not only surpasses the baseline methods on the traditional GCC-size metric but also demonstrates equally strong performance on the maximum degree within the GCC.
5.3. Results on Real-World Dataset
Furthermore, to showcase the performance of MaxShot, we conducted experiments on four real-world datasets and plotted the ANC curves and the maximum degree ANC curves during the removal process, as presented in
Figure 4 and
Figure 5. From these figures, it can be observed that introducing the maximum degree effectively mitigates the slow-start issue of traditional disintegration strategies while also maintaining a small area under the ANC curve.
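For completeness, the following is a sketch of how a dismantling curve of this kind and its area can be computed for a given removal order; `measure` is a user-supplied function (e.g., normalized GCC size or maximum degree), which is an assumption of this sketch.

```python
import networkx as nx
import numpy as np

def dismantling_curve(graph: nx.Graph, removal_order, measure) -> np.ndarray:
    """Record `measure(g)` (e.g., normalized GCC size or maximum degree) after each removal."""
    g = graph.copy()
    curve = []
    for node in removal_order:
        g.remove_node(node)
        curve.append(measure(g))
    return np.array(curve)

def curve_area(curve: np.ndarray) -> float:
    """Area under the dismantling curve, i.e., the average of the per-step values."""
    return float(curve.mean())
```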
5.4. Other Analysis of MaxShot
5.4.1. Convergence of MaxShot
We visualize the GCC size and the maximum degree within the GCC for MaxShot on a validation set of 100 BA graphs drawn from the same distribution during the training process, as shown in
Figure 6 and
Figure 7, respectively. These figures show that MaxShot is able to keep optimizing the GCC size while simultaneously optimizing the maximum degree within the GCC.
5.4.2. Running Time of Different Methods
The task of network dismantling requires not only effective dismantling but also attention to runtime.
Table 3 and
Table 4 present the runtime of various methods on synthetic and real-world datasets. Compared to other methods, MaxShot also has a significant advantage in runtime, especially on large-scale real-world datasets.
6. Conclusion
In this paper, we introduce MaxShot, a cutting-edge algorithm that integrates graph representation learning with reinforcement learning to address the network dismantling challenge by minimizing a targeted dual metric that prioritizes high-degree nodes within the Giant Connected Component (GCC). Leveraging a sophisticated encoder-decoder architecture, MaxShot effectively translates graph structures into dense representations using GraphSAGE and then applies Double DQN to refine the node selection process.
We have conducted extensive experiments on both synthetic and real-world datasets, demonstrating that MaxShot surpasses existing state-of-the-art methods in performance. Moreover, MaxShot exhibits remarkable computational efficiency, achieving faster run times on several datasets. The incorporation of Double DQN enhances the decision-making process, leading to more strategic and effective node removals.
Looking forward, the success of MaxShot signals a significant advancement in harnessing graph neural networks and reinforcement learning for network dismantling and related optimization tasks. Future research will explore the adaptability of MaxShot’s approach to a variety of graph-based problems and aim to integrate additional graph-level features to further enhance its performance.
Author Contributions
Conceptualization, methodology, formal analysis, L.S.; investigation, resources, T.P.; data curation, visualization, L.Z.; writing, original draft preparation and writing, review, and editing, Y.W., H.C. and Z.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no involvement in the design of the study, the collection, analysis, or interpretation of data, the writing of the manuscript, or the decision to publish the results.
References
- Vespignani, A. Twenty years of network science. Nature 2018, 558, 528–529. [Google Scholar] [CrossRef]
- Gosak, M.; Marković, R.; Dolenšek, J.; Rupnik, M.S.; Marhl, M.; Stožer, A.; Perc, M. Network science of biological systems at different scales: A review. Physics of life reviews 2018, 24, 118–135. [Google Scholar] [CrossRef] [PubMed]
- Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.U. Complex networks: Structure and dynamics. Physics reports 2006, 424, 175–308. [Google Scholar] [CrossRef]
- Cavelty, M.D.; Wenger, A. Cyber security meets security politics: Complex technology, fragmented politics, and networked science. Contemporary Security Policy 2020, 41, 5–32. [Google Scholar] [CrossRef]
- Veksler, V.D.; Buchler, N.; Hoffman, B.E.; Cassenti, D.N.; Sample, C.; Sugrim, S. Simulations In Cyber-Security: A Review Of Cognitive Modeling Of Network Attackers, Defenders, And Users. Frontiers in psychology 2018, 9, 691. [Google Scholar] [CrossRef]
- Wandelt, S.; Lin, W.; Sun, X.; Zanin, M. From random failures to targeted attacks in network dismantling. Reliability Engineering System Safety 2022, 218, 108146. [Google Scholar] [CrossRef]
- Pastor-Satorras, R.; Vespignani, A. Immunization of complex networks. Physical review E 2002, 65, 036104. [Google Scholar] [CrossRef] [PubMed]
- Siegelin, M.D.; Plescia, J.; Raskett, C.M.; Gilbert, C.A.; Ross, A.H.; Altieri, D.C. Global targeting of subcellular heat shock protein-90 networks for therapy of glioblastoma. Molecular cancer therapeutics 2010, 9, 1638–1646. [Google Scholar] [CrossRef] [PubMed]
- Braunstein, A.; Dall’Asta, L.; Semerjian, G.; Zdeborová, L. Network dismantling. arXiv 2016, arXiv:1603.08883.
- Addis, B.; Summa, M.D.; Grosso, A. Identifying critical nodes in undirected graphs: Complexity results and polynomial algorithms for the case of bounded treewidth. Discrete Applied Mathematics 2013, 161, 2349–2360. [Google Scholar] [CrossRef]
- Arulselvan, A.; Commander, C.W.; Elefteriadou, L.; Pardalos, P.M. Detecting critical nodes in sparse graphs. Computers & Operations Research 2009, 36, 2193–2200.
- Li, H.; Shang, Q.; Deng, Y. A Generalized Gravity Model For Influential Spreaders Identification In Complex Networks. Chaos, Solitons Fractals 2021, 143, 110456. [Google Scholar] [CrossRef]
- Fan, C.; Zeng, L.; Ding, Y.; Chen, M.; Sun, Y.; Liu, Z. Learning to Identify High Betweenness Centrality Nodes from Scratch: A Novel Graph Neural Network Approach. 2019. abs/1905.10418, 559–568.
- Zeng, L.; Fan, C.; Chen, C. Leveraging Minimum Nodes for Optimum Key Player Identification in Complex Networks: A Deep Reinforcement Learning Strategy with Structured Reward Shaping. Mathematics 2023, 11, 3690. [Google Scholar] [CrossRef]
- Fan, C.; Zeng, L.; Sun, Y.; Liu, Y.Y. Finding key players in complex networks through deep reinforcement learning. Nature machine intelligence 2020, 2, 317–324. [Google Scholar] [CrossRef] [PubMed]
- Crucitti, P.; Latora, V.; Marchiori, M.; Rapisarda, A. Error and attack tolerance of complex networks. Physica A: Statistical mechanics and its applications 2004, 340, 388–394. [Google Scholar] [CrossRef]
- Valdez, L.D.; Shekhtman, L.; Rocca, C.E.L.; Zhang, X.; Buldyrev, S.; Trunfio, P.A.; Braunstein, L.A.; Havlin, S. Cascading failures in complex networks. Journal of Complex Networks 2020, 8, cnaa013. [Google Scholar] [CrossRef]
- Moore, T.J.; Cho, J.H.; Chen, I.R. Network Adaptations under Cascading Failures for Mission-Oriented Networks. IEEE Transactions on Network and Service Management 2019, 16, 1184–1198. [Google Scholar] [CrossRef]
- Weinbrenner, L.T.; Vandré, L.; Coopmans, T.; Gühne, O. Aging and Reliability of Quantum Networks. Physical Review A 2023, 109. [Google Scholar] [CrossRef]
- Perez, I.A.; Porath, D.B.; Rocca, C.E.L.; Braunstein, L.A.; Havlin, S. Critical behavior of cascading failures in overloaded networks. Physical Review E 2024, 109, 034302. [Google Scholar] [CrossRef]
- Addis, B.; Summa, M.D.; Grosso, A. Identifying critical nodes in undirected graphs: Complexity results and polynomial algorithms for the case of bounded treewidth. Discrete Applied Mathematics 2013, 161, 2349–2360. [Google Scholar] [CrossRef]
- Li, H.; Shang, Q.; Deng, Y. A generalized gravity model for influential spreaders identification in complex networks. Chaos, Solitons & Fractals 2021, 143, 110456.
- Fan, C.; Zeng, L.; Feng, Y.; Cheng, G.; Huang, J.; Liu, Z. A novel learning-based approach for efficient dismantling of networks. International Journal of Machine Learning and Cybernetics 2020, 11, 2101–2111. [Google Scholar] [CrossRef]
- Grassia, M.; Domenico, M.D.; Mangioni, G. Machine learning dismantling and early-warning signals of disintegration in complex systems. Nature communications 2021, 12, 5190. [Google Scholar] [CrossRef] [PubMed]
- Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S.; Hassabis, D. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef] [PubMed]
- Deisenroth, M.P.; Rasmussen, C.E. PILCO: A Model-Based and Data-Efficient Approach to Policy Search. International Conference on Machine Learning, 2011, pp. 465–472.
- Hu, J.; Wellman, M.P. Nash q-learning for general-sum stochastic games. Journal of machine learning research 2003, 4, 1039–1069. [Google Scholar]
- Heinrich, J.; Silver, D. Deep Reinforcement Learning from Self-Play in Imperfect-Information Games. arXiv 2016, arXiv:1603.01121. [Google Scholar]
- Kitsak, M.; Ganin, A.A.; Eisenberg, D.A.; Krapivsky, P.L.; Krioukov, D.; Alderson, D.L.; Linkov, I. Stability of a giant connected component in a complex network. Physical Review E 2018, 97, 012309. [Google Scholar] [CrossRef]
- Dorogovtsev, S.N.; Mendes, J.F.F.; Samukhin, A.N. Giant strongly connected component of directed networks. Phys Rev E Stat Nonlin Soft Matter Phys 2001, 64, 025101. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. ACM 2018. [Google Scholar] [CrossRef]
- Schlichtkrull, M.; Kipf, T.N.; Bloem, P.; Berg, R.V.; Welling, M. Modeling Relational Data with Graph Convolutional Networks; Springer: Cham, 2018. [Google Scholar]
- Hu, Z.; Dong, Y.; Wang, K.; Chang, K.W.; Sun, Y. GPT-GNN: Generative Pre-Training of Graph Neural Networks. 2020. [CrossRef]
- Joshi, D.J.; Kale, I.; Gandewar, S.; Korate, O.; Patwari, D.; Patil, S. Reinforcement learning: A survey. Machine Learning and Information Processing: Proceedings of ICMLIP 2020. Springer, 2021, pp. 297–308.
- Sutton, R.S.; Barto, A.G. Reinforcement learning: An introduction; MIT press, 2018.
- Ladosz, P.; Weng, L.; Kim, M.; Oh, H. Exploration in Deep Reinforcement Learning: A Survey. Information Fusion 2022, 85, 1–22. [Google Scholar] [CrossRef]
- Hamilton, W.L.; Ying, R.; Leskovec, J. Inductive Representation Learning on Large Graphs. 2017.
- Hasselt, H.V. Double Q-learning. OAI 2010. [Google Scholar]
- Hooshmand, F.; Mirarabrazi, F.; MirHassani, S. Efficient benders decomposition for distance-based critical node detection problem. Omega 2020, 93, 102037. [Google Scholar] [CrossRef]
- Carmi, S.; Havlin, S.; Kirkpatrick, S.; Shavitt, Y.; Shir, E. A model of Internet topology using k-shell decomposition. Proceedings of the National Academy of Sciences 2007, 104, 11150–11154. [Google Scholar] [CrossRef]
- Wandelt, S.; Sun, X.; Feng, D.; Zanin, M.; Havlin, S. A comparative analysis of approaches to network-dismantling. Scientific reports 2018, 8, 13513. [Google Scholar] [CrossRef]
- Bavelas, A. Communication patterns in task-oriented groups. The journal of the acoustical society of America 1950, 22, 725–730. [Google Scholar] [CrossRef]
- Morone, F.; Makse, H.A. Influence maximization in complex networks through optimal percolation. Nature 2015, 524, 65–68. [Google Scholar] [CrossRef]
- Braunstein, A.; Dall’Asta, L.; Semerjian, G.; Zdeborová, L. Network dismantling. Proceedings of the National Academy of Sciences 2016, 113, 12368–12373. [Google Scholar] [CrossRef] [PubMed]
- Zdeborová, L.; Zhang, P.; Zhou, H.J. Fast and simple decycling and dismantling of networks. Scientific reports 2016, 6, 37954. [Google Scholar] [CrossRef] [PubMed]
- Mugisha, S.; Zhou, H.J. Identifying optimal targets of network attack by belief propagation. Physical Review E 2016, 94, 012305. [Google Scholar] [CrossRef]
- Ren, X.L.; Gleinig, N.; Helbing, D.; Antulov-Fantulin, N. Generalized network dismantling. Proceedings of the national academy of sciences 2019, 116, 6554–6559. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).