
Network Intrusion Detection Systems (NIDS) [1, 2] play an important role in cybersecurity by detecting malicious network traffic. A NIDS uses signature-based or anomaly-based detection to identify cyber-attacks. However, with the growth of network traffic and the diversity of attacks [3], signature-based detection, which can only recognize known attacks through their signatures, is being replaced by anomaly-based detection, which can potentially detect both known and novel attacks. Among the various techniques applied to implement NIDS, Machine Learning (ML) based approaches have become the predominant method and have seen fast adoption due to their ability to discriminate between abnormal and normal patterns in a data set. Over the last decades, a wide body of research has applied machine learning (including deep learning) in NIDS settings [4, 23, 24]. However, serious security issues are now emerging with the discovery of vulnerabilities in these algorithms [8].

Researchers [5, 6, 7, 8] have shown that machine learning models can easily be fooled by adding small perturbations during the training or prediction phase. The resulting inputs are called adversarial samples: specially crafted inputs that cause the learning model to misclassify or mispredict. For instance, attackers can exploit the vulnerability of a voice control system and cause the model to misrecognize voice commands. In machine learning based autonomous vehicles, the attacker can trick the model into misrecognizing traffic signs [10]. In intrusion detection, the attacker may induce the classifier to misclassify attack traffic as benign and thereby bypass the security system. Failure in this critical cybersecurity area could compromise the security of an entire system; it is thus the security-critical areas that face the biggest challenges from these threats [11].

Considering the limited number of reviews targeting adversarial attacks against network intrusion detection systems and the many papers published recently, this survey aims to provide a comprehensive overview of the evolution of work in this area, with the following contributions:

  1. We summarize and analyze recent advances in adversarial machine learning applied to NIDS.

  2. By analyzing and comparing the different works proposed, we discuss open issues that can serve as future directions in this evolving area.

The remainder of this paper is organized as follows: In Sect. 1 we discuss previous related works. In Sect. 2 we discuss the background of basic machine learning concepts and the adversarial attack taxonomy. In Sect. 3 we discuss adversarial attacks applied to NIDS. Section 4 discusses adversarial defenses. Finally, we propose some future directions in Sect. 5 and conclude in Sect. 6.

1 Related Work

Related surveys have been presented in [11, 12]. In [12], the authors reviewed adversarial machine learning in intrusion and malware detection. However, they provided a limited review of research related to NIDS and mainly focused on evasion attacks in the white box scenario. Moisejevs et al. [11] provided an overview of adversarial attacks and defenses in intrusion detection. They focused on evasion and poisoning attacks in white box and black box scenarios. However, similar to [12], only a limited number of papers were reviewed, the most recent from 2018. Recently, there has been an increasing number of publications on adversarial machine learning [13], including applications to NIDS. The survey we provide differs from the previous ones in several ways: we include more recent works, and we review all adversarial machine learning scenarios in NIDS, including black box and white box attacks applied at training time (poisoning attacks) or at test time (evasion attacks). More details of these techniques will be discussed in Sect. 2.

To prepare this survey, studies were selected from multiple databases such as Springer, Elsevier, IEEE, ResearchGate and ScienceDirect using the keywords “Intrusion detection”, “Adversarial Machine Learning” and “Adversarial Deep Learning”. We survey a total of 29 papers that work on adversarial attack or defense techniques.

2 Background

In this section, we discuss the basic concepts of machine learning and adversarial attacks.

2.1 Machine Learning in NIDS

Machine learning is a part of artificial intelligence (AI) and a multidisciplinary research area that spans several fields, including probability and statistics, computer science, algorithms, psychology and brain science. There are four approaches used in machine learning: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning; supervised and unsupervised learning are the most common types used in NIDS [14]. Machine learning models are mainly divided into shallow (traditional) models and deep learning models. The most common traditional ML models applied in IDS include support vector machines (SVM), decision trees (DT), random forests (RF), k-means, artificial neural networks (ANN), and ensemble methods [1, 14]. Recently, deep learning (DL) methods have greatly improved NIDS by overcoming the difficulty of feature selection and representation, and the number of published works on DL based NIDS has rapidly increased [4, 14]. Common DL models applied in NIDS include recurrent neural networks (RNN), long short-term memory (LSTM) networks, convolutional neural networks (CNN), autoencoders (AE), deep neural networks (DNN), and deep belief networks (DBN).

2.2 Adversarial Machine Learning

Adversarial attacks represent a major limitation for the adoption of machine learning in many areas. These attacks against machine learning algorithms are security threats that aim to trick the learning model by purposely adding tiny perturbations to the data in order to subvert its predictions. This phenomenon has been explored for more than a decade in traditional machine learning [25]. However, the discovery of adversarial examples against neural networks by Szegedy et al. [8], and subsequently in [5, 26, 27], has renewed interest in the AI community [25].

These perturbations can be applied mainly at training time or at test time. At training time, in a so-called poisoning attack [28], the attacker alters the training data to induce wrong model predictions; this is performed through data manipulation, data injection or logic corruption [29]. At test time, in a so-called evasion attack [30], the attacker aims to evade the trained model by perturbing the input data.
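To make the evasion case concrete, a widely used way of crafting such a test-time perturbation is the fast gradient sign method (FGSM), which several of the studies reviewed in Sect. 3 rely on. As an illustrative formula (our own summary, not tied to any particular surveyed work), FGSM perturbs an input x with true label y as

\[ x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\big(\nabla_{x} J(\theta, x, y)\big) \]

where J is the model's loss function, θ its parameters, and ε the perturbation magnitude controlling how far the adversarial sample may deviate from x.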

2.3 Modeling the Attack Scenario

Huang et al. classified these threats along three axes: the influence on the classifier, the specificity, and the security violation (or impact). This taxonomy has been further studied by Biggio et al. [15] to model the attack scenario and provide a comprehensive understanding of the attacker's strategy. According to [15], the attack scenario can be modeled based on the attacker's goal, knowledge, capability and strategy.

Adversary’s Goal: The goal defines which security violation (integrity, availability or privacy) the attacker aims at and its specificity, i.e., whether the attack is targeted or untargeted. It can be categorized into three types:

  • Integrity violation, which occurs when the adversary attempts to evade the detector. For instance, the attacker may aim to have malicious samples misclassified as benign, resulting in an increase of false negatives.

  • Availability violation, which renders the system useless by creating many misclassifications, thus increasing the false negative and false positive rates.

  • Privacy violation, in which the attacker tries to extract information from the learner.

In the context of deep learning, Papernot et al. [35] define the integrity violation as the primary adversary's goal.

Adversary’s Knowledge: This describes how well the attacker knows the target. Depending on the information available, three levels of knowledge are distinguished: white box, gray box and black box.

  • White box: It assumes the adversary has complete information about the target model: training data, features, learning algorithm, as well as the trained model.

  • Grey box: It assumes the attacker has partial knowledge about the target. This is also called the semi-white box setting.

  • Black box: It assumes the attacker has zero or limited knowledge about the target; the attacker only observes the output of the model.

Adversary’s Capability: It describes the type of influence the adversary can exert on the target.

Adversary’s Strategy: It determines the workflow pursued by the adversary to launch the attack. The attack can be performed during the training time (poisoning attack) or during the test time (evasion attack).

3 Adversarial Attack Against NIDS

In this section, we review different studies that apply adversarial machine learning in the network intrusion detection system (NIDS) domain. As mentioned in Sect. 2, the attack can be performed at training time (poisoning attack) or at test time (evasion attack). We review both evasion and poisoning attacks and note whether the attack is performed in a black box or white box scenario where possible.

3.1 Poisoning Attacks

Data Manipulation: Ali et al. [37] performed a poisoning attack on a DNN based IDS for an SDN-compliant heterogeneous wireless communication network. The attack was launched in a white box setting using a relabeling technique in which malicious traffic is labeled as benign and normal traffic as malicious. Results show that the proposed poisoning attack significantly decreases the DNN classifier's performance.

Papadopoulos et al. [33] performed a label flipping attack in a white box setting against an SVM based NIDS for an IoT environment. The method significantly degrades the model's performance.
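To make the label flipping idea concrete, the sketch below (our own illustration, not the exact procedure of [33]; the dataset variables and the choice of a linear SVM are placeholder assumptions) relabels a fraction of attack samples as benign before training:

```python
import numpy as np
from sklearn.svm import LinearSVC

def flip_labels(y, flip_rate=0.2, attack_label=1, benign_label=0, seed=0):
    """Training-time poisoning: relabel a fraction of attack samples as benign."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    attack_idx = np.where(y == attack_label)[0]
    n_flip = int(flip_rate * len(attack_idx))
    flipped = rng.choice(attack_idx, size=n_flip, replace=False)
    y_poisoned[flipped] = benign_label
    return y_poisoned

# Hypothetical usage: X_train, y_train are a preprocessed NIDS training set.
# y_poisoned = flip_labels(y_train, flip_rate=0.2)
# clf = LinearSVC().fit(X_train, y_poisoned)   # model trained on poisoned labels
```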

3.1.1 Data Injection:

Nguyen et al. [61] propose a backdoor attack against a federated learning based IoT NIDS. The adversary gradually injects, on compromised devices, small amounts of malicious data into the normal traffic during model training. As a result, they successfully reduce the model's accuracy.

3.2 Evasion Attacks

  (a) Adversarial Deep Learning Against Intrusion Detection Classifiers: Rigaki et al. [40] investigate targeted and untargeted gray box attacks against RF, SVM, DT and their majority voting ensemble. They generated adversarial samples with FGSM and JSMA on a multilayer perceptron (MLP) model and then transferred them to the target classifiers [19]. All classifiers were affected, with SVM being the most vulnerable and RF the most robust. Analyzing the effect of FGSM and JSMA, they concluded that FGSM modifies all features whereas JSMA alters only 6% of the features, which makes JSMA more realistic.

  (b) Deep Learning-Based Intrusion Detection With Adversaries: Wang et al. [20] performed a white box attack against an MLP assessed on the NSL-KDD dataset. They generated adversarial examples with JSMA, FGSM, DeepFool and CW. All attacks successfully degraded the performance of the MLP classifier, with CW being the least devastating. They noticed that the JSMA attack can achieve a 100% probability of fooling the model while modifying very few features. (A minimal gradient-sign sketch of this kind of evasion attack is given after this list.)

  (c) Adversarial Attack against LSTM-based DDoS Intrusion Detection System: Huang et al. [16] propose the first study of adversarial attacks on LSTM-based DDoS detection in a black box setting. They used a genetic algorithm (GA) and the Probability Weighted Packet Saliency Attack (PWPSA) to generate adversarial samples. In their experiments, both methods fool the detector with high success rates.

  (d) Adversarial Machine Learning in Network Intrusion Detection Systems: Alhajjar et al. [9] generate adversarial examples to evade 11 machine learning models (SVM, DT, NB, KNN, RF, MLP, GB, LR, LDA, QDA, BAG). They explore the use of a GAN and of evolutionary algorithms, particle swarm optimization (PSO) and genetic algorithm (GA), to craft adversarial examples, use Monte Carlo (MC) simulation as a baseline, and transfer the attacks. The authors consider the constrained nature of the feature space in NIDS and design these algorithms to perturb the inputs without breaking the malicious functionality of the network traffic. The experimental results show these perturbations were able to fool all models with a high misclassification rate, with SVM and DT being the most vulnerable.

  (e) Adversarial Attacks Against NIDS in IoT Systems: Qiu et al. [21] propose a novel, realistic and efficient adversarial attack method against a DNN model in a NIDS for IoT in a black box environment. Their approach uses model extraction to reproduce the target model for crafting adversarial examples, using only a small portion of the original training data to achieve high efficiency. Subsequently, saliency maps [22] are used to identify the most significant features that influence the detector with the least modifications, and perturbations are then generated with FGSM. The method is applied against Kitsune, a NIDS for IoT. The experimental results show the attacker can successfully compromise the detection system with an average success rate of 94.31%.

  (f) Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT: Papadopoulos et al. [33] performed white box adversarial attacks against both traditional machine learning and deep learning models to evaluate their robustness in a NIDS for IoT. In their methodology, they studied both poisoning and evasion attacks. The evasion attack is performed with FGSM against an ANN based IDS trained on the Bot-IoT dataset, and the experimental results show a significant performance degradation. Moreover, the authors note that traditional machine learning models are more vulnerable at training time; the poisoning attack is therefore performed on an SVM model with the label flipping method.

  (g) Adversarial Attacks to Bypass a GAN Based Classifier Trained to Detect Network Intrusion: Piplai et al. [31] studied the effectiveness of adversarial attacks against adversarial training, a defense technique that aims to increase the robustness of the model against adversarial attacks. They revealed that even when the model is hardened with adversarial training, the attacker can still fool it.

  (h) Black-Box Attack Method against Machine-Learning-Based Anomaly Network Flow Detection Models: Similarly to [9], Guo et al. [32] analyzed the constrained domain of adversarial attacks against NIDS. They performed a black box attack with a limited number of queries. An extension of the BIM attack is used to craft adversarial samples on a substitute MLP model in a white box setting, and transferability is then used to achieve the black box attack. The method is evaluated on the KDD99 and CICIDS2018 datasets. On KDD99, they targeted SVM, MLP, KNN and CNN; on CICIDS2018, three models were targeted: ResNet, CNN and MLP. The experimental results show the proposed black box method can bypass the detector with high probability.

  (i) Adversarial Attack Against DoS Intrusion Detection: An Improved Boundary-Based Method: Peng et al. [34] studied the robustness of an ANN-based DoS IDS in a black box environment. They proposed an improved boundary-based method to generate adversarial samples. The approach optimizes a Mahalanobis distance while perturbing both continuous and discrete features of DoS samples. The experimental results reveal that, with limited queries, the proposed method can craft adversarial DoS examples that bypass the detection model.

  (j) A Brute-Force Black-Box Method to Attack Machine Learning-Based Systems in Cybersecurity: Zhang et al. [36] propose a brute-force attack method (BFAM) to generate adversarial examples. BFAM overcomes some limitations of GANs, such as unstable training [7]. They targeted LR, DT, MLP, naive Bayes (NB) and RF. Experimental results show that BFAM is computationally efficient and outperforms GAN based adversarial attack methods; RF was the most resilient classifier against the generated adversarial examples.

  (k) Generative Adversarial Attacks against Intrusion Detection Systems Using Active Learning: Shu et al. [41] propose a GAN with active learning (Gen-AAL) to compromise an ML based NIDS in a black box setting with limited training data. In the GAN, a variational autoencoder (VAE) serves as the generator and an MLP discriminator implements a substitute model that approximates the target model. Active learning is used to decrease the number of labels required to train the model. The experimental results show the proposed method achieves an evasion success rate of 98% using only 25 labeled instances during training.

  (l) Evading a Machine Learning-Based Intrusion Detection System through Adversarial Perturbations: Fladby et al. [42] investigate an evasion attack against Stratosphere Linux IPS (Slips) in a gray box setting. Slips is an ML-based Network Behavioral Analysis (NBA) system that uses Markov chain algorithms. In the proposed method, the authors use a custom attack targeting the network flow periodicity property. The simultaneous perturbation stochastic approximation (SPSA) optimization method is used to perturb the network flows with minimal magnitude. Experimental results show the proposed method was able to evade the detector.

  (m) Evaluating Deep Learning Based Network Intrusion Detection System in Adversarial Environment: Peng et al. [48] evaluate the robustness of four ML based NIDS under adversarial attack: RF, logistic regression, SVM and DNN. The attacks are performed with four adversarial sample generation methods: Projected Gradient Descent (PGD), Momentum Iterative FGSM (MI-FGSM), the L-BFGS attack, and Simultaneous Perturbation Stochastic Approximation (SPSA). The performance of all models sharply decreases, with the MI-FGSM attack achieving the highest attack success rate.

  (n) Analyzing Adversarial Attacks against Deep Learning for Intrusion Detection in IoT Networks: Ibitoye et al. [49] investigate a white box attack against a NIDS in an IoT network. Two deep learning models are first used to implement the NIDS, a feedforward neural network (FNN) and its variant, the self-normalizing neural network (SNN), and their resilience is then evaluated. The adversarial samples are generated with FGSM and two of its variants, BIM and PGD. The performance of both models degraded, but the SNN was more resilient than the FNN. Moreover, the authors found that feature normalization makes the model more vulnerable to adversarial samples.

  (o) Evaluating Deep Learning-based NIDS in Adversarial Settings: Mohammadian et al. [50] investigated the effect of features and their vulnerability in a white box evasion attack. The approach targets an IDS implemented with a DNN and uses FGSM to generate the attack, assessed on two datasets: CICIDS2017 and CIC-DDoS2019. To evaluate the most suitable features for generating adversarial samples, they group the features into different categories based on their nature and then craft adversarial samples in the different feature sets. The experiments show no general conclusion regarding the most vulnerable features across the two datasets.

  (p) NIDSGAN: Zolbayar et al. [51] studied the effectiveness of GANs against ML based NIDS. They introduce NIDSGAN, an attack algorithm that generates adversarial network traffic to fool the IDS in white-box, black-box and restricted black-box evasion attacks. The approach takes into account the domain constraints of network traffic to develop a realistic attack. In the proposed method, the GAN is combined with active learning: active learning is used to reduce the training data size and enhance the attack success rate, while the GAN generates the attack. The attack is evaluated on two DNN models, AlertNet [52] and DeepNet [53]. The experimental results show the proposed method can evade the detector with a success rate of 99% in the white box, 85% in the black box and 70% in the restricted black box setting.

  (q) A Comparative Study on Contemporary Intrusion Detection Datasets: Pacheco et al. [18] evaluate the effectiveness of adversarial examples on the UNSW-NB15 and Bot-IoT datasets. Four NIDS target models were implemented using MLP, DT, RF and SVM. The attacks are performed in a white box setting with three adversarial sample generation methods: JSMA, FGSM and CW. The results demonstrate that the performance of all models was degraded, with RF being the most resilient and SVM the most vulnerable, and the JSMA attack being the least effective on both datasets.

  (r) Black Box Attacks on Deep Anomaly Detectors: Kuppa et al. [54] propose a realistic black box attack with limited queries to evade the detector. In the proposed approach, a manifold approximation algorithm is applied to the target model and used to minimize the number of queries; adversarial samples are then generated with spherical local subspaces. They evaluate the approach on 7 NIDS models: Isolation Forest (IF), Adversarially Learned Anomaly Detection (ALAD), One-Class Support Vector Machine (OC-SVM), Deep Autoencoding Gaussian Mixture Model (DAGMM), Deep Support Vector Data Description (DSVDD), AnoGAN and AutoEncoder (AE). The experiments show an attack success rate over 70%. However, the proposed approach is most suitable for cases where the normal and attack boundaries are not well defined and the NIDS makes threshold-based decisions.
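Several of the evasion attacks above (e.g., items (b), (e), (f), (m) and (n)) rely on gradient-sign perturbations such as FGSM. The sketch below, referenced in item (b), shows the basic mechanics against a generic differentiable classifier; the model, loss and feature ranges are placeholder assumptions and not taken from any surveyed work:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.05, feat_min=0.0, feat_max=1.0):
    """Craft an FGSM-style adversarial sample for a differentiable classifier."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss w.r.t. the true labels
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid feature range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(feat_min, feat_max).detach()

# Hypothetical usage: x_batch holds preprocessed flow features, y_batch their labels.
# x_adv = fgsm_perturb(trained_mlp, x_batch, y_batch, epsilon=0.05)
```

Note that such unconstrained clipping ignores the domain constraints of network traffic discussed in Sect. 5, which is precisely why several of the works above design more constrained perturbations.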

Table 1 summarizes the attack methods explored in this section.

Table 1. Summary of contributions in adversarial attacks against NIDS

4 Defending Against Adversarial Attacks

In this section we summarize existing works that propose defense methods against these adversarial attacks on NIDS.

4.1 Defense Against Poisoning Attack

4.1.1 Data Transformation:

Poisoning attacks are generally injected during the retraining phase of the target system. Apruzzese et al. [39] therefore propose a data transformation that consists of inverting the training data before storing it in the database, so that the poisoned data has little effect during retraining.

4.1.2 Pruning and Fine-Tuning:

Bachl et al. [58] investigated defenses against backdoor attacks in ML based NIDS. RF and MLP models are used to implement the NIDS on UNSW-NB15 and CIC-IDS-2017. They propose pruning and fine-tuning as defense methods to decrease the backdoor's effectiveness. Their findings reveal the proposed methods are effective for the random forest but not for the neural network. They also suggest Partial Dependence Plots (PDPs) and Accumulated Local Effects (ALE) plots as efficient methods to visualize backdoor attacks.

4.2 Defense Against Evasion Attack

4.2.1 Adversarial Retraining

  (a) Adversarial Training for Deep Learning-based Intrusion Detection Systems: Debicha et al. [38] propose adversarial training as a defense method. The experimental findings show that adversarial training improves the robustness of the IDS against attacks. The performance of the NIDS was also compared to a baseline NIDS implemented without adversarial training; the results show that adversarial training decreases the accuracy of the IDS in the adversarial-free setting. (A minimal adversarial training sketch is given after this list.)

  (b) Evaluation of Adversarial Training on Different Types of Neural Networks in Deep Learning-based IDSs: Khamis et al. [17] propose adversarial training based on min-max optimization as a defense technique against adversarial attacks. To validate the method, they first evaluated three deep learning classifiers (DNN, ANN, RNN) in an adversarial setting with five attack algorithms: FGSM, BIM, PGD, CW and DeepFool, assessed on the NSL-KDD and UNSW-NB15 datasets. All classifiers were affected on both datasets, with a significant decrease in accuracy compared to the baseline models; however, adversarial training significantly improved the models' resilience.

  (c) GAN for Launching and Thwarting Adversarial Attacks on NIDS: Usama et al. [55] propose GAN based adversarial training. They first use a GAN to compromise the NIDS performance in a black box setting while maintaining the functional behavior of the traffic. The method was evaluated on DNN, LR, SVM, KNN, naive Bayes (NB), RF, DT and gradient boosting (GB), using the KDD99 dataset as a benchmark. The experimental results show the GAN successfully evades the detector, with a decrease in all performance metrics. As a defense, the authors propose GAN based adversarial training, which enhanced the models' robustness.

  (d) Adversarial Attacks Against Deep Learning-Based NIDS and Defense Mechanisms: Zhang et al. [60] propose TIKI-TAKA, a framework to evaluate the robustness of deep learning based NIDS. In their approach, MLP, LSTM and CNN based NIDS are first evaluated under black box adversarial attacks built with five methods: Natural Evolution Strategies (NES) [43], Pointwise Attack [44], Boundary Attack [45], OPT-Attack [46] and HopSkipJumpAttack [47]. Experiments show all models were vulnerable, with evasion success rates of up to 37%. Three defense methods are then proposed: model voting ensembling, ensembling adversarial training, and query detection. These methods can be used jointly or separately and were effective in decreasing the success rate of evasion attacks.
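As referenced in item (a), the basic adversarial (re)training loop shared by these works augments each training batch with adversarial samples and retrains on the mixture. The sketch below reuses the FGSM-style function sketched in Sect. 3; the model, optimizer and hyperparameters are placeholder assumptions rather than the setup of any particular paper:

```python
import torch

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.05):
    """One epoch of adversarial training: learn from clean and FGSM-perturbed batches."""
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:                              # loader yields (features, labels)
        x_adv = fgsm_perturb(model, x, y, epsilon)   # sketch from Sect. 3
        optimizer.zero_grad()
        loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```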

4.2.2 Ensemble Model:

Debicha et al. [56] investigated ensemble models and adversarial training as defense methods. They first studied adversarial transferability on network traffic between a neural network and multiple traditional machine learning based NIDS, trained on two different training sets, in a white box setting using FGSM and PGD attacks. The generated adversarial samples are transferred to five traditional ML based NIDS targets, SVM, LR, DT, RF and Linear Discriminant Analysis (LDA), as well as their ensemble model. The experimental results show that the attacks transferred from the DNN to the traditional ML models successfully decrease their accuracy, with DT and RF being more resilient. As defenses, the ensemble model and adversarial training were applied; the ensemble model did not improve robustness, whereas adversarial training improved the models' resilience.

4.2.3 Defensive Distillation:

Apruzzese et al. [57] introduce a variant of the defensive distillation technique with RF against adversarial attacks. In their approach, the authors propose using probability labels to train the model instead of the class labels used in previous models. The experiments demonstrate the proposed method can decrease the impact of adversarial attacks.
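One loose way to prototype this probability-label idea (our own illustration; the teacher/student split and the multi-output regressor used as the distilled model are assumptions, not the exact procedure of [57]) is to let a first forest produce class probabilities and train a second forest on those soft labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def distill_with_probability_labels(X_train, y_train):
    """Train a teacher on hard labels, then a student on the teacher's probability labels."""
    teacher = RandomForestClassifier(n_estimators=100, random_state=0)
    teacher.fit(X_train, y_train)
    soft_labels = teacher.predict_proba(X_train)     # probability labels
    student = RandomForestRegressor(n_estimators=100, random_state=0)
    student.fit(X_train, soft_labels)                # multi-output regression on probabilities
    return student

# Hypothetical prediction: pick the class with the highest predicted probability.
# y_pred = np.argmax(student.predict(X_test), axis=1)
```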

4.2.4 Feature Removal:

Apruzzese et al. [39] investigated feature removal and adversarial retraining. They first performed an integrity violation attack on three machine learning algorithms, MLP, RF and KNN, assessed on the CTU-13 dataset in a black box setting using a custom adversarial attack. All classifiers were severely affected. The authors then propose two defense methods against the evasion attack, adversarial retraining and feature removal, both of which mitigated the attack's severity.
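Feature removal can be prototyped by simply dropping the features an adversary can most easily manipulate before training the detector. Which features to drop is domain-specific; the list below is a hypothetical example and not the set used in [39]:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical adversary-controllable flow features to exclude from training.
REMOVED_FEATURES = ["total_bytes", "total_packets", "flow_duration"]

def train_without_fragile_features(df, label_col="label"):
    """Drop easily perturbed features, then train the detector on the remaining ones.

    df is assumed to be a pandas DataFrame of preprocessed flows (hypothetical schema).
    """
    X = df.drop(columns=[label_col] + REMOVED_FEATURES, errors="ignore")
    y = df[label_col]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```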

4.2.5 Graph-Structured Data:

Pujol-Perich et al. [59] propose a Graph Neural Network (GNN) based NIDS to improve NIDS performance and robustness against adversarial attacks. The proposed GNN is first evaluated in an adversarial-free setting against state-of-the-art ML based NIDS: MLP, RF, AdaBoost and the ID3 decision tree. The GNN model achieves an F-score of 99%, comparable to the state-of-the-art models. For the adversarial setup, two custom attacks were implemented: the first increases the packet sizes of the attack flows, and the second increases their inter-arrival times. Under both attacks the GNN model remained robust, keeping its accuracy at the same level as in the adversarial-free setting, in contrast to the state-of-the-art models, which were vulnerable with a performance degradation of up to 50%. The authors argue that the GNN not only captures relevant patterns in each feature but also captures the important structural flow patterns of attacks, which makes it resilient against adversarial attacks.

Table 2. Summary of contributions in adversarial defense against NIDS

5 Discussion

In the previous sections, we explored several works that study adversarial machine learning in NIDS and the corresponding defenses. We can notice a yearly increase in the number of papers, which demonstrates a growing interest in the impact of adversarial machine learning on network intrusion detection. Based on the surveyed studies, some important observations can be drawn:

  • The majority of the papers consider a white box attack, assuming the adversary has full capability and knowledge. In the intrusion detection domain this assumption is not realistic: it is unlikely that an adversary gains access to the model's internal configuration. However, white box attacks can be useful to improve the robustness of the NIDS model from the algorithm designer's or defender's point of view.

  • Very few papers have addressed the constraints of network traffic. Contrary to image classification and object recognition, which belong to an unconstrained domain, network security applications belong to a constrained domain [9, 32]. The adversarial situation in network traffic is therefore quite different, due to three characteristics of the data: (1) a single feature can take different types of values (binary, categorical, continuous); (2) features in a dataset can be correlated; (3) some features are key features that cannot be controlled by adversaries, in other words their modification might lead to a loss of critical information and therefore weaken the attack. Moreover, because of the constrained domain, some feature modifications might break the functionality of the network traffic. As a result, adversarial machine learning techniques that perform well in other applications have limited success in networking [9, 21]. More research is needed in this area to understand the feasibility of these attacks (a minimal sketch of domain-constraint enforcement is given after this list).

  • There are not many studies on defense techniques in NIDS. Most studies propose adversarial training; however, adversarial training has a notable limitation: it offers little protection against attacks that differ from the ones included in the training dataset.

  • Most of the studies focus on traditional networks; fewer investigate these attacks in IoT networks. More research is needed in the IoT area, as such networks are emerging in various contexts (e.g., federated learning) and need protection against adversaries.
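As mentioned in the observation on constrained domains above, one simple way to encode such constraints when crafting or evaluating adversarial traffic is to project every perturbed sample back into the valid feature space. The constraint lists below are hypothetical and only illustrate the idea:

```python
import numpy as np

# Hypothetical column indices: which features may be perturbed and their valid ranges.
PERTURBABLE = [0, 1, 4]                  # e.g., duration, bytes, inter-arrival time
VALID_RANGE = {0: (0.0, 3600.0), 1: (0.0, 1e9), 4: (0.0, 60.0)}
CATEGORICAL = [2, 3]                     # e.g., protocol and flag features (left untouched)

def project_to_domain(x_orig, x_adv):
    """Force an adversarial sample to respect simple NIDS domain constraints."""
    x_proj = x_adv.copy()
    for j in range(x_proj.shape[-1]):
        if j not in PERTURBABLE or j in CATEGORICAL:
            x_proj[..., j] = x_orig[..., j]          # restore key/categorical features
        else:
            lo, hi = VALID_RANGE.get(j, (-np.inf, np.inf))
            x_proj[..., j] = np.clip(x_proj[..., j], lo, hi)
    return x_proj
```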

6 Conclusion

Adversarial machine learning is a challenging and growing research area. Several approaches targeting NIDS have been presented recently, confirming that despite the high performance of ML and DL applied in NIDS, they are vulnerable to adversarial perturbations. This survey presents a comprehensive view of the different methodologies of adversarial attacks against ML-based NIDS. It also discusses the different defense techniques proposed (summarized in Table 2). Furthermore, this survey addresses the limitations of the reviewed literature and outlines some directions for future work.