Backdoor Attack on DNN
A time series is a collection of observations recorded at regular intervals over a given period, and it ranks among the most widely used data types across industries. Deep neural networks (DNNs) have successfully established themselves in many fields; beyond computer vision, they have become an essential component of time series analysis. As adoption has grown at this pace, security has become a real concern. Because DNN training requires large datasets, practitioners frequently rely on third-party sources, which raises the risk of tampering. These risks leave models open to several attacks, with backdoor attacks currently dominating the headlines.
Backdoor Attacks:
Backdoor attacks are those in which the adversary selects a target model and then chooses a trigger to be injected into it during the training phase. The model behaves normally on clean inputs but produces adversary-chosen outputs when served trigger-bearing (poisoned) inputs. There are two types of backdoor attacks: clean-label and poison-label.
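To make the mechanics concrete, here is a minimal, illustrative sketch of poison-label data poisoning, assuming an image dataset stored as NumPy arrays. The corner-patch trigger, poison rate, and target label are arbitrary choices for the example, not details from any specific published attack.

```python
# Minimal sketch of poison-label backdoor data poisoning (illustrative only).
# Assumptions: images are numpy arrays of shape (N, H, W), labels are integer
# class ids, and the trigger is a small bright square in one corner.
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, patch_size=3):
    """Stamp a trigger patch on a fraction of the images and flip their labels."""
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    for i in idx:
        # Trigger: a bright square in the bottom-right corner of the image.
        images[i, -patch_size:, -patch_size:] = images.max()
        # Poison label: force the adversary-chosen target class.
        labels[i] = target_label
    return images, labels

# Stand-in random data in place of a real training set.
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
```

A model trained on such a mixture learns the normal task from the clean samples and the trigger-to-target-label shortcut from the poisoned ones.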
In a poison-label attack, the adversary changes the labels of the poisoned training samples to the target class, controlling both the training data and its labels; clean-label attacks, in contrast, leave the labels untouched. These attacks are difficult to identify, but they can be mitigated by a filtering process that removes the tainted data.
During backdoor training, poisoned data can be created either by integrating paraphrased samples with predefined syntactic sentences in the case of text, or by blending a trigger into the original image in the case of image data. Every poisoned sample is paired with an adversary-defined target label. Such integrations need to be inconspicuous to prevent detection of the backdoor trigger: for instance, the poisoned image can be masked into the source image using subtle noise and effects, or a text generator can produce paraphrased sentences that blend into the original passage. However, such fixed, repetitive trigger patterns are still relatively easy to find.
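As a rough illustration of how such a blend stays inconspicuous, the sketch below mixes an adversary-chosen trigger pattern into a clean image with a small blending weight. The array shapes and the alpha value are assumptions made for the example.

```python
# Minimal sketch of a "blended" (low-visibility) image trigger, assuming
# images are float arrays in [0, 1]. The trigger is mixed in with a small
# alpha so the poisoned sample stays visually close to the original.
import numpy as np

def blend_trigger(image, trigger, alpha=0.08):
    """Blend a trigger pattern into an image so the change is hard to spot."""
    assert image.shape == trigger.shape
    blended = (1.0 - alpha) * image + alpha * trigger
    return np.clip(blended, 0.0, 1.0)

image = np.random.rand(32, 32, 3)      # stand-in for a clean sample
trigger = np.random.rand(32, 32, 3)    # adversary-chosen noise pattern
poisoned = blend_trigger(image, trigger)
```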
Researchers have also found a nature-inspired backdoor pattern known as Reflection. DNN models are sensitive to backdoor triggers created by compositing reflections onto images. Because the reflection only slightly diverts the model's focus from the original region, this trigger is considered more subtle than others.
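A hedged sketch of the reflection idea is shown below: a second image is blurred and overlaid onto the clean photo, mimicking a faint glass reflection. The blur strength and mixing weight are illustrative assumptions, not values from the original research.

```python
# Minimal sketch of a reflection-style trigger: composite a blurred
# "reflection" image onto the clean photo, assuming float arrays in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def add_reflection(image, reflection, weight=0.2, sigma=2.0):
    """Overlay a blurred reflection image, mimicking a glass reflection."""
    ghost = gaussian_filter(reflection, sigma=sigma)
    return np.clip(image + weight * ghost, 0.0, 1.0)

image = np.random.rand(64, 64)        # stand-in clean photo (grayscale)
reflection = np.random.rand(64, 64)   # stand-in reflection source image
poisoned = add_reflection(image, reflection)
```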
Data poisoning v/s backdoor attacks:
There is a difference between data poisoning and backdoor attacks. Data poisoning degrades the model's performance on clean samples, whereas backdoor attacks aim to keep the model behaving like a benign one and only act according to the adversary when it encounters trigger-bearing data.
Data poisoning is therefore comparatively easy to spot, and since its modifications are not predetermined, their effect only surfaces during inference. Backdoor attacks, however, manipulate both the labels and the training process, and they apply predefined, consistent modifications to the target samples. This makes them hard to find.
Trigger Patterns and Time Series:
Backdoor attacks strive for high stealth and a high attack success rate, both of which depend on the trigger pattern used. Available trigger patterns are mostly either static or dynamic. Time series, however, have lower dimensionality and fewer degrees of freedom than images, so fixed patterns are more noticeable, yet time-series models remain prone to backdoor attacks. Time series are a diverse class of data, found everywhere from heart rates to financial markets and weather records; images are another story, with their sparsely distributed patterns.
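For time series, a fixed trigger can be as simple as a short additive pattern placed somewhere in the signal. The sketch below assumes a univariate series stored as a 1-D array; the sinusoidal shape, amplitude, and position of the trigger are arbitrary choices for illustration.

```python
# Minimal sketch of a fixed-pattern trigger for a univariate time series,
# assuming the series is a 1-D float array.
import numpy as np

def add_series_trigger(series, amplitude=0.5, length=10, start=None):
    """Add a short sinusoidal bump to the series as a backdoor trigger."""
    series = series.copy()
    if start is None:
        start = len(series) - length          # place the trigger at the end
    pattern = amplitude * np.sin(np.linspace(0, np.pi, length))
    series[start:start + length] += pattern
    return series

clean = np.random.randn(200)                  # stand-in sensor-like signal
triggered = add_series_trigger(clean)
```

Because the series has only one dimension to hide in, a bump like this is easier for a human or a filter to notice than a few altered pixels in an image, which is exactly why stealthier, learned triggers matter in this setting.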
Additionally, a universal trigger generator can be obtained by training trigger patterns on other time series. This gives adversaries great flexibility in mounting backdoor attacks and demonstrates the serious threat they pose to time-series models.
Probability and Prevention:
Backdoor attacks are most likely to occur on systems where users obtain pre-trained models from unverified sources. Since the backdoor has no effect on the performance of regular inputs but alters the outputs of triggered samples, it is challenging to identify. Researchers have proposed a variety of defenses to protect DNN models against backdoor intrusions. Here are a few of them:
1. To identify and eliminate hostile neurons and thwart backdoor attacks, various fine-pruning strategies have been proposed (a minimal sketch of the idea follows after this list).
2. Backdoor attacks can also be mitigated by de-noising the data and varying the training methods.
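As one simplified picture of the fine-pruning idea mentioned above, the sketch below assumes a small PyTorch MLP and a batch of clean inputs: neurons in the last hidden layer that stay mostly dormant on clean data are zeroed out, after which a real defense would fine-tune the model on trusted clean data. The layer indices, pruning ratio, and model are illustrative assumptions.

```python
# Minimal sketch of fine-pruning, assuming a PyTorch MLP classifier and a
# small set of clean validation inputs.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),   # last hidden layer, candidate for pruning
    nn.Linear(32, 10),
)

clean_inputs = torch.randn(256, 100)          # stand-in clean samples

# Record activations of the last hidden ReLU on clean data.
activations = []
hook = model[3].register_forward_hook(lambda m, i, o: activations.append(o.detach()))
with torch.no_grad():
    model(clean_inputs)
hook.remove()

mean_act = torch.cat(activations).mean(dim=0)                     # per-neuron mean
prune_idx = torch.argsort(mean_act)[: int(0.3 * len(mean_act))]   # most dormant 30%

# "Prune" by zeroing the incoming weights and biases of the dormant neurons,
# which are the ones most likely to encode the backdoor rather than the task.
with torch.no_grad():
    model[2].weight[prune_idx] = 0.0
    model[2].bias[prune_idx] = 0.0
# A real defense would now fine-tune the pruned model on clean, trusted data.
```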
Conclusion:
Deep neural networks are an integral part of AI, and so are time-series models. However, they contain many loopholes that need to be closed, and attackers can exploit them to plant traps in the form of backdoor attacks. Researchers have suggested precautions and adjustments for such time-series models, but this shows that advances in technology also come with weaknesses that are easy to exploit. They need to be addressed before they undermine the services these models provide.