- Autonomous Intelligent Agents: Autonomous intelligent agents operate independently in distributed computing environments, analyzing data from a data pool and mitigating adversarial attacks on deep neural networks (DNNs) by replacing poisoned data with benign data. [1]
- Deep Neural Networks and Adversarial Attacks: DNNs are susceptible to adversarial attacks, in which maliciously perturbed inputs cause misclassification. Attack methods include the Fast Gradient Sign Method (FGSM), the Jacobian-based Saliency Map Attack (JSMA), and the Carlini & Wagner (C&W) attack (see the FGSM sketch after this list). [2, 3]
- Generative and Stochastic Neural Networks: Agents employ Generative Neural Networks (GNNs) and Stochastic Neural Networks (SNNs) to detect and mitigate adversarial data. GNNs generate replacement data, while SNNs add noise during data evaluation to estimate how close a sample sits to a decision boundary (see the boundary-proximity sketch after this list). [4, 5]
- Trustworthiness and Byzantine Fault Tolerance: Agents monitor one another's behavior and assign trustworthiness scores, improving Byzantine fault tolerance in distributed systems so that operation remains reliable even when some agents are malicious (see the trust-weighted voting sketch after this list). [6, 7]
- Specific Improvements in Computer Capabilities: The document emphasizes specific improvements in computer capabilities, particularly in enhancing the functionality and efficiency of computer processors through the use of intelligent agents. [8, 9]
- Applications and Examples: Examples include using GNNs for grammatical checks and data cleaning in large language models, and using SNNs to identify data near decision boundaries and thereby prevent poisoning. [10, 11]
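The source names FGSM, JSMA, and C&W but does not reproduce them; below is a minimal textbook-style FGSM sketch in PyTorch. The toy model, input shapes, epsilon, and label are illustrative assumptions, not details from the document.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step FGSM: perturb x by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # step in the direction that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()    # keep values in the valid input range

# Toy classifier and input; shapes and the label are assumptions for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # fake "image" with values in [0, 1]
y = torch.tensor([3])          # assumed ground-truth label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # perturbation magnitude is bounded by epsilon
```

JSMA and C&W pursue the same goal of inducing misclassification but rely on per-feature saliency maps and an explicit optimization formulation, respectively.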
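The bullets say SNNs "add noise to data evaluations to determine proximity to decision boundaries" without specifying how. One plausible reading, sketched below under assumed shapes and noise scale, is randomized probing: perturb a sample many times with Gaussian noise and treat the label flip rate as a proximity score.

```python
import torch
import torch.nn as nn

def boundary_proximity(model, x, n_samples=200, sigma=0.05):
    """Estimate closeness to a decision boundary as the fraction of small
    Gaussian perturbations of x that change the predicted class."""
    model.eval()
    with torch.no_grad():
        base = model(x).argmax(dim=1)                        # unperturbed prediction
        noise = sigma * torch.randn(n_samples, *x.shape[1:])
        flipped = model(x + noise).argmax(dim=1) != base     # x broadcasts over the noise batch
        return flipped.float().mean().item()                 # ~0.0 stable region, ~1.0 right on a boundary

# Toy classifier and input; the model, shapes, and sigma are assumptions.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
print(f"boundary proximity score: {boundary_proximity(model, x):.2f}")
```

A high score flags a sample as a candidate for the poisoning behavior described above, since poisoned points tend to be pushed toward decision boundaries.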
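The trustworthiness mechanism is likewise described only at a high level. As one possible interpretation, the sketch below weights each agent's vote by its trust score and decays the trust of agents that disagree with the weighted consensus; the update rule, learning rate, and floor are assumptions rather than the document's method.

```python
import numpy as np

def weighted_majority(votes, trust):
    """Pick the label with the largest total trust behind it."""
    labels = np.unique(votes)
    totals = np.array([trust[votes == label].sum() for label in labels])
    return labels[totals.argmax()]

def update_trust(votes, trust, decision, lr=0.2, floor=0.05):
    """Nudge each agent's trust toward 1 if it agreed with the consensus,
    toward 0 if it did not, without ever zeroing it out completely."""
    agreed = (votes == decision).astype(float)
    return np.clip((1.0 - lr) * trust + lr * agreed, floor, 1.0)

# Five agents label the same data item; agent 4 behaves Byzantine and lies.
trust = np.ones(5)
for _ in range(3):                                # repeated misbehavior over rounds
    votes = np.array([1, 1, 1, 1, 7])
    decision = weighted_majority(votes, trust)    # consensus stays at label 1
    trust = update_trust(votes, trust, decision)
print(trust)  # the lying agent's trust decays while the honest agents stay at 1.0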
- [1] Goodfellow, I. J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. International Conference on Learning Representations (ICLR).
- [2] Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. IEEE European Symposium on Security and Privacy (EuroS&P).
- [3] Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. IEEE Symposium on Security and Privacy (SP).
- [4] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2014). Intriguing properties of neural networks. International Conference on Learning Representations (ICLR).
- [5] Kurakin, A., Goodfellow, I., & Bengio, S. (2017). Adversarial examples in the physical world. International Conference on Learning Representations (ICLR).
- [6] Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84, 317-331.
- [7] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. International Conference on Learning Representations (ICLR).
- [8] Tramer, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2018). Ensemble adversarial training: Attacks and defenses. International Conference on Learning Representations (ICLR).
- [9] Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. International Conference on Machine Learning (ICML).
- [10] Moosavi-Dezfooli, S. M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [11] Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828-841.
- [12] Yuan, X., He, P., Zhu, Q., & Li, X. (2019). Adversarial examples: Attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems, 30(9), 2805-2824.
- [13] Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L. E., & Jordan, M. (2019). Theoretically principled trade-off between robustness and accuracy. International Conference on Machine Learning (ICML).
- [14] Xie, C., Wang, J., Zhang, Z., Ren, Z., Yuille, A., & Lim, S. N. (2019). Adversarial examples improve image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [15] Zhang, J., & Wang, X. (2020). Defense against adversarial attacks using feature scattering-based adversarial training. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [16] Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., & Li, J. (2018). Boosting adversarial attacks with momentum. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [17] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [18] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- [19] Ilyas, A., Engstrom, L., Athalye, A., & Lin, J. (2019). Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems (NeurIPS).
Researcher | Disruptor | Author | Pilot
6 days ago: Love to see it. If we leverage Generative and Stochastic Neural Networks, these agents could replace poisoned data and reinforce trustworthiness through Byzantine fault tolerance. I think comms is a huge piece of the puzzle. AI, built on tensor-based comms and open-source frameworks, must be integrated strategically into our machines and operations so that (they) [new favorite pronoun for AI, lol] can communicate with one another. Humans have a hard time communicating as it is; we don't need our AI machines drifting into their own languages and ending up incohesive, or even incompetent. Understanding the behavior logic of this ecosystem is crucial. Riley, check this one...