A New Paradigm of Active Defense System Based on Large Language Models

Shujing Xiang

Keywords

large language models (LLMs), active defense, cyber security, threat detection, intelligent security

Abstract

With the continuous evolution of cyber attack techniques and the growing scale of attacks, traditional passive defense systems face unprecedented challenges. As a pivotal breakthrough in artificial intelligence, Large Language Models (LLMs) offer a novel technical pathway for constructing active defense systems. This paper systematically reviews LLM technologies for active defense. It first outlines the fundamental principles of LLMs, then explores their core technical mechanisms and analyzes their distinctive advantages in the defense domain. It next details the technical implementation and advantages of LLMs in key application scenarios, including cyber attack forensics, automated incident response, and code auditing. Finally, it examines the critical challenges LLMs face in cyber security and outlines future research directions. The paper reveals the core pathway and implementation mechanism underlying the LLM-driven paradigm shift from “passive response” to “active defense” in cyber security, thereby providing systematic technical references for academic research and engineering practice in this field.
