Research on Information Security Vulnerability Testing and Verification Methods and Protection Countermeasures for LLM Applications in Intelligent Cockpits
DOI:
https://doi.org/10.54097/qnhjph62Keywords:
Intelligent Cockpit, Large Language Model, Information Security, Vulnerability Testing, Protection CountermeasureAbstract
In recent years, the application of large language models (LLMs) in intelligent cockpits has been continuously deepened, significantly improving the natural human-vehicle interaction experience. However, the generative characteristics of models, multi-modal data interaction, and access to vehicle control privileges have also introduced a series of new information security vulnerabilities such as prompt injection, data privacy leakage, unauthorized vehicle manipulation, and model hallucinations. Traditional in-vehicle security testing systems mostly target vulnerabilities in in-vehicle operating systems, in-vehicle buses, and network communications, and are difficult to adapt to the semantic-level and generative security risks brought by LLMs. To address this issue, this paper classifies and analyzes the security vulnerabilities of LLM applications in intelligent cockpit scenarios, constructs a vulnerability testing and verification method integrating static auditing, dynamic fuzz testing, red team adversarial testing, and multi-modal boundary testing, and establishes a corresponding security evaluation index system. On this basis, a hierarchical protection strategy is proposed from five dimensions: input protection, model hardening, system isolation, data security, and compliance auditing. Verified through real-vehicle environment testing, the proposed testing method can effectively identify 92.7% of known security vulnerabilities. After deploying the protection countermeasures, the relevant attack success rate drops from 38.5% to 4.3%, which can provide technical support for the secure development, testing verification, and engineering implementation of LLMs in intelligent cockpits.
Downloads
References
[1] OWASP Foundation. OWASP Top 10 for Large Language Model Security Risks, 2024.
[2] Cyberspace Administration of China. Interim Measures for the Administration of Generative Artificial Intelligence Services, 2023.
[3] Cyberspace Administration of China and other departments. Several Provisions on the Administration of Automotive Data Safety (for Trial Implementation), 2021.
[4] ISO/SAE 21434. Road vehicles—Cybersecurity engineering, 2021.
[5] China Automotive Technology and Research Center Co., Ltd. Research Report on the Testing and Evaluation System for Information Security of Intelligent Connected Vehicles, 2024.
Downloads
Published
Issue
Section
License
Copyright (c) 2026 Journal of Computer Science and Artificial Intelligence

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.








