Responsibility Definition and Risk Management in the Clinical Application of Medical Artificial Intelligence: A Review Based on Four-Level Classification
Keywords
medical artificial intelligence, responsibility definition, risk management, four-level classification system, dynamic responsibility matrix, full-lifecycle governance
Abstract
The rapid penetration of medical artificial intelligence (MAI) into clinical diagnosis and treatment has reshaped the traditional model of medical services. However, ambiguous responsibility definition and lagging risk management severely constrain its safe and compliant development. Using the four-level AI classification system (tool-type, advisor-type, collaborative-type, autonomous-type) as its analytical framework, this article systematically reviews the logic of responsibility definition and the full-lifecycle risk management mechanisms for MAI clinical applications. The review finds that a multi-stakeholder responsibility system has taken shape, covering manufacturers, medical institutions, physicians, and regulatory authorities, together with a closed-loop "prevention-control-remediation" risk management process. Deficiencies nevertheless remain in empirical validation, standard unification, adaptation to special scenarios, and coordination between technology and institutions. Future research should focus on dynamic responsibility quantification and matching, the practical design of insurance pools, and the development of lightweight management tools for primary care, providing theoretical support and practical references for the systematic governance of MAI clinical applications.
