The Culture of Guilt and the Responsibility of the Physician when Using Clinical Decision Support Systems

 
PII: S023620070026423-2-1
DOI: 10.31857/S023620070026423-2
Publication type: Article
Status: Published
Authors
Affiliation: National Research University Higher School of Economics
Address: 20, Myasnitskaya Str., Moscow 101000, Russian Federation
Journal name: Chelovek
Edition: Volume 34, Issue 3
Pages: 9–23
Abstract

The purpose of this article is to substantiate the need to revise the structure of the physician's moral and criminal responsibility in a social system that imposes a culture of guilt on the physician, that is, sole personal responsibility for any medical error, including errors made with the participation of a clinical decision support system. Since the 1970s, the paternalistic model of the doctor–patient relationship has been giving way to a model of collaborative responsible behavior, in which each party is obliged to know and understand the nature of clinical decisions; when AI technologies are included in diagnostic and treatment procedures, this must also be supported by a special form of informed consent for both patient and physician. At the same time, there is in fact no clear regulation of communication and of work with information, nor of the personal responsibility of senior management for building such a medical support system. Because constantly developing artificial intelligence technologies (neural networks being the prime example) are fundamentally opaque in their operation, the final decision must remain with the physician. However, justifying the physician's responsibility under the new technological conditions requires a new social contract within the professional community regarding the terms on which clinical decision support systems are introduced into broad medical practice.

Keywords: artificial intelligence, trust, medical error, physician responsibility, clinical decision support system, culture of guilt, autonomy
Received: 28.06.2023
Publication date: 28.06.2023
Number of characters: 27663

