Explainable Artificial Intelligence in Decision Support Models for Healthcare 5.0

Keywords: explainable artificial intelligence, XAI, artificial intelligence, deep learning, Healthcare 5.0, personalized medicine

Abstract

Industry 5.0 is built on personalization: personalized services, smart devices, assistive robots, and now personalized medicine, a direction developed within the framework of the Healthcare 5.0 philosophy. This paper discusses the technological aspects of applying new-generation artificial intelligence models to personalized-medicine tasks in Healthcare 5.0. It analyzes the possibilities of using explainable artificial intelligence (XAI) models in healthcare tasks, presents a classification of XAI methods, and examines the most popular XAI algorithms. The paper also surveys applications of XAI algorithms in medicine, covering the tasks addressed, the specific algorithms used, and the artificial neural network architectures involved.
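
For a concrete illustration of the kind of XAI algorithm the paper surveys, the sketch below applies SHAP feature attributions to a toy clinical risk classifier. This is a minimal, hypothetical example (not material from the paper): it assumes the shap and scikit-learn Python packages, and the patient features and labels are invented placeholders.

```python
# Minimal sketch: post-hoc explanation of a tabular clinical classifier with SHAP.
# Assumes the `shap` and `scikit-learn` packages; all data here is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical patient features: age, BMI, systolic blood pressure, glucose.
X = rng.normal(size=(500, 4))
# Synthetic "disease risk" label driven mostly by glucose and blood pressure.
y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles,
# showing how much each feature pushed an individual prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # per-feature attributions for the first five patients
```

Per-patient attributions of this kind are what make such models usable in clinical decision support: a physician can see which measurements drove a given risk score.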

Author Biographies

Alexey Averkin, Plekhanov Russian University of Economics, 36 Stremyanny Lane, 117997, Moscow, Russia

Candidate of Sciences (Phys.-Math.), Associate Professor, Leading Researcher at the Research Center for Advanced Studies in Artificial Intelligence, Plekhanov Russian University of Economics, Moscow, averkin2003@inbox.ru

Sergey Yarushev, Plekhanov Russian University of Economics, 36 Stremyanny Lane, 117997, Moscow, Russia

Candidate of Sciences (Tech.), Associate Professor, Director of the Research Center for Advanced Studies in Artificial Intelligence, Plekhanov Russian University of Economics, Moscow, sergey.yarushev@icloud.com


Published
2023-06-30
How to Cite
Averkin, A., & Yarushev, S. (2023). Explainable Artificial Intelligence in Decision Support Models for Healthcare 5.0. Computer Tools in Education, (2), 41-61. https://doi.org/10.32603/2071-2340-2023-2-41-61
Section
Artificial intelligence and machine learning