The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01
Keywords: Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models (logistic regression, decision trees, random forests, and deep learning) alongside the SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency. Findings reveal that the random forest model achieved the highest accuracy (93%) but required post-hoc explainability, while logistic regression provided inherent interpretability at lower accuracy (81%). SHAP identified glucose, BMI, and age as key diabetes predictors, offering robust global explanations at a higher computational cost; LIME, with lower computational overhead, provided localized insights but lacked comprehensive global interpretability. SHAP's exponential complexity limits real-time deployment, while LIME's linear complexity makes it more practical for clinical decision support. These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare. Integrating explainability techniques can improve clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that optimize accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
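For reference, the sketch below shows one way the workflow summarized above could be reproduced with scikit-learn, shap, and lime. It is a minimal illustration under assumed defaults (a hypothetical file name pima_diabetes.csv, an 80/20 split, median imputation of zero-coded missing values, unspecified hyperparameters), not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): train the two headline models
# on the PIMA dataset, report basic metrics, and attach SHAP (global) and LIME
# (local) explanations. Assumes scikit-learn, shap, lime, pandas, and numpy are
# installed and that "pima_diabetes.csv" holds the 8-feature dataset with an
# "Outcome" label column.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score
import shap
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("pima_diabetes.csv")  # hypothetical file name

# Pre-processing: zeros in these columns encode missing values; impute with medians.
for col in ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]:
    df[col] = df[col].replace(0, np.nan)
    df[col] = df[col].fillna(df[col].median())

X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Feature scaling (needed for logistic regression; harmless for the forest).
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000).fit(X_train_s, y_train),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train_s, y_train),
}
for name, model in models.items():
    pred = model.predict(X_test_s)
    proba = model.predict_proba(X_test_s)[:, 1]
    print(name, accuracy_score(y_test, pred),
          roc_auc_score(y_test, proba), f1_score(y_test, pred))

# Global explanation: SHAP values for the random forest via the tree explainer.
explainer = shap.TreeExplainer(models["random_forest"])
shap_values = explainer.shap_values(X_test_s)
# Older shap returns a per-class list, newer versions a 3-D array; keep class 1.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv_pos, X_test_s, feature_names=X.columns.tolist())

# Local explanation: LIME for a single test patient.
lime_explainer = LimeTabularExplainer(X_train_s, feature_names=X.columns.tolist(),
                                      class_names=["no diabetes", "diabetes"],
                                      mode="classification")
exp = lime_explainer.explain_instance(X_test_s[0],
                                      models["random_forest"].predict_proba,
                                      num_features=5)
print(exp.as_list())
```

In this sketch, the SHAP summary plot is where the global feature ranking reported in the abstract (glucose, BMI, and age as the top predictors) would appear, while the LIME output explains one patient's prediction locally.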
Similar Articles
- R. Kalaiselvi, P. Meenakshi Sundaram, Machine learning-based ERA model for detecting Sybil attacks on mobile ad hoc networks , The Scientific Temper: Vol. 15 No. 04 (2024): The Scientific Temper
- R. Kalaiselvi, P. Meenakshi Sundaram, Unified framework for sybil attack detection in mobile ad hoc networks using machine learning approach , The Scientific Temper: Vol. 16 No. 02 (2025): The Scientific Temper
- Subna MP, Kamalraj N, Human Activity Recognition through Skeleton-Based Motion Analysis Using YOLOv8 and Graph Convolutional Networks , The Scientific Temper: Vol. 16 No. 12 (2025): The Scientific Temper
- Poornima Dave, Aditi Shrimali, MATRIMANAS digital app for maternal mental healthcare: A research proposal , The Scientific Temper: Vol. 16 No. Spl-1 (2025): The Scientific Temper
- Pritee Rajaram Ray, Bijal Zaveri, The role of technology in implementing effective education for children with learning difficulties , The Scientific Temper: Vol. 15 No. 04 (2024): The Scientific Temper
- Josephine Theresa S, Graph Neural Network Ensemble with Particle Swarm Optimization for Privacy-Preserving Thermal Comfort Prediction , The Scientific Temper: Vol. 16 No. 12 (2025): The Scientific Temper
- Nithya R, Kokilavani T, Joseph Charles P, Multi-objective nature inspired hybrid optimization algorithm to improve prediction accuracy on imbalance medical datasets , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Somalee Mahapatra, Manoranjan Dash, Subhashis Mohanty, Adoption of artificial intelligence and the internet of things in dental biomedical waste management , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Krishna P. Kalyanathaya, Krishna Prasad K, A framework for generating explanations of machine learning models in Fintech industry , The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- R. Sakthiraman, L. Arockiam, RFSVMDD: Ensemble of multi-dimension random forest and custom-made support vector machine for detecting RPL DDoS attacks in an IoT-based WSN environment , The Scientific Temper: Vol. 16 No. 03 (2025): The Scientific Temper
Most read articles by the same author(s)
- Radha K. Jana, Dharmpal Singh, Saikat Maity, Modified firefly algorithm and different approaches for sentiment analysis , The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper

