Currently submitted to: JMIR Medical Informatics
Date Submitted: Dec 31, 2025
Open Peer Review Period: Jan 8, 2026 - Mar 5, 2026
(currently open for review)
Warning: This is an author submission that is not peer-reviewed or edited. Preprints - unless they show as "accepted" - should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
The Clinical Generalizability Gap in AI-based Alzheimer's Diagnosis: A Systematic Analysis of Deficits and Proposed Practical Solutions
ABSTRACT
Background:
Despite the high potential of artificial intelligence (AI) in diagnosing Alzheimer's disease, a profound gap exists between the accuracy reported under ideal conditions and the reliability of these models in real-world clinical settings.
Objective:
This systematic analysis aimed to identify the root causes of this gap and propose practical solutions.
Methods:
We conducted a systematic analysis of 56 studies (2013-2023) in accordance with PRISMA 2020. A qualitative content analysis was performed around four pillars: 1) data repository characteristics, 2) data preprocessing and model design, 3) technical implementation frameworks, and 4) performance evaluation protocols.
Results:
Results indicate a methodological transition towards standardized data repositories and modern AI frameworks. However, rapid algorithm development has outpaced the maturity required for clinical generalizability. Four key deficits were identified: 1) data limitations due to reliance on restricted, low-diversity datasets (63% of studies used ADNI exclusively); 2) insufficient standardization in preprocessing and modeling, prioritizing 'convenience' over 'generalizability'; 3) a disconnect between technical capabilities and critical clinical needs (only 7% focused on the crucial sMCI/pMCI distinction); and 4) deficiencies in evaluation protocols, notably scarce multi-center validation (only 7%) and inadequate reporting of comprehensive metrics (96% relied solely on accuracy). Practical solutions to address these deficits across the data, modeling, and evaluation domains are proposed.
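The accuracy-only reporting noted in deficit 4 can be addressed at evaluation time with little effort. The following minimal sketch (in Python with scikit-learn; the variable names, the example data, and the 0.5 decision threshold are illustrative assumptions, not taken from any reviewed study) shows how a fuller metric set can be reported alongside accuracy.

```python
# Minimal sketch of comprehensive metric reporting; data and threshold are illustrative.
import numpy as np
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             roc_auc_score, f1_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])           # ground-truth labels (e.g., sMCI=0, pMCI=1)
y_score = np.array([.2, .4, .9, .6, .8, .3, .4, .7])  # model probabilities for the positive class
y_pred = (y_score >= 0.5).astype(int)                 # hard predictions at an assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
report = {
    "accuracy": accuracy_score(y_true, y_pred),
    "balanced_accuracy": balanced_accuracy_score(y_true, y_pred),  # robust to class imbalance
    "sensitivity": tp / (tp + fn),   # recall for the positive (pMCI) class
    "specificity": tn / (tn + fp),
    "f1": f1_score(y_true, y_pred),
    "auc": roc_auc_score(y_true, y_score),
}
for name, value in report.items():
    print(f"{name}: {value:.3f}")
```

Reporting the full dictionary rather than accuracy alone makes class imbalance and threshold effects visible, which is particularly relevant for the sMCI/pMCI distinction.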
Conclusions:
Transitioning from 'accuracy under ideal conditions' to 'reliability in real-world settings' is an unavoidable necessity. This requires investment in multi-center data repositories, alignment of models with clinical needs, and institutionalizing comprehensive evaluations. The findings and recommendations are generalizable to other domains of AI-based disease diagnosis.
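As one concrete illustration of the multi-center evaluation called for above, the sketch below uses leave-one-site-out cross-validation so that every reported score comes from an acquisition site unseen during training. The feature matrix, labels, site identifiers, and the logistic-regression classifier are hypothetical stand-ins, not the pipeline of any reviewed study.

```python
# Minimal sketch, assuming a tabular feature matrix X, labels y, and a
# per-subject acquisition-site array `site` (all hypothetical stand-ins).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))        # stand-in features (e.g., regional volumes)
y = rng.integers(0, 2, size=120)      # stand-in diagnostic labels
site = rng.integers(0, 4, size=120)   # stand-in acquisition-site identifiers

# Each fold trains on all but one site and tests on the held-out site, so the
# reported scores reflect cross-center rather than within-center performance.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=site, cv=LeaveOneGroupOut(),
                         scoring="roc_auc")
print("per-site AUC:", np.round(scores, 3))
```

When only a single repository such as ADNI is available, grouping by acquisition site in this way is a pragmatic proxy for true multi-center validation, though it does not replace evaluation on an independent external cohort.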
Copyright
© The authors. All rights reserved. This is a privileged document currently under peer-review/community review (or an accepted/rejected manuscript). Authors have provided JMIR Publications with an exclusive license to publish this preprint on its website for review and ahead-of-print citation purposes only. While the final peer-reviewed paper may be licensed under a CC BY license on publication, at this stage the authors and publisher expressly prohibit redistribution of this draft paper other than for review purposes.