Research Article
Open access

Alzheimer’s from Biology to Tests and AI Support

Haoyu Chen 1*
  • 1 Santa Monica College, Santa Monica, California, United States    
  • *corresponding author A1670194659@outlook.com
Published on 26 November 2025 | https://doi.org/10.54254/2755-2721/2025.LD30060
ACE Vol.210
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-80590-567-7
ISBN (Online): 978-1-80590-568-4

Abstract

Alzheimer’s disease (AD) causes memory loss, impaired thinking, and a decline in daily functioning, and it places heavy stress on both patients and families. For a long time, diagnosis relied mainly on symptoms, but symptoms appear late and overlap with other conditions. In recent years, a “biology first” approach has emerged that uses biomarkers such as amyloid, tau, and neurodegeneration to give clearer and earlier answers. At the same time, artificial intelligence (AI) is playing a growing role, for example in analyzing scans, speech, and clinical data, although challenges remain, such as ensuring that these tools are applied fairly across settings and populations. This review brings together the current biological view of AD, the role of clinical and biological assessments, and the growing support provided by AI. Its main goal is to help readers understand how diagnosis is moving from late, symptom-based methods to earlier and more reliable systems.

Keywords:

AI in AD care, Alzheimer’s disease (AD), MRI, PET, speech data


1. Introduction

Alzheimer’s disease (AD) causes memory loss, impaired thinking, and problems with daily living, and it places substantial pressure on healthcare systems. For many years, doctors relied mainly on symptoms and brief cognitive tests to identify AD, but symptoms often appear late and can resemble other conditions. A newer “biology first” approach offers a clearer path. In 2018, experts proposed the AT(N) system, which describes three kinds of change in the brain: amyloid, tau, and neurodegeneration.

AD changes not only the life of the person who has it but also the lives of family members. Caring for AD patients is often physically and mentally exhausting and can create financial strain. The disease progresses slowly: at first it may look like minor memory trouble, but over time it affects speech, decision-making, and basic daily skills. These changes can take many years to become obvious, so waiting for clear symptoms may mean acting too late. This is why doctors and scientists have looked for earlier and more reliable ways to detect AD, and why the “biology first” idea has become so important.

This paper examines AD from three angles. First, it describes the shift from purely symptom-based assessment to brain and blood tests, grounded in the AT(N) system and the 2024 revised criteria. Second, it reviews the role of brief clinical tests such as the MMSE, MoCA, and CDR, which remain useful in clinics. Third, it discusses how artificial intelligence (AI) can help doctors detect AD and track its progression, especially when combining different types of data such as MRI, PET, speech, and blood.

The goal of this review is to help readers understand how AD detection can be improved. By combining biological markers with AI tools, doctors can find the disease earlier and give more timely support to patients and families.

2. Defining and assessing Alzheimer’s disease

2.1. Basic information about Alzheimer’s disease

Alzheimer’s disease (AD) is now defined mainly by biology. The 2018 NIA–AA framework says AD “is defined in vivo by biomarkers,” not by symptoms alone [1]. These biomarkers are grouped as amyloid (A), tau (T), and neurodegeneration (N). This AT(N) system separates the presence of AD biology from the level of symptoms. The 2024 revision keeps this view and aims to “present objective criteria for diagnosis and staging AD,” so that daily practice can follow clear rules [2]. It also gives blood-based biomarkers a formal place next to CSF and PET. A simple rule follows in clinics: first confirm AD biology, then stage the disease.
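To make the AT(N) logic concrete, the minimal sketch below maps a biomarker profile to the category labels used in the 2018 framework [1]; the function and the Boolean inputs are illustrative assumptions, not a clinical tool.

```python
# Minimal sketch: mapping an AT(N) biomarker profile to the biological categories
# described in the 2018 NIA-AA framework [1]. Category labels follow that framework;
# the function itself is illustrative only.

def atn_category(a_positive: bool, t_positive: bool, n_positive: bool) -> str:
    """Return the biomarker category for an AT(N) profile."""
    if a_positive and t_positive:
        # Amyloid and tau both abnormal: Alzheimer's disease (biological definition)
        return "Alzheimer's disease"
    if a_positive:
        # Amyloid abnormal, tau normal: Alzheimer's pathologic change
        return "Alzheimer's pathologic change"
    if t_positive or n_positive:
        # Amyloid normal but tau or neurodegeneration abnormal
        return "Non-AD pathologic change"
    return "Normal AD biomarkers"

# Example: amyloid PET positive, tau PET positive, atrophy present
print(atn_category(a_positive=True, t_positive=True, n_positive=True))
```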

Large, shared cohorts made this shift possible. ADNI calls itself “a longitudinal, multi-center, observational study” whose goal is “to validate biomarkers for Alzheimer’s disease (AD) clinical trials” [3]. It shares MRI, PET, fluid, genetic, and cognitive data from many sites. Open sharing allows fair testing and external validation. It also lets studies track change from normal aging to mild cognitive impairment (MCI) and to dementia.

Clinical assessment still matters. The Mini-Mental State Examination (MMSE) was introduced as “a practical method for grading the cognitive state of patients for the clinician” and is quick to use [4]. The Montreal Cognitive Assessment (MoCA) was designed as “a brief screening tool for mild cognitive impairment” and often finds early change better than the MMSE [5]. The Clinical Dementia Rating (CDR) provides standardized scoring rules and supports global staging over time [6]. These tools are cheap and easy to administer, but scores can vary with language, education, and rater skill, so very early change may be missed. Biomarker rules and quantitative models therefore work best as complements, not replacements.
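As a rough illustration of how these brief scores might be summarized, the sketch below uses commonly cited screening cutoffs (MMSE below 24, MoCA below 26) and the published CDR global stages [4-6]; the cutoffs are assumptions that shift with language and education, as noted above.

```python
# Minimal sketch of summarizing brief cognitive scores in a chart-review script.
# MMSE < 24 and MoCA < 26 are commonly cited screening thresholds (assumptions here);
# CDR global stages follow the published scale [4-6].

CDR_STAGE = {
    0.0: "normal",
    0.5: "very mild / questionable impairment",
    1.0: "mild dementia",
    2.0: "moderate dementia",
    3.0: "severe dementia",
}

def flag_for_follow_up(mmse: int, moca: int, cdr_global: float) -> dict:
    """Summarize brief cognitive tests; scores below cutoff suggest further work-up."""
    return {
        "mmse_below_cutoff": mmse < 24,   # MMSE ranges 0-30
        "moca_below_cutoff": moca < 26,   # MoCA ranges 0-30
        "cdr_stage": CDR_STAGE.get(cdr_global, "unknown"),
    }

print(flag_for_follow_up(mmse=25, moca=23, cdr_global=0.5))
```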

2.2. How AI helps in Alzheimer’s care

Artificial intelligence (AI) now supports tasks that match clinic needs: detect AD, separate AD from non-AD, grade stage, and predict MCI to dementia. Single-modality imaging shows what one source can do. A 3D network learned directly from T1-weighted MRI and reached strong group separation and conversion prediction using “a single MRI,” without hand-crafted features [7]. Still, accuracy can drop when scanners or sites change. This domain-shift problem calls for harmonization and independent external tests beyond internal cross-validation.
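The sketch below shows, in PyTorch, the general shape of a 3D convolutional classifier over a whole-brain T1 volume in the spirit of [7]; the layer sizes, input dimensions, and class count are illustrative assumptions, not the architecture of the cited study.

```python
# Minimal PyTorch sketch of a 3D convolutional classifier over a T1-weighted MRI volume.
# Layer sizes, input shape, and class count are assumptions for illustration.

import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling tolerates varying volume sizes
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, depth, height, width), e.g. a skull-stripped, registered T1 volume
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = Simple3DCNN()
dummy = torch.randn(2, 1, 96, 112, 96)        # assumed downsampled volume size
logits = model(dummy)                         # shape (2, 2): e.g. AD vs. control scores
```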

Speech is a low-burden digital marker. The ADReSS challenge set up “a shared task…based on spontaneous speech” with two targets, dementia classification and MMSE regression, on a balanced dataset, which makes method comparison fair [8]. Results show useful acoustic and language signals, but microphones, languages, and recording conditions differ. The real value may lie in screening and follow-up, especially where imaging is hard to access.
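A minimal version of such a pipeline might look like the sketch below: simple acoustic summaries of each recording feeding one classifier for dementia status and one regressor for MMSE, mirroring the two ADReSS targets [8]; the MFCC-based features, models, and file names are assumptions, and the challenge baselines use richer feature sets.

```python
# Minimal sketch of a speech-based screening pipeline with the two ADReSS-style targets.
# Feature choice (MFCC statistics), models, and file paths are illustrative assumptions.

import numpy as np
import librosa
from sklearn.svm import SVC, SVR

def mfcc_stats(wav_path: str) -> np.ndarray:
    """Summarize a recording with mean and std of MFCCs (a simple acoustic fingerprint)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# X: one feature vector per participant; y_cls: dementia label; y_mmse: MMSE score
# (paths and labels below are placeholders for a dataset such as ADReSS)
X = np.stack([mfcc_stats(p) for p in ["s001.wav", "s002.wav"]])
y_cls, y_mmse = np.array([0, 1]), np.array([29.0, 21.0])

clf = SVC(kernel="rbf").fit(X, y_cls)         # classification: dementia vs. control
reg = SVR(kernel="rbf").fit(X, y_mmse)        # regression: predicted MMSE score
```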

Multi-modal fusion often works better because signals are complementary. On ADNI, deep models that mix inputs beat single-modality baselines. One study reports that “integrating multi-modality data outperforms single modality models in terms of accuracy, precision, recall, and mean F1 scores” [9]. A Nature Communications pipeline “accomplishes multiple diagnostic steps in a successive fashion,” which mirrors clinical flow from broad screening to refined labels [10]. On OASIS-3, a Scientific Reports study shows that full 3D volumes “learn more effective representations than…2D images,” and that adding amyloid PET “enhances…performance” over MRI alone [11]. This fits the biology: MRI shows structure and atrophy, amyloid PET shows a defining pathology, and clinical features reflect history and function.
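A minimal late-fusion sketch, in the spirit of [9-11] but with assumed dimensions and simple placeholder encoders, is shown below: separate branches for an MRI embedding, an amyloid-PET embedding, and tabular clinical features, concatenated before a shared classifier.

```python
# Minimal PyTorch sketch of late fusion across modalities. Dimensions, the simple MLP
# encoders, and the three-class head are assumptions for illustration.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, mri_dim=128, pet_dim=128, clin_dim=16, n_classes=3):
        super().__init__()
        self.mri_enc = nn.Sequential(nn.Linear(mri_dim, 64), nn.ReLU())
        self.pet_enc = nn.Sequential(nn.Linear(pet_dim, 64), nn.ReLU())
        self.clin_enc = nn.Sequential(nn.Linear(clin_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 64 + 16, n_classes)   # e.g. control / MCI / AD

    def forward(self, mri_feat, pet_feat, clin_feat):
        # Each input is a precomputed embedding (e.g. from a 3D CNN) or raw tabular features
        fused = torch.cat(
            [self.mri_enc(mri_feat), self.pet_enc(pet_feat), self.clin_enc(clin_feat)],
            dim=1,
        )
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 16))
```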

Reviews give balance. A 2024 meta-analysis of MRI-based machine learning found gains but also “significant heterogeneity” in datasets and pipelines [12]. A 2025 review on MCI→dementia prediction found “limited generalizability and high risk of bias,” and noted that fully independent external validation is still rare [13]. These points warn against strong claims from single-site, retrospective work. They also argue for reporting calibration and decision-curve analysis, not only accuracy or AUC.

A clear division of labor is forming. Biomarkers define the disease. Clinical tools measure function and track change. AI systems fuse evidence to improve sensitivity and consistency, especially early. The best pipelines follow the 2024 rule: confirm biology with approved “Core 1” biomarkers, then estimate stage or risk using images, tests, and perhaps blood [2]. ADNI and similar data help, but over-fitting to a few cohorts is a risk. Real-world use needs prospective checks and a good workflow fit.

Key limits are real but fixable. Models lose accuracy when scanners, protocols, or patient groups differ. This domain shift is common and should be tested outside the training set. Good studies plan independent external validation in advance and report results by site, device, and basic demographics. Under the 2024 scheme, outputs also need good calibration, so the same risk score means the same thing in each clinic. Explanations should point to known atrophy or amyloid patterns, not only wide heat maps. Speech features look useful for low-burden screening if datasets grow across languages and devices and if endpoints match outcomes that matter for care. As blood tests become stable and collection more uniform, fusing MRI, PET, cognitive tests, and blood is the path most likely to give robust and trusted support in clinics.

Age is one of the strongest risk factors for late-onset disease, while education and mental activity may delay symptoms. Early complaints usually involve episodic memory, though some patients first show language, visuospatial, or executive change. Many patients pass through MCI before dementia.

A clear clinic workflow is also needed. First, take a history and administer brief cognitive tests. Second, use MRI to look for atrophy and to rule out other causes. Third, confirm AD biology when a therapy or trial decision is near, because “An abnormal Core 1 biomarker is sufficient to establish a diagnosis of AD” [2]. Fourth, stage with the CDR and track change with the MMSE or MoCA [4–6]. As the revised criteria state, “staging…applies only to individuals in whom the disease has been diagnosed by means of Core 1 biomarkers” [2].
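The sketch below restates this workflow as a small decision function; the inputs and return strings are illustrative assumptions, not clinical rules.

```python
# Minimal sketch of the stepwise workflow described above: brief tests and MRI first,
# Core 1 biomarker confirmation when a therapy or trial decision is near, then CDR-based
# staging [2, 4-6]. All names, inputs, and messages are illustrative.

from typing import Optional

def clinic_workflow(brief_tests_abnormal: bool,
                    mri_suggests_other_cause: bool,
                    therapy_or_trial_decision: bool,
                    core1_biomarker_positive: Optional[bool]) -> str:
    if not brief_tests_abnormal:
        return "Reassure and re-screen at a later visit"
    if mri_suggests_other_cause:
        return "Work up the alternative cause of impairment"
    if therapy_or_trial_decision:
        if core1_biomarker_positive is None:
            return "Order a Core 1 biomarker (CSF, PET, or validated blood test)"
        if core1_biomarker_positive:
            return "AD diagnosed biologically: stage with CDR, track with MMSE/MoCA"
        return "AD biology not confirmed: consider other diagnoses"
    return "Monitor clinically; repeat brief tests at follow-up"

print(clinic_workflow(True, False, True, None))
```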

Method details can make the claims clearer. A 3D network trained on whole-brain T1 MRI can classify groups and predict MCI conversion with “a single MRI.” It still needs checks on independent sites to avoid hidden bias [7]. For speech, the ADReSS team set “a shared task…based on spontaneous speech,” with fixed splits and targets. Models that use both sound and words beat baselines, but wider tests across devices and languages are still needed [8].

Fusion often adds value. On ADNI, mixing MRI with clinical or genetic data improves common metrics; “integrating multi-modality data outperforms single modality models” [9]. A Nature Communications pipeline “accomplishes multiple diagnostic steps in successive fashion,” which matches clinical flow and supports use at the bedside [10]. On OASIS-3, full 3D inputs “learn more effective representations,” and adding amyloid PET “enhances…performance” beyond MRI alone [11]. These gains fit the underlying biology: MRI shows structure and atrophy, PET shows a defining pathology, and clinical data carry history and function.

Evaluation must go beyond accuracy. External validation should hold out at least one site and one window. Reports should include site/vendor-wise results for imaging and device/setting-wise results for speech. Calibration should make a “0.70 risk” mean about 70% in the target clinic, and decision-curve analysis should show net benefit across realistic thresholds. These steps answer the reviewers’ concerns about “heterogeneity” and “limited generalizability” [12,13].
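The sketch below illustrates those two checks on placeholder predictions: a calibration curve via scikit-learn and net benefit computed with the standard decision-curve formula; the thresholds and data are assumptions.

```python
# Minimal sketch of calibration and decision-curve net benefit for a binary AD-risk model.
# Predictions, labels, and thresholds are placeholders; the net-benefit formula is the
# standard decision-curve form.

import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # placeholder outcomes
y_prob = np.array([0.1, 0.3, 0.8, 0.7, 0.6, 0.2, 0.9, 0.4])   # placeholder risk scores

# Calibration: observed event rate per bin vs. mean predicted risk
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=4)

def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a decision threshold: TP/N - FP/N * (t / (1 - t))."""
    pred_pos = y_prob >= threshold
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * (threshold / (1 - threshold))

for t in (0.2, 0.3, 0.5):
    print(f"threshold {t:.1f}: net benefit {net_benefit(y_true, y_prob, t):.3f}")
```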

AI helps in clinics by cutting the time to a biologic diagnosis, sending the right patients to specialists, and keeping staging more consistent. “Silent” trials can quietly watch model accuracy when scanners or lab tests change and tell teams when to recalibrate. Mixing AI with blood tests can catch disease earlier and keep risk scores reliable [9-11]. Used this way, AI gives earlier, steadier support while doctors stay in charge.
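One way to set up such a silent trial is sketched below: a rolling window of scored cases with later-confirmed outcomes, and an alert when a chosen metric drops below a preset floor; the window size, metric, and floor are assumptions.

```python
# Minimal sketch of a "silent trial" monitor: track a rolling performance metric on
# incoming cases and flag drift, prompting recalibration. Window, metric, and floor
# are assumptions.

from collections import deque
from sklearn.metrics import roc_auc_score

class SilentTrialMonitor:
    def __init__(self, window: int = 200, auc_floor: float = 0.80):
        self.scores, self.labels = deque(maxlen=window), deque(maxlen=window)
        self.auc_floor = auc_floor

    def add_case(self, risk_score: float, outcome: int) -> bool:
        """Record a scored case with its confirmed outcome; return True if drift is flagged."""
        self.scores.append(risk_score)
        self.labels.append(outcome)
        if len(set(self.labels)) < 2:          # need both classes present to compute AUC
            return False
        auc = roc_auc_score(list(self.labels), list(self.scores))
        return auc < self.auc_floor

monitor = SilentTrialMonitor(window=100, auc_floor=0.75)
drifted = monitor.add_case(risk_score=0.62, outcome=1)   # flags once enough cases accrue
```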

3. Conclusion

This paper gives an overview of Alzheimer’s disease from three main angles: how the disease is now defined by biology, how clinical tools are still used to check memory and function, and how new methods such as AI and multi-modal data are being added to improve diagnosis. By looking at both traditional and modern approaches, it shows the path from symptoms to biomarkers and from single tools to combined systems.

The main value of this review is to help readers understand what Alzheimer’s disease is, how it can be checked, and how these checks are being updated with new science and technology. It highlights that progress is being made step by step, and that combining biology, clinical tools, and new methods gives a clearer and more reliable way to detect and follow the disease.


References

[1]. Jack, C. R., Jr., et al. (2018). NIA–AA research framework: Toward a biological definition of Alzheimer’s disease. Alzheimer’s & Dementia. Defined in vivo by biomarkers.

[2]. Jack, C. R., Jr., et al. (2024). Revised criteria for diagnosis and staging of Alzheimer’s disease. Alzheimer’s & Dementia. Present objective criteria for diagnosis and staging AD.

[3]. ADNI official site. (n.d.). A longitudinal, multi-center, observational study…validate biomarkers for Alzheimer’s disease clinical trials.

[4]. Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research.

[5]. Nasreddine, Z. S., et al. (2005). The Montreal Cognitive Assessment (MoCA): A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society.

[6]. Morris, J. C. (1993). The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology.

[7]. Basaia, S., et al. (2019). Automated classification of Alzheimer’s disease and mild cognitive impairment using a single MRI and deep neural networks. NeuroImage: Clinical. A single MRI deep model for diagnosis and conversion prediction.

[8]. Luz, S., et al. (2020). Alzheimer’s Dementia Recognition through Spontaneous Speech (ADReSS Challenge). INTERSPEECH. A shared task based on spontaneous speech.

[9]. Venugopalan, J., Tong, L., Hassanzadeh, H. R., & Wang, M. D. (2021). Multimodal deep learning models for early detection of Alzheimer’s disease stage. Scientific Reports. Integrating multi-modality data outperforms single modality models.

[10]. Qiu, S., et al. (2022). Multimodal deep learning for Alzheimer’s disease dementia assessment. Nature Communications. Multiple diagnostic steps in successive fashion.

[11]. Castellano, G., et al. (2024). Automated detection of Alzheimer’s disease: A multi-modal approach with 3D MRI and amyloid PET (OASIS-3). Scientific Reports. Volumetric representations and integrating enhances performance.

[12]. Battineni, G., Chintalapudi, N., & Amenta, F. (2024). Machine learning driven by MRI for the classification of Alzheimer disease progression: Systematic review and meta-analysis. JMIR Aging.

[13]. Vermeulen, R. J., et al. (2025). Limited generalizability and high risk of bias in multivariable models predicting conversion risk from mild cognitive impairment to dementia: A systematic review. Alzheimer’s & Dementia. Limited generalizability and high risk of bias.


Cite this article

Chen,H. (2025). Alzheimer’s from Biology to Tests and AI Support. Applied and Computational Engineering,210,36-40.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of CONF-MLA 2025 Symposium: Intelligent Systems and Automation: AI Models, IoT, and Robotic Algorithms

ISBN:978-1-80590-567-7(Print) / 978-1-80590-568-4(Online)
Editor:Hisham AbouGrad
Conference date: 12 November 2025
Series: Applied and Computational Engineering
Volume number: Vol.210
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
