PET-MADDi: Extended Multimodal Alzheimer's Disease Diagnosis Framework including PET scans
In practical settings such as multimedia content analysis, medical diagnosis, and sensor data fusion, a multi-class model that processes and integrates diverse modalities offers notable advantages: it can exploit the distinct strengths of each modality while mitigating their individual limitations. Integrating multiple modalities also strengthens the model's ability to handle complex real-world scenarios in which information is dispersed across varied sources and formats. Our work extends the Multi-modal Alzheimer's Disease Diagnosis framework (MADDi) by integrating PET data alongside MRI, genetic, and clinical data to detect the various stages of Alzheimer's Disease (AD). In our experiments, combining features from different modalities yields 100% classification accuracy, surpassing the base model's performance. These deep learning (DL) and machine learning (ML) models mark a significant advance in AD diagnosis, promising earlier detection and improved patient care, and highlighting the potential of DL/ML and multimodal data integration in medical research and healthcare.
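The abstract does not specify the fusion mechanism, so the following is only a minimal sketch of one common approach, concatenation-based late fusion, with illustrative modality names, embedding sizes, and random weights standing in for trained encoders; none of these choices are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality input dimensions (illustrative only,
# not the dimensions used by MADDi or PET-MADDi).
DIMS = {"mri": 64, "pet": 64, "genetic": 32, "clinical": 16}
EMB = 8        # shared embedding size per modality
N_CLASSES = 3  # e.g. a three-way AD staging task

def encode(x, w):
    """Toy per-modality encoder: linear projection + ReLU."""
    return np.maximum(x @ w, 0.0)

# Random projections stand in for each modality's trained encoder.
weights = {m: rng.standard_normal((d, EMB)) for m, d in DIMS.items()}

def fuse_and_classify(sample, w_out):
    """Late fusion: embed each modality, concatenate, classify."""
    parts = [encode(sample[m], weights[m]) for m in DIMS]
    fused = np.concatenate(parts)   # shape: (len(DIMS) * EMB,)
    logits = fused @ w_out          # linear classifier head
    return int(np.argmax(logits))

w_out = rng.standard_normal((EMB * len(DIMS), N_CLASSES))
sample = {m: rng.standard_normal(d) for m, d in DIMS.items()}
pred = fuse_and_classify(sample, w_out)
print(pred)  # predicted class index
```

The appeal of this design is that each modality keeps its own encoder suited to its data type (CNNs for imaging, dense layers for clinical or genetic features), and the classifier sees all embeddings jointly, so no single modality's weaknesses dominate the prediction.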