
MacularNet: Towards Fully Automated Attention-Based Deep CNN for Macular Disease Classification

Mandal, Bappaditya

Abstract

In this work, we propose an attention-based deep convolutional neural network (CNN) model as an assistive computer-aided tool to classify common types of macular diseases (age-related macular degeneration, diabetic macular edema, diabetic retinopathy, choroidal neovascularization, macular hole, and central serous retinopathy) against normal macular conditions using scans from optical coherence tomography (OCT) imaging. Our architecture unifies deep pre-trained models, refined via transfer learning on limited training data, with a deformation-aware attention mechanism that encodes the crucial morphological variations of macular pathology: deformation of the retinal layers, detachment from adjacent layers, and the presence of fluid-filled regions, geographic atrophy, scars, cysts, and drusen. The attention module guides the base network to focus automatically on the salient features arising from these structural abnormalities while suppressing irrelevant regions. A key advantage of the proposed method is that it requires no pre-processing steps such as retinal flattening, denoising, or region-of-interest selection, making it fully automatic and end-to-end trainable. It also requires fewer network parameters while achieving higher diagnostic performance. Extensive experiments and analysis on four datasets, together with ablation studies, show that the proposed architecture achieves state-of-the-art performance.
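The attention-gating idea described in the abstract (a learned mask that up-weights salient regions of a feature map and suppresses irrelevant ones) can be sketched framework-free. This is a minimal illustration under assumed toy shapes; the `attention_gate` helper, its scalar parameters, and the single-channel feature map are hypothetical simplifications, not the paper's actual MacularNet module.

```python
import math

def sigmoid(x):
    """Standard logistic function, squashing scores into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(features, weight, bias):
    """Gate a 2D single-channel feature map with a learned attention mask.

    features: H x W nested list of floats (toy feature map)
    weight, bias: scalar parameters of a toy 1x1 scoring function
    Returns features * sigmoid(weight * features + bias) element-wise,
    so high-response (salient) locations are preserved while
    low-response regions are attenuated toward zero.
    """
    gated = []
    for row in features:
        mask_row = [sigmoid(weight * v + bias) for v in row]
        gated.append([f * m for f, m in zip(row, mask_row)])
    return gated

# Toy example: the strong response at (1, 0) passes through almost
# unchanged, while weak responses are suppressed by the mask.
fmap = [[0.0, 0.1],
        [4.0, 0.0]]
out = attention_gate(fmap, weight=2.0, bias=-2.0)
```

In a real CNN the mask would be produced by learned convolutions over a multi-channel feature map; the essential operation, an element-wise product between features and a (0, 1) attention map, is the same.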

Acceptance Date Aug 4, 2022
Publication Date Mar 22, 2022
Journal SN Computer Science
Print ISSN 2662-995X
Publisher URL
Additional Information The final version of this article and all relevant information related to it, including copyrights, can be found on the publisher website at;

