Use of artificial intelligence to diagnose lagophthalmos

In a recent study published in Scientific Reports, researchers used a convolutional neural network (CNN) to automate lagophthalmos diagnosis.


Background

Lagophthalmos is characterized by inadequate or aberrant eyelid closure, which increases the risk of corneal ulcers and blindness.

It is a frequent symptom of many diseases and occurs in three forms: cicatricial lagophthalmos (CL), paralytic lagophthalmos (PL), and nocturnal lagophthalmos (NL). Complete eyelid closure is required to maintain a stable tear film and a hydrated ocular surface.

However, in individuals with CL, PL, and NL, the tear fluid does not sufficiently wet the eyes, and the resulting dryness can cause keratitis and keratopathy, leading to corneal ulcers, decreased vision, or blindness. Early detection and tailored therapy are critical to avoiding these complications.

Algorithm-based tools with automated diagnostics offer several advantages, including requiring no expert knowledge to operate and the capacity to substantiate or refute suspected clinical diagnoses in ambiguous patient cases.

About the study

In the present study, researchers demonstrated a new approach that uses still-image processing with CNN technology to identify visual patterns and, ultimately, diagnose lagophthalmos.

The team studied 30 lagophthalmos patients at the Regensburg University Hospital in Germany between June 2019 and May 2021. They obtained data from 10 disease-free adults who served as a control group, and the training dataset included 826 photos.

The validation and testing datasets each comprised 91 patient photographs.

The study included patients aged 18 years or older with signs of lagophthalmos who consented to therapy and study participation. The researchers excluded patients who could not provide informed consent because they could not speak German or were illiterate.

The researchers used Python 3.7 and standard machine learning and data science libraries to train and assess the CNN on 1,008 patient photographs.
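
The article does not describe how the 1,008 photographs were organized or split into the 826/91/91 training, validation, and testing sets. Purely as a sketch, and assuming a TensorFlow/Keras workflow with a simple two-folder layout (lagophthalmos/ and control/), the split might be set up roughly as follows; the folder names, image size, and batch size are all assumptions, not details from the study.

```python
# Illustrative sketch only -- the study's actual data pipeline is not described
# in the article. Folder names, image size, batch size, and the TensorFlow/Keras
# framework are assumptions made for this example.
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

# Assumed layout: photos/lagophthalmos/*.jpg and photos/control/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos",
    validation_split=0.18,          # roughly 182 of 1,008 images held out
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
holdout_ds = tf.keras.utils.image_dataset_from_directory(
    "photos",
    validation_split=0.18,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
)
# Split the held-out images in half: ~91 for validation, ~91 for testing.
n_batches = tf.data.experimental.cardinality(holdout_ds).numpy()
val_ds = holdout_ds.take(n_batches // 2)
test_ds = holdout_ds.skip(n_batches // 2)
```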

The network was designed as a lightweight CNN with three convolutional layers to keep the number of parameters low, and a dropout layer was included as a regularization strategy to prevent overfitting.

They trained the CNN for 64 epochs, deliberately avoiding additional epochs to limit overfitting, and used the testing set to assess the model checkpoint with the best validation accuracy achieved during training.

The researchers built the model using rectified linear unit (ReLU) activations for each of the three convolutional layers, with a max-pooling layer between the convolutional layers, a flatten layer, and two dense layers that relate the extracted image features to a specific output.
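
The article specifies the layer types but not the filter counts, kernel sizes, dropout rate, or deep learning framework. Purely as an illustration, a lightweight Keras model consistent with that description might look like the sketch below; every numeric hyperparameter here is an assumption rather than a detail from the study.

```python
# Minimal sketch of a lightweight CNN matching the article's description:
# three convolutional layers with ReLU, max-pooling between them, dropout
# for regularization, a flatten layer, and two dense layers. Filter counts,
# kernel sizes, dropout rate, and input shape are assumed, not reported.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.3),                    # regularization against overfitting
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # binary output: lagophthalmos vs. control
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```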

The model weights were trained without any testing data and without synthetically generated training images, in order to retain the complexity of genuine patient photographs and avoid overfitting the model.
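
The study's actual training script is not shown in the article; the following sketch merely assembles the reported choices (64 epochs, no synthetic training images, selection of the checkpoint with the best validation accuracy, and a single evaluation on the testing set) using the hypothetical `build_model`, `train_ds`, `val_ds`, and `test_ds` objects from the earlier sketches. The checkpoint file name is illustrative.

```python
# Sketch of the reported training procedure: 64 epochs, no image augmentation,
# keep the weights with the best validation accuracy, then evaluate once on
# the held-out testing set. Checkpoint path and a recent TensorFlow version
# are assumptions for this example.
import tensorflow as tf

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras",
    monitor="val_accuracy",
    save_best_only=True,
)

model = build_model()
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=64,                 # reported number of epochs
    callbacks=[checkpoint],
)

# Evaluate the best-validation-accuracy checkpoint on the testing set.
best_model = tf.keras.models.load_model("best_model.keras")
test_loss, test_accuracy = best_model.evaluate(test_ds)
print(f"test accuracy: {test_accuracy:.2f}, test loss: {test_loss:.2f}")
```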

Results

The model performed well across the training, validation, and testing datasets over 64 epochs, with mean and final accuracies of 86% and 91%, respectively, on the training dataset and mean and final accuracies of 88% and 98%, respectively, on the validation dataset.

Mean losses of 0.3 and 0.4 and final losses of 0.3 and 0.2 were observed during training and validation, respectively. The validation precision and recall were 1.0 and 0.9, yielding an F1 score of 0.97. The model's specificity in the validation dataset was 1.0, with an area under the receiver operating characteristic curve (AUROC) of 0.998.

When classifying the testing set, the final model architecture achieved 93% accuracy with a 0.20-point loss. The AUROC for model testing was 0.96 with a specificity of 0.98; the recall was 0.8 and the precision 0.96.
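
The article does not describe how these metrics were computed. As a generic illustration, precision, recall, F1, specificity, and AUROC can be derived from a model's predicted probabilities with scikit-learn as sketched below; the labels, scores, and 0.5 decision threshold are placeholders, not values from the study.

```python
# Generic illustration of the reported evaluation metrics using scikit-learn.
# `y_true` are ground-truth labels (1 = lagophthalmos, 0 = control) and
# `y_prob` are the model's predicted probabilities; both are placeholders here.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, confusion_matrix)

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])                    # placeholder labels
y_prob = np.array([0.9, 0.8, 0.4, 0.1, 0.2, 0.05, 0.7, 0.6])   # placeholder scores
y_pred = (y_prob >= 0.5).astype(int)                           # assumed 0.5 threshold

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)                          # sensitivity
f1 = f1_score(y_true, y_pred)
auroc = roc_auc_score(y_true, y_prob)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"F1={f1:.2f} specificity={specificity:.2f} AUROC={auroc:.3f}")
```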

The researchers trained the model for 17 minutes, during which accuracy metrics consistently increased while the corresponding losses decreased. These findings indicate that the model's accuracy in classifying the training and validation examples improved over time.

Validation accuracy peaked at epoch 42, whereas training accuracy peaked at epoch 56, indicating that the model was still learning and refining with each epoch.

Notably, the model demonstrated robust diagnostic performance even for images with half-open eyelids, indicating its capacity to identify and categorize essential features efficiently despite variation in how the input data are presented.

Training accuracy remained somewhat lower than validation accuracy during most epochs, demonstrating that the CNN generalized effectively to unseen data. An exception was noted at epoch 39, when training accuracy reached 83%.

Conclusions

Overall, the study findings demonstrate a novel application of artificial intelligence (a CNN) for the rapid and accurate diagnosis of lagophthalmos.

The CNN-based approach combines anti-overfitting tactics, short training times, and high accuracy, with the potential to improve clinical efficiency and patient care. The final validation accuracy (98%) exceeded the final training accuracy (91%).

The modest depth of three convolutional layers contributed to the model's generalizability. The model predicted correctly in the majority of cases, but some outputs were erroneous, indicating that further improvements are required.

Across the 64 epochs, training and validation accuracies tracked one another closely, with both reaching 87%. The model performed slightly worse on the testing dataset, with a higher loss value of 0.2.

Journal reference:
  • Knoedler, L., Alfertshofer, M., Simon, S., et al. (2023). Diagnosing lagophthalmos using artificial intelligence. Scientific Reports, 13, 21657. https://doi.org/10.1038/s41598-023-49006-3


Written by

Pooja Toshniwal Paharia

Dr. Pooja Toshniwal Paharia specializes in the clinical-radiological diagnosis and management of oral lesions and conditions and associated maxillofacial disorders.