Poster Presentation Australasian Society for Dermatology Research Annual Scientific Meeting 2024

Automated photodamage assessment from 3D total body photography for an objective assessment of melanoma risk (#73)

Sam Kahler 1, Siyuan Yan 2, Adam Mothershaw 3, Chantal Rutjes 1, Clare Primiero 1, Dilki Jasasinghe 3, Monika Janda 3, Zongyuan Ge 2, Peter H Soyer 1, Brigid Betz-Stablein 1
  1. Frazer Institute, The University of Queensland, Dermatology Research Centre, Brisbane, Queensland, Australia
  2. Monash eResearch Centre, Monash University, Melbourne, Victoria, Australia
  3. Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Queensland, Australia

Background:

Ultraviolet radiation is the primary environmental risk factor for melanoma, yet skin photodamage is rarely included in melanoma risk prediction models. This is largely due to a reliance on self-reporting, which is prone to bias and has limited reproducibility. To address this challenge, we developed and validated a clinical photonumeric scale, applied the scale to achieve accurate image annotation, and trained a convolutional neural network (CNN) to automate photodamage assessment.


Methods:

3D total body photography (3D-TBP) of 76 participants was cropped into 19,481 image tiles encompassing the complete cutaneous surface area. The clinical photonumeric scale was developed for the main task of classifying photodamage and the auxiliary task of classifying pigmentation. Photodamage was annotated by two dermatologically experienced students and two laypeople, with agreement assessed using Cohen's kappa. The annotated images were used to train a convolutional neural network with multi-task learning to automate photodamage assessment.
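As background on the agreement metric: Cohen's kappa corrects the observed agreement between two raters for the agreement expected by chance given each rater's label frequencies. A minimal sketch, using made-up labels rather than study data:

```python
# Cohen's kappa for two raters' photodamage labels.
# The label sequences below are illustrative, not study annotations.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items labelled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters pick the same class at random
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n**2
    return (observed - expected) / (1 - expected)

# Perfect agreement yields kappa = 1.0
print(cohens_kappa(["mild", "moderate"] * 3, ["mild", "moderate"] * 3))  # 1.0
```

A kappa of 0 indicates agreement no better than chance, so the statistic is a stricter measure than raw percentage agreement.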


Results:

Inter-rater reliability of the photonumeric scale demonstrated excellent agreement between students (k=0.81), good to excellent agreement between students and laypeople (k=0.83, 0.77), and good agreement between laypeople (k=0.73). The CNN achieved an overall accuracy of 83%, with class-specific accuracies of 90% for mild, 69% for moderate, and 87% for severe photodamage. CNN predictions demonstrated good to excellent agreement with the ground truth (student annotation) across each body site (k=0.66-0.83). A user interface was developed to overlay CNN-labelled photodamage as heatmaps on 3D-TBP patient avatars to facilitate clinical interpretation.
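The multi-task learning setup described in the Methods, where a shared backbone feeds a main photodamage head and an auxiliary pigmentation head, can be sketched as below. The backbone architecture, class counts, and auxiliary loss weight are illustrative assumptions, not the published model:

```python
# Sketch of a multi-task classifier: a shared convolutional backbone with
# two heads, one for the main photodamage task and one for the auxiliary
# pigmentation task. All hyperparameters here are assumptions.
import torch
import torch.nn as nn

class MultiTaskPhotodamageNet(nn.Module):
    def __init__(self, n_photodamage=3, n_pigmentation=3):
        super().__init__()
        # Shared backbone (stand-in for a pretrained CNN feature extractor)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.photodamage_head = nn.Linear(32, n_photodamage)    # main task
        self.pigmentation_head = nn.Linear(32, n_pigmentation)  # auxiliary task

    def forward(self, x):
        features = self.backbone(x)
        return self.photodamage_head(features), self.pigmentation_head(features)

model = MultiTaskPhotodamageNet()
tiles = torch.randn(4, 3, 64, 64)  # a batch of cropped image tiles
photo_logits, pig_logits = model(tiles)

# Joint loss: main task plus a down-weighted auxiliary task (weight assumed)
loss = nn.functional.cross_entropy(photo_logits, torch.tensor([0, 1, 2, 0])) \
     + 0.5 * nn.functional.cross_entropy(pig_logits, torch.tensor([1, 1, 0, 2]))
```

Because the two heads share one backbone, the auxiliary pigmentation signal acts as a regulariser on the features used for the main photodamage prediction.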


Conclusions:

Our CNN provides a novel tool to objectively report the severity and distribution of photodamage from total body photography as a phenotypic risk factor for melanoma. To our knowledge, this is the first AI model for the automated assessment of photodamage; it may be incorporated into existing or new risk prediction models and assist with stratifying patients for targeted melanoma surveillance.