Auto-Classification of Acne Lesions Using Multimodal Imaging

July 2013 | Volume 12 | Issue 7 | Original Article | 746 | Copyright © 2013

Sachin V. Patwardhan PhD,a Joseph R. Kaczvinsky PhD,b James F. Joa BA,b and Douglas Canfield BSa

aCanfield Scientific Inc, Fairfield, NJ
bThe Procter & Gamble Company, Cincinnati, OH

Abstract

Differentiating inflammatory and non-inflammatory acne lesions and obtaining lesion counts is a pivotal part of acne evaluation. Manual lesion counting has reliably demonstrated the clinical efficacy of anti-acne products for decades. However, maintaining assessment consistency within and across acne trials is an important consideration, since lesion counting can be subject to the judgment of individual evaluators and the technique has not been rigorously standardized.

VISIA-CR is a multi-spectral, multi-modal facial imaging system. It captures fluorescence images of horn and porphyrin, absorption images of hemoglobin and melanin, and broad-spectrum polarized and non-polarized images characterizing skin texture and topography. These images are analyzed for auto-classification of inflammatory and non-inflammatory acne lesions, measurement of erythema, and assessment of post-acne pigmentation changes. In this work, the accuracy of this acne lesion auto-classification technique is demonstrated by comparing the auto-detected lesion counts with those counted by expert physicians. The accuracy is further substantiated by comparing and confirming the facial location and type of every auto-identified acne lesion with those identified by the physicians. Our results indicate a strong correlation between manual and auto-classified lesion counts (correlation coefficient >0.9) for both inflammatory and non-inflammatory lesions.
This technology has the potential to eliminate the tedium of manual lesion counting and to provide an accurate, reproducible, and clinically relevant evaluation of acne lesions. As an aid to physicians, it will allow the development of a standardized technique for evaluating acne in clinical research, as well as accurate selection of treatment options for patients according to the severity of a specific lesion type in clinical practice.

J Drugs Dermatol. 2013;12(7):746-756.


INTRODUCTION

In a clinical research environment, acne severity is assessed either by manually counting individual types of lesions or by comparing the subject to a grading scale and providing an overall global assessment. Both methods have reliably demonstrated the clinical efficacy of anti-acne products for decades and are recommended by the US Food and Drug Administration.1 However, maintaining assessment consistency within and across acne trials is an important consideration, since counting and grading approaches can be subject to the judgment of individual evaluators and have not been rigorously standardized. For example, the first reported acne grading system dates back to 1957,2 and a review published in 2002 found that more than 25 types of grading systems and more than 19 lesion counting techniques have been in use since then.3 Although inter-evaluator and intra-evaluator variability is the main reason for non-standardization, limited research has been done to measure this variability. Lucky et al found that when performing lesion counts, agreement among evaluators decreased with an increasing number of acne lesions, and overall agreement between evaluators was low.4 For the sake of simplicity, many clinicians use a grading system to loosely categorize acne as mild, moderate, or severe; however, the reproducibility of this approach has also been shown to be low.5 Barrett et al in 2009 surveyed published articles proposing the use of a novel means of assessing severity in clinical trials where the outcome measures were investigator-assessed. Of the nine clinical trials compared, only two offered a statistical measure of inter-evaluator reliability, while none provided evidence of intra-evaluator reliability, responsiveness, or validity.6

Tan et al assessed the reliability of acne lesion counts and global assessments (grading systems) with a group of 11 dermatologists, five of whom had no formal training in acne grading or lesion counting. The dermatologists demonstrated generally good reliability in lesion counting (correlation coefficient >0.75), but reliability was much lower for global assessment. They also claimed that intra-evaluator reliability was much higher than inter-evaluator reliability; however, no measures quantifying intra-evaluator reliability were reported. Additionally, they noted that formal training of the evaluator improved reliability in both lesion counting and global assessments.7 According to Allen et al, two 12-week-long, double-blind, placebo-controlled studies of acne treatments were performed using three judges and a total of 331 male college students as subjects. Global severity grades were assigned, papules, pustules, and comedones were counted every two weeks, and the data were evaluated using Pearson’s
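The reliability figures cited above (correlation coefficients >0.9 between automated and manual counts, >0.75 between evaluators) are Pearson's correlation coefficients. As an illustration of how such agreement is computed, the sketch below uses hypothetical lesion counts; the numbers are invented for demonstration and are not data from any cited study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length count series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and variance terms about each series' mean.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical per-subject lesion counts: a trained evaluator's manual counts
# versus counts from an automated classifier (illustrative values only).
manual_counts = [12, 25, 7, 40, 18, 33, 5, 22, 15, 28]
auto_counts = [11, 27, 8, 38, 17, 35, 6, 20, 16, 30]

print(f"Pearson r = {pearson_r(manual_counts, auto_counts):.3f}")
```

In this toy case the two series track each other closely, so r exceeds the 0.9 threshold that the abstract reports for agreement between automated and manual counts.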
