A new generative approach for optical coherence tomography data scarcity: unpaired mutual conversion between scanning presets

Publication date
2023
Journal title
Medical and Biological Engineering and Computing
Content type
Article
MeSH
Tomography, Optical Coherence | Diagnosis, Computer-Assisted | Image Processing, Computer-Assisted
Abstract
In optical coherence tomography (OCT), there is a trade-off between scanning time and image quality, leading to a scarcity of high-quality data. OCT platforms provide different scanning presets that produce visually distinct images, limiting their compatibility. In this work, a fully automatic methodology for the unpaired visual conversion between the two most prevalent scanning presets is proposed. Using contrastive unpaired translation generative adversarial architectures, low-quality images acquired with the faster Macular Cube preset can be converted to the visual style of high-visibility Seven Lines scans and vice versa. This modifies the visual appearance of the OCT images generated by each preset while preserving the natural tissue structure. The quality of original and synthetic images was compared using the no-reference BRISQUE metric, and the synthetic images achieved scores very similar to those of original images of their target preset. The generative models were further validated in automatic and expert separability tests, which showed that they replicate the genuine look of the original images. This methodology has the potential to create multi-preset datasets with which to train robust computer-aided diagnosis systems, exposing them to the visual features of the different presets they may encounter in real clinical scenarios without requiring additional data acquisition.
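The comparison described above can be illustrated with a minimal sketch (not the authors' code): score an original Macular Cube B-scan and its synthetic Seven Lines conversion with a no-reference quality metric. It assumes a trained unpaired-translation generator exported as a TorchScript module (the file name g_cube2seven.pt is hypothetical, as is the input image path) and uses the piq package's BRISQUE implementation as a stand-in for the paper's quality evaluation.

```python
# Minimal sketch: compare BRISQUE scores of an original OCT B-scan and its
# preset-converted synthetic counterpart. File names are hypothetical.
import torch
import piq
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),  # scales pixel values to [0, 1], shape (1, H, W)
])

def load_bscan(path: str) -> torch.Tensor:
    """Load an OCT B-scan as a (1, 1, H, W) tensor in [0, 1]."""
    return to_tensor(Image.open(path)).unsqueeze(0)

# Hypothetical trained Macular Cube -> Seven Lines generator (TorchScript export).
generator = torch.jit.load("g_cube2seven.pt").eval()

original = load_bscan("macular_cube_scan.png")
with torch.no_grad():
    synthetic = generator(original).clamp(0.0, 1.0)

# Lower BRISQUE indicates better perceived quality; the paper reports that
# synthetic images score close to genuine images of the target preset.
score_original = piq.brisque(original, data_range=1.0)
score_synthetic = piq.brisque(synthetic, data_range=1.0)
print(f"BRISQUE original (Macular Cube): {score_original.item():.2f}")
print(f"BRISQUE synthetic (Seven Lines style): {score_synthetic.item():.2f}")
```

In the study this kind of score comparison is complemented by automatic and expert separability tests; the sketch only covers the no-reference quality check.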
Unless otherwise noted, the item's license is described as Attribution 4.0 International (CC BY 4.0).
