Addressing Challenges in Image Translation for Contrast-Enhanced Mammography Using Generative Adversarial Networks

EasyChair Preprint 15473 • 15 pages • Date: November 26, 2024

Abstract

Medical imaging is a cornerstone of modern healthcare, facilitating early diagnosis and the development of efficient treatment plans. Breast imaging comprises several modalities, including mammography and MRI, each providing unique information. Unfortunately, improvements in diagnostic performance can come with increased patient-related risks. Specifically, contrast-enhanced mammography (CEM) offers better diagnostic performance while exposing women to the risk of adverse reactions to the contrast agents it requires. To reduce these risks, deep learning has emerged as a promising research direction in recent years. In image-to-image translation, a mapping function is learned to transform a given image from a source domain to a target domain. In medical imaging, the most common solutions are based on generative adversarial networks (GANs), such as pix2pix. We found that, when applied to CEM, pix2pix encounters specific challenges stemming from low data quality, insufficient model capacity, and domain-derived requirements, and therefore performs poorly out of the box. In this paper, we highlight these challenges, propose tailored evaluation strategies, and present preliminary results on a novel dataset, showcasing the need for specialized approaches in medical image translation.

Keyphrases: Breast Imaging, Generative Adversarial Models, image-to-image translation
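For readers unfamiliar with pix2pix, the mapping function mentioned in the abstract is typically trained with the standard conditional GAN objective of Isola et al.; the sketch below states that objective in generic notation (G the generator, D the discriminator, x a source-domain image, y its paired target-domain image, z a noise input, and lambda a weighting hyperparameter). This is background on the base model only, not necessarily the exact formulation the paper uses for CEM.

% Standard pix2pix objective (Isola et al., 2017), shown here as background only.
% G maps a source image x (and noise z) to the target domain; D scores (source, target) pairs.
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_1\big]
G^{*} = \arg\min_{G} \max_{D} \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)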