[158049]
Title: Deep learning for calcium segmentation in intravascular ultrasound images
Written by: L. Bargsten and K. A. Riedl and T. Wissel and F. J. Brunner and K. Schaefers and M. Grass and S. Blankenberg and M. Seiffert and A. Schlaefer
in: Current Directions in Biomedical Engineering (2021)
Volume: 7. Number: 1
on pages: 96-100
DOI: 10.1515/cdbme-2021-1021
URL: https://doi.org/10.1515/cdbme-2021-1021
Abstract: Knowing the shape of vascular calcifications is crucial for appropriate planning and conduct of percutaneous coronary interventions. The clinical workflow can therefore benefit from automatic segmentation of calcified plaques in intravascular ultrasound (IVUS) images. To solve segmentation problems with convolutional neural networks (CNNs), large datasets are usually required. However, datasets are often rather small in the medical domain. Hence, developing and investigating methods for increasing CNN performance on small datasets can help on the way towards clinically relevant results. We compared two state-of-the-art CNN architectures for segmentation, U-Net and DeepLabV3, and investigated how incorporating auxiliary image data with vessel wall and lumen annotations improves calcium segmentation performance when these data are used either for pre-training or for multi-task training. DeepLabV3 outperforms U-Net by up to 6.3 % in terms of the Dice coefficient and 36.5 % in terms of the average Hausdorff distance. Using auxiliary data improves the segmentation performance in both cases, with the multi-task approach outperforming the pre-training approach. Compared to not using auxiliary data at all, the multi-task approach improves the Dice coefficient by 5.7 % and the average Hausdorff distance by 42.9 %. Automatic segmentation of calcified plaques in IVUS images is a demanding task due to their relatively small size compared to the image dimensions and due to visual ambiguities with other image structures. We showed that this problem can generally be tackled by CNNs. Furthermore, we were able to improve the performance with a multi-task learning approach using auxiliary segmentation data.
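
The Dice coefficient used as the main metric above, and the idea of combining the calcium task with an auxiliary vessel-wall/lumen task, can be illustrated with a minimal Python sketch. This is an illustration under stated assumptions (binary NumPy masks, a simple weighted sum of Dice losses); the function names and the auxiliary weight are hypothetical and do not reproduce the authors' implementation.

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice coefficient between two binary masks of equal shape.
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def multi_task_dice_loss(dice_calcium, dice_auxiliary, aux_weight=0.5):
    # Weighted sum of (1 - Dice) terms for the main calcium task and the
    # auxiliary vessel-wall/lumen task. aux_weight = 0.5 is a hypothetical
    # setting, not the weighting reported in the paper.
    return (1.0 - dice_calcium) + aux_weight * (1.0 - dice_auxiliary)

# Toy example with two 4x4 masks.
pred = np.array([[0, 1, 1, 0]] * 4)
target = np.array([[0, 1, 0, 0]] * 4)
print(dice_coefficient(pred, target))  # ~0.667

In a multi-task setup along these lines, the auxiliary term acts as a regularizer: the shared encoder also learns from the vessel-wall and lumen annotations, which the abstract reports to benefit calcium segmentation more than pre-training on the same data.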