[en] In a computer-aided diagnostic (CAD) system for skin lesion segmentation, variation in the shape and size of skin lesions makes the segmentation task challenging. Lesion segmentation is an initial step in CAD schemes because accurate delineation leads to low error rates in quantifying the structure, boundary, and scale of the lesion. Under subjective clinical assessment, the segmentation results produced by current state-of-the-art deep learning techniques do not yet reach the inter-observer agreement of expert dermatologists. This study proposes a novel, fully automated deep learning approach to skin lesion segmentation that includes dedicated pre- and post-processing stages. We use three deep learning models: UNet, deep residual U-Net (ResUNet), and improved ResUNet (ResUNet++). The preprocessing stage combines morphological filters with an inpainting algorithm to remove hair structures from the dermoscopic images. The postprocessing stage applies test-time augmentation (TTA) and a conditional random field (CRF) to improve segmentation accuracy. The proposed method was trained and evaluated on the ISIC-2016 and ISIC-2017 skin lesion datasets. When trained on each dataset individually, it achieved an average Jaccard index of 85.96% on ISIC-2016 and 80.05% on ISIC-2017. When trained on the combined dataset (ISIC-2016 and ISIC-2017), it achieved an average Jaccard index of 80.73% on the ISIC-2017 test set and 90.02% on the ISIC-2016 test set. Owing to its scalability and robustness, the proposed methodological framework can be used to design a fully automated computer-aided skin lesion diagnostic system.
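A minimal sketch of three of the stages described in the abstract, assuming OpenCV and NumPy: black-hat-based hair removal with inpainting (preprocessing), flip-based test-time augmentation, and the Jaccard index used for evaluation. Function names, kernel sizes, thresholds, and the model_predict callback are illustrative assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np


def remove_hair(image_bgr, kernel_size=17, inpaint_radius=3):
    """Suppress hair artifacts with a morphological black-hat filter and inpainting."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)       # dark, thin structures
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)  # binary hair mask
    return cv2.inpaint(image_bgr, hair_mask, inpaint_radius, cv2.INPAINT_TELEA)


def predict_with_tta(model_predict, image):
    """Average model probabilities over horizontal and vertical flips (test-time augmentation).

    model_predict is assumed to map an H x W x 3 image to an H x W probability map.
    """
    probs = [
        model_predict(image),
        np.fliplr(model_predict(np.fliplr(image))),  # flip prediction back to original frame
        np.flipud(model_predict(np.flipud(image))),
    ]
    return np.mean(probs, axis=0)


def jaccard_index(pred_mask, true_mask, eps=1e-7):
    """Intersection over union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (intersection + eps) / (union + eps)
```

In the paper's pipeline, a CRF refinement of the TTA-averaged probabilities would follow predict_with_tta before thresholding; that step is omitted here because it depends on an external dense-CRF implementation.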
Disciplines :
Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author :
Ashraf, Hassan; Université de Liège - ULiège > Département d'aérospatiale et mécanique > Laboratoire des Systèmes Multicorps et Mécatroniques ; Department of Biomedical Engineering and Sciences, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
Waris, Asim; Department of Biomedical Engineering and Sciences, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
Ghafoor, Muhammad Fazeel; Department of Biomedical Engineering and Sciences, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
Gilani, Syed Omer; Department of Biomedical Engineering and Sciences, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
Niazi, Imran Khan; Department of Health Science and Technology, Aalborg University, Aalborg, Denmark ; Center of Chiropractic Research, New Zealand College of Chiropractic, Auckland, New Zealand ; Faculty of Health and Environmental Sciences, Health and Rehabilitation Research Institute, AUT University, Auckland, New Zealand
Language :
English
Title :
Melanoma segmentation using deep learning with test-time augmentations and conditional random fields