
Skin lesion segmentation of dermoscopy images using U-Net
1 Department of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Abstract
Skin cancer is one of the most threatening cancers, and its reported incidence has risen over the past ten years. Traditional methods of skin lesion segmentation are time-consuming and inefficient, whereas U-Net offers a powerful and accurate approach to automatic segmentation in the medical field. To address this problem, this paper proposes a U-Net-based skin cancer segmentation system that provides results and feedback quickly, accurately, and intelligently. The system comprises two parts: a Skin Image Analysis Module and a Skin Image Segmentation Module. In the Skin Image Analysis Module, the system learns segmentation from the training-set images and verifies the correctness of the learning on a held-out subset of images. In the Skin Image Segmentation Module, the system segments all images in the test-set folder. In our experiments, the system, trained on a GPU with 100 images from the ISIC dataset for 10 epochs, achieves a training accuracy of 0.9085 and a validation accuracy of 0.9536. The system allows users to upload their lesion images to a test folder and obtain reliable segmentation results in a timely manner, thereby improving the survival rate of potential patients.
Keywords
Skin cancer, Skin lesion segmentation, U-Net, Convolutional neural network
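The training and validation accuracies reported in the abstract are pixel-wise accuracies: the fraction of pixels whose predicted label matches the ground-truth mask. The sketch below illustrates this metric with NumPy; it is an illustrative example under that assumption, not the paper's actual evaluation code.

```python
import numpy as np

def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels whose predicted label matches the ground truth."""
    pred_mask = np.asarray(pred_mask, dtype=bool)
    true_mask = np.asarray(true_mask, dtype=bool)
    return float(np.mean(pred_mask == true_mask))

# Toy example: 4x4 predicted vs. ground-truth binary lesion masks
# (1 = lesion pixel, 0 = background); they disagree on exactly one pixel.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
true = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0]])
print(pixel_accuracy(pred, true))  # 15 of 16 pixels match -> 0.9375
```

Note that pixel accuracy can be optimistic when the lesion occupies a small fraction of the image, which is why segmentation work often also reports overlap metrics such as the Dice coefficient.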
Cite this article
Wang,D. (2023). Skin lesion segmentation of dermoscopy images using U-Net. Applied and Computational Engineering,6,840-847.
Data availability
The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 3rd International Conference on Signal Processing and Machine Learning
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).