Research Article

Proposed Mixed Modified Inception and Self Attention U-Net for Medical Image Segmentation: MMIAU-Net

Year 2025, Volume: 18, Issue: 1, 19-30, 26.06.2025
https://doi.org/10.54525/bbmd.1537055

Abstract

Deep learning is an artificial intelligence approach that has achieved successful results in many fields, most notably medical image segmentation. Medical images are vital to human health, so it is essential to reach definitive conclusions through precise analysis. Owing to their high computational complexity, deep learning methods can capture even the smallest disease details that might otherwise be overlooked. The U-Net model is the most popular architecture in this field thanks to its strong performance. However, because segmentation accuracy differs from one dataset to another, there is a continual need to improve it. To enable a comprehensive comparison, this study uses three independent, publicly available medical datasets (PH2, BOWL 2018, and CVC-ClinicDB) and trains U-Net, U-Net++, Attention U-Net, Residual U-Net, Residual Attention U-Net, TransUNet, and Swin-Unet, together with the Self Attention U-Net and MMIAU-Net models modified for this study. The analyses show that the proposed MMIAU-Net model achieves higher performance while using fewer parameters.
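
To make the architectural ingredients named in the abstract more concrete, the short PyTorch sketch below illustrates the two kinds of building blocks the model name refers to: an Inception-style multi-branch convolution block and a spatial self-attention block, as they commonly appear in U-Net variants. This is only an illustrative sketch under assumptions of our own; the class names (InceptionBlock, SelfAttention2d), channel sizes, branch layout, and attention placement are hypothetical and do not reproduce the authors' MMIAU-Net implementation.

# Illustrative sketch only: generic Inception-style and self-attention blocks
# of the kind combined in U-Net variants. Names and sizes are assumptions,
# not the MMIAU-Net implementation described in the paper.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, 5x5 convolution branches plus a pooled 1x1 branch,
    concatenated on the channel axis (out_ch must be divisible by 4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

class SelfAttention2d(nn.Module):
    """Simple non-local style self-attention over spatial positions,
    added back to the input through a learnable residual weight."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)                    # (B, HW, C//8)
        k = self.k(x).flatten(2)                                    # (B, C//8, HW)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)   # (B, HW, HW)
        v = self.v(x).flatten(2).transpose(1, 2)                    # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.gamma * out + x

if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    feat = InceptionBlock(32, 64)(x)         # (1, 64, 64, 64)
    print(SelfAttention2d(64)(feat).shape)   # torch.Size([1, 64, 64, 64])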

References

  • Punn, N. S., Agarwal, S. Modality specific U‐Net variants for biomedical image segmentation: a survey, Artificial Intelligence Review, 2022.
  • Elnakib, A., Gimel’farb, G., Suri, J. S., El-Baz, A. Medical Image Segmentation: A Brief Survey, 2011.
  • Eker, A. G., Duru, N. Medikal Görüntü İşlemede Derin Öğrenme Uygulamaları, Acta Infologıca, vol. 5, no. 2, pp. 459-474, 2021.
  • Ronneberger, O., Fischer, P., Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation, Medical Image Computing and Computer-Assisted Intervention(MICCAI), 2015.
  • He, K., Zhang, X., Ren, S., Sun, J. Deep Residual Learning for Image Recognition, IEEE Conference on Computer Vision and Pattern Recognition(CVPR), 2016, pp. 770–778.
  • Liu, S., Zhuang, Z., Zheng, Y., Kolmanič, S. A VAN-Based Multi-Scale Cross-Attention Mechanism for Skin Lesion Segmentation Network, IEEE Access, 2023.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., Polosukhin, I. Attention Is All You Need. NIPS, 2017.
  • Dosovitskiy A., Beyer L. , Kolesnikov A. , Weissenborn D. , Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S., Uszkoreit J., Houlsby N. An image is worth 16×16 words: Transformers for image recognition at scale, arXiv:2010.11929, 2021.
  • Chitty-Venkata, K. T., Emani, M., Vishwanath, V., Somani, A. K. Neural Architecture Search for Transformers: A Survey, IEEE Access, 2022.
  • Wu, H., Chen, S., Chen, G., Wang, W., Lei, B., and Wen, Z. FAT-Net: Feature Adaptive Transformers for Automated Skin Lesion Segmentation, Medical Image Analysis, 2022.
  • Siddique, N., Paheding, S., Elkin, C. P., Devabhaktuni, V. U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications, IEEE Access, vol. 9, pp. 82031-82057, 2021.
  • Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., Erhan D., Vanhoucke V., Rabinovich A. Going deeper with convolutions. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2015, pp. 1–9.
  • Mendonça, T., Ferreira, P. M., Marques, J. S., Marçal, A. R. S., Rozeira, J. PH² - a dermoscopic image database for research and benchmarking. 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 2013.
  • Caicedo, J. C., Goodman, A., Karhohs, K. W., Cimini, B. A., Ackerman, J., Haghighi, M., Heng, C., Becker, T., Doan, M., McQuin, C., Rohban, M., Singh, S., and Carpenter, A. E. Nucleus segmentation across imaging experiments: the 2018 data science bowl. Nature Methods, 16, 2019, pp. 1247–1253.
  • Yadavendra, and Chand, S. Semantic segmentation of human cell nucleus using deep U-Net and other versions of U-Net models. School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India, 2022.
  • Bernal, J., Sánchez, F. J., Fernández-Esparrach, G., Gil, D., Rodríguez, C., Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Computerized Medical Imaging and Graphics, 2015.
  • Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation, IEEE Transactions on Medical Imaging, 2019.
  • Oktay, O., Schlemper, J., Le Folgoc, L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N. Y., Kainz, B., Glocker, B., Rueckert, D., Attention U-Net: Learning Where to Look for the Pancreas, 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), 2018.
  • Hatipoğlu, N., Bilgin, G. Histopatolojik Görüntülerin U-Net Tabanlı Modeller Kullanılarak Bölütlenmesi, Medical Technologies Congress (TIPTEKNO), 2021.
  • Sariturk, B., Seker, D. Z. A Residual-Inception U-Net (RIU-Net) Approach and Comparisons with U-Shaped CNN and Transformer Models for Building Segmentation from High-Resolution Satellite Images, Sensors, 2022.
  • Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A. L., Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. 2021.
  • Tragakis, A., Kaul, C., Murray-Smith, R., Husmeier, D. The Fully Convolutional Transformer for Medical Image Segmentation. IEEE/CVF Winter Conference on Applications of Computer Vision, 2023.
  • Yao, W., Bai, J., Liao, W., Chen, Y., Liu, M., Xie, Y. From CNN to Transformer: A Review of Medical Image Segmentation Models, Journal of Imaging Informatics in Medicine, 2023.
  • Faria, F. T. J., Moin, M. B., Debnath, P., Fahim, A. I., Shah, F. M. Explainable Convolutional Neural Networks for Retinal Fundus Classification and Cutting-Edge Segmentation Models for Retinal Blood Vessels from Fundus Images, 2024.
  • Cao, H., Wang, Y., Chen, J., Jiang, D., Zhang, X., Tian, Q., Wang, M. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. 2021.
  • Armand, T. P. T., Bhattacharjee, S., Kim, H. C., Choi, H. K. Transformers Effectiveness in Medical Image Segmentation: A Comparative Analysis of UNet-based Architectures, ICAIIC, 2024.
  • Nurhopipah, A., Murdiyanto, A. W., Astuti, T. A Pair of Inception Blocks in U-Net Architecture for Lung Segmentation, IEEE 5th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE), 2021.
  • Tong X., Wei J., Sun B., Su S., Zuo Z., Wu P. ASCU-NET: Attention Gate, Spatial and Channel Attention U-Net for Skin Lesion Segmentation. Diagnostics, 2021, pp. 11-29.
  • Wu H., Chen S., Chen G., Wang W., Lei B., Wen Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Medical Image Analysis, vol. 76, 2022.
  • Zhang N., Yu L., Zhang D., Wu W., Tian S., Kang X. APT-Net: Adaptive encoding and parallel decoding transformer for medical image segmentation. Computers in Biology and Medicine, vol. 151, 2022.
  • Hu K., Lu J., Lee D., Xiong D., Chen Z. AS-Net: Attention Synergy Network for skin lesion segmentation. Expert Systems with Applications, 2022, pp. 1238-1259.
  • Hao S., Wu H., Jiang Y., Ji Z., Zhao L., Liu L., Ganchev I. GSCEU-Net: An end-to-end lightweight skin lesion segmentation model with feature fusion based on U-Net enhancements. Information, 2023, pp. 14-23.
  • Olimov B., Sanjar K., Din S., Ahmad A., Paul A., Kim J. FU-Net: fast biomedical image segmentation model based on bottleneck convolution layers. Multimedia Systems, 2021, pp. 1-14.
  • Roy K., Saha S., Banik D., Bhattacharjee D. Nuclei-Net: A multi-stage fusion model for nuclei segmentation in microscopy images. Innovations in Systems and Software Engineering, 2023, pp. 1-8.
  • Song H., Wang Y., Zeng S., Guo X., Li Z. OAU-net: Outlined Attention U-net for biomedical image segmentation. Biomedical Signal Processing and Control, vol. 79, 2023.
  • Ibtehaz N., Rahman M. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 2020, pp. 74-87.
  • Chenarlogh A.V., Shabanzadeh A., Oghli G.M., Sirjani N., Moghadam F.S., Akhavan A., Tarzamni K.M. Clinical target segmentation using a novel deep neural network: double attention Res-U-Net. Scientific Reports, vol. 12, 2022.
  • Cai S., Xiao Y., Wang Y. Two-dimensional medical image segmentation based on U-shaped structure. International Journal of Imaging Systems and Technology, vol. 34, 2024.
  • Jiang J., Wang M., Tian H., Cheng L., Liu Y. LV-UNet: A Lightweight and Vanilla Model for Medical Image Segmentation. arXiv preprint, 2024.

Details

Primary Language: Turkish
Subjects: Information Systems (Other)
Section: Research Articles
Authors

Müberra Nur Akçaman

Tolga Ensari 0000-0003-0896-3058

Ahmet Sertbaş 0000-0001-8166-1211

Early View Date: June 11, 2025
Publication Date: June 26, 2025
Submission Date: August 22, 2024
Acceptance Date: November 29, 2024
Published in Issue: Year 2025, Volume: 18, Issue: 1

Cite

IEEE M. N. Akçaman, T. Ensari, and A. Sertbaş, “Tıbbi Görüntü Bölütleme için MMIAU-Net Mimarisi Önerisi”, bbmd, vol. 18, no. 1, pp. 19–30, 2025, doi: 10.54525/bbmd.1537055.