Research Article


Not all fog removers are equal: Unmasking the impact of dehazing on object detection

Year 2025, Volume: 31, Issue: 3, 373-383, 30.06.2025

Abstract

Dehazing is an important branch of computational photography that aims to enhance image clarity by removing atmospheric haze and scattering effects, which is crucial for improving visibility in applications such as unmanned aerial vehicles, traffic control, and autonomous driving. However, most studies in this field lack an assessment of the developed algorithm in the context of object detection (OD). In this study, we aim to quantify and evaluate the contribution of several state-of-the-art dehazing methods (C2PNet, D4, Dehamer, gUNet) to OD using YOLOv8, known for its superior performance. For this purpose, we utilized the test portion of the VisDrone-DET dataset, comprising 548 haze-free aerial images, as the data source. For a more comprehensive assessment, we evaluated these approaches to object detection under different haze levels and resolutions. Since it is inherently impossible to obtain hazy and clean images of the same scene simultaneously, we (1) generated synthetically hazed images with varying haze densities and (2) resized them to 640p and 1280p resolutions. Next, we used the YOLOv8 and YOLOv10 models to evaluate OD performance on (i) the haze-free ground truth, (ii) three differently hazed versions, and (iii) their dehazed counterparts through several metrics. Our experiments showed that the gUNet approach, which incorporates a variant of the U-Net model inspired by GCANet and GridDehazeNet, outperformed the others in terms of OD performance. Surprisingly, Dehamer negatively affected OD performance due to the artifacts it produced. This assessment not only provides valuable insights into the effectiveness of these methods but also sheds light on how to benefit from them for object detection under hazy atmospheric conditions.
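The synthetic hazing step described above is typically done with the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)), where t(x) = exp(−β·d(x)) is the transmission derived from scene depth, as in the haze-synthesis tool cited in the references [4, 21]. The following is a minimal illustrative sketch of that model, not the exact pipeline used in the paper; the function name, the β values, and the atmospheric light A = 0.92 are assumptions chosen for demonstration.

```python
import numpy as np

def add_haze(image, depth, beta=1.0, A=0.92):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    where t = exp(-beta * d) is the per-pixel transmission.

    image: clean RGB image as floats in [0, 1], shape (H, W, 3)
    depth: per-pixel scene depth normalized to [0, 1], shape (H, W)
    beta:  scattering coefficient -- larger beta means denser haze
    A:     global atmospheric light (brightness of the haze veil)
    """
    t = np.exp(-beta * depth)[..., np.newaxis]  # transmission map, (H, W, 1)
    return image * t + A * (1.0 - t)

# Hazing the same scene at several densities, analogous to the
# "varying haze densities" setup in the abstract (values illustrative):
rng = np.random.default_rng(0)
clean = rng.random((4, 4, 3))               # stand-in for a clean aerial image
depth = np.linspace(0.0, 1.0, 16).reshape(4, 4)
light, medium, heavy = (add_haze(clean, depth, beta=b) for b in (0.5, 1.0, 2.0))
```

As β grows, transmission decays and every pixel is pulled toward the atmospheric light A, which is exactly why dense haze washes out the contrast that object detectors rely on.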

References

  • [1] Yang Y, Wang C, Liu R, Zhang L, Guo X, Tao D. “Self-augmented unpaired image dehazing via density and depth decomposition”. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022.
  • [2] Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z. “Benchmarking single-image dehazing and beyond”. IEEE Transactions on Image Processing, 28(1), 492-505, 2019.
  • [3] Chahal KS, Dey K. “A survey of modern object detection literature using deep learning”. arXiv, 2018. https://arxiv.org/pdf/1808.07256
  • [4] Medium. “Synthesize Hazy/Foggy Images using Monodepth and Atmospheric Scattering Model”. https://towardsdatascience.com/synthesize-hazy-foggy-image-using-monodepth-and-atmospheric-scattering-model-9850c721b74e (08.08.2024).
  • [5] Tran LA, Do TD, Park DC, Le MH. “Robustness enhancement of object detection in advanced driver assistance systems (ADAS)”. arXiv, 2021. https://arxiv.org/pdf/2105.01580
  • [6] Song Y, He Z, Qian H, Du X. “Vision transformers for single image dehazing”. IEEE Transactions on Image Processing, 32, 1927-1941, 2023.
  • [7] Song Y, Zhou Y, Qian H, Du X. “Rethinking performance gains in image dehazing networks”. arXiv 2022. https://arxiv.org/pdf/2209.11448
  • [8] Thakur N, Nagrath P, Jain R, Saini D, Sharma N, Hemanth J. “Object detection in deep surveillance”. Research Square, 2021. https://doi.org/10.21203/rs.3.rs-901583/v1
  • [9] Ali S, Abdullah Athar A, Ali M, Hussain A, Kim HC. "Computer vision-based military tank recognition using object detection technique: an application of the YOLO framework". 1st International Conference on Advanced Innovations in Smart Cities, Jeddah, Saudi Arabia, 23-25 January 2023.
  • [10] Rahadianti L, Azizah A Y, Deborah H. "Evaluation of the quality indicators in dehazed images: color, contrast, naturalness, and visual pleasingness". Heliyon, 7(9), 1-12, 2021.
  • [11] Wu H, Qu Y, Lin S, Zhou JJ, Qiao R, Zhang Z, Xie Y, Ma L. “Contrastive learning for compact single image dehazing”. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, 19–25 June 2021.
  • [12] Yang Y, Wang C, Liu R, Zhang L, Guo X, Tao D. “Self-augmented unpaired image dehazing via density and depth decomposition”. IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022.
  • [13] Guo C, Yan Q, Anwar S, Cong R, Ren W, Li C. “Image dehazing transformer with transmission-aware 3D position embedding”. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022.
  • [14] Zheng Y, Zhan J, He S, Dong J, Du Y. “Curricular contrastive regularization for physics-aware single image dehazing”. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, Canada, 18-22 June 2023.
  • [15] He K, Sun J, Tang X. “Single image haze removal using dark channel prior”. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20-25 June 2009.
  • [16] Berman D, Treibitz T, Avidan S. “Non-local image dehazing”. 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June-1 July 2016.
  • [17] Li B, Peng X, Wang Z, Xu J, Feng D. “AOD-Net: all-in-one dehazing network”. IEEE International Conference on Computer Vision, Venice, Italy, 22-29 October 2017.
  • [18] Ancuti CO, Ancuti C. “Single image dehazing by multi-scale fusion”. IEEE Transactions on Image Processing, 22(8), 3271-3282, 2013.
  • [19] Ultralytics. “VisDrone”. https://docs.ultralytics.com/tr/datasets/detect/visdrone/#citations-and-acknowledgments (08.02.2024).
  • [20] GitHub. “VisDrone/VisDrone-Dataset”. https://github.com/VisDrone/VisDrone-Dataset (08.07.2024).
  • [21] GitHub. “tranleanh/haze-synthesis”. https://github.com/tranleanh/haze-synthesis (09.05.2024).
  • [22] Wang A, Chen H, Liu L, Chen K, Lin Z, Han J, Ding G. “YOLOv10: real-time end-to-end object detection”. arXiv, 2024. https://arxiv.org/pdf/2405.14458
  • [23] Hussain M. “YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and industrial defect detection”. Machines, 11(7), 677, 2023.
  • [24] Roboflow Blog. “Your Comprehensive Guide to the YOLO Family of Models”. https://blog.roboflow.com/guide-to-yolo-models/ (08.02.2024).
  • [25] Ghosh A. “YOLOv10: The Dual-Head OG of YOLO Series”. https://learnopencv.com/yolov10/ (01.07.2024).
  • [26] Marium A, Srinivasan D G, Shetty S A. “Literature survey on object detection using YOLO”. International Research Journal of Engineering and Technology, 7(6), 3082-3088, 2020.
  • [27] Jiang P, Ergu D, Liu F, Cai Y, Ma B. “A review of YOLO algorithm developments”. Procedia Computer Science, 199, 1066-1073, 2022.
  • [28] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. “SSD: single shot multibox detector”. Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016.
  • [29] Deng C, Wang M, Liu L, Liu Y, Jiang Y. “Extended feature pyramid network for small object detection”. IEEE Transactions on Multimedia, 24, 1968-1979, 2022.
  • [30] Hnewa M, Radha H. “Multiscale domain adaptive YOLO for cross-domain object detection”. 2021 IEEE International Conference on Image Processing, Anchorage, Alaska, USA, 19-22 September 2021.
  • [31] Sirisha U, Praveen SP, Srinivasu PN, Barsocchi P, Bhoi AK. “Statistical analysis of design aspects of various YOLO-based deep learning models for object detection”. International Journal of Computational Intelligence Systems, 16(126), 1-29, 2023.
  • [32] GitHub. “Li-Chongyi/Dehamer”. https://github.com/Li-Chongyi/Dehamer (08.02.2024).
  • [33] Wu B, Xu C, Dai X, Wan A, Zhang P, Yan Z, Tomizuka M, Gonzalez J, Keutzer K, Vajda P. “Visual transformers: token-based image representation and processing for computer vision”. arXiv, 2020. https://arxiv.org/pdf/2006.03677
  • [34] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I. “Attention is all you need”. arXiv, 2017. https://arxiv.org/pdf/1706.03762
  • [35] Dosovitskiy A. “An image is worth 16x16 words: transformers for image recognition at scale”. arXiv, 2020. https://arxiv.org/pdf/2010.11929
  • [36] GitHub. “IDKiro/gUNet”. https://github.com/IDKiro/gUNet (08.02.2024).
  • [37] Shah T. “Measuring object detection models - mAP - what is mean average precision?”. https://tarangshah.com/blog/2018-01-27/what-is-map-understanding-the-statistic-of-choice-for-comparing-object-detection-models/ (08.02.2024).
  • [38] LearnOpenCV. “Mean average precision (mAP) in object detection”. https://learnopencv.com/mean-average-precision-map-object-detection-model-evaluation-metric/ (08.02.2024).
  • [39] Altun M, Türker M. “Vehicle detection in urban areas from very high resolution UAV color images”. Pamukkale University Journal of Engineering Sciences, 26(2), 371-384, 2020.
There are 39 references in total.

Details

Primary Language English
Subjects Quantum Engineering Systems (Incl. Computer and Communications)
Section Article
Authors

Ahmet Selman Bozkır

Nurçiçek Özenç

Publication Date 30 June 2025
Submission Date 24 February 2024
Acceptance Date 20 August 2024
Published in Issue Year 2025, Volume: 31, Issue: 3

How to Cite

APA Bozkır, A. S., & Özenç, N. (2025). Not all fog removers are equal: Unmasking the impact of dehazing on object detection. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, 31(3), 373-383.
AMA Bozkır AS, Özenç N. Not all fog removers are equal: Unmasking the impact of dehazing on object detection. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. June 2025;31(3):373-383.
Chicago Bozkır, Ahmet Selman, and Nurçiçek Özenç. “Not All Fog Removers Are Equal: Unmasking the Impact of Dehazing on Object Detection”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 31, no. 3 (June 2025): 373-83.
EndNote Bozkır AS, Özenç N (01 June 2025) Not all fog removers are equal: Unmasking the impact of dehazing on object detection. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 31 3 373–383.
IEEE A. S. Bozkır and N. Özenç, “Not all fog removers are equal: Unmasking the impact of dehazing on object detection”, Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 31, no. 3, pp. 373–383, 2025.
ISNAD Bozkır, Ahmet Selman - Özenç, Nurçiçek. “Not All Fog Removers Are Equal: Unmasking the Impact of Dehazing on Object Detection”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 31/3 (June 2025), 373-383.
JAMA Bozkır AS, Özenç N. Not all fog removers are equal: Unmasking the impact of dehazing on object detection. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2025;31:373–383.
MLA Bozkır, Ahmet Selman and Nurçiçek Özenç. “Not All Fog Removers Are Equal: Unmasking the Impact of Dehazing on Object Detection”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 31, no. 3, 2025, pp. 373-8.
Vancouver Bozkır AS, Özenç N. Not all fog removers are equal: Unmasking the impact of dehazing on object detection. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2025;31(3):373-8.

Creative Commons License
This journal is licensed under a Creative Commons Attribution 4.0 International License.