Research Article

Artificial Intelligence as a Partner in Ankylosing Spondylitis Care: Evaluating ChatGPT’s Role and Performance

Year 2025, Volume: 7, Issue: 2, 339-345, 09.05.2025
https://doi.org/10.37990/medr.1622314

Abstract

Aim: Artificial intelligence may have significant potential to assist clinicians in decision-making and diagnosis, especially in specialties that depend on up-to-date guidelines, such as rheumatology. This study aims to evaluate the effectiveness of ChatGPT in providing clinicians with evidence-based information about ankylosing spondylitis (AS).
Material and Method: Frequently asked questions (FAQs) about AS were developed by reviewing commonly accessed patient-oriented websites, social media platforms, and official hospital pages. Questions were designed based on scientific guidelines, particularly the American College of Rheumatology (ACR) and Assessment of SpondyloArthritis international Society (ASAS)-European League Against Rheumatism (EULAR) axial spondyloarthritis guidelines. ChatGPT's responses were evaluated on a 1-to-4 scale, with 1 indicating the most accurate response and 4 the least. Each question was posed twice to assess reproducibility, with consistency defined by identical scores across both attempts.
Results: ChatGPT demonstrated an overall accuracy of 81.9% in its responses to 72 FAQs. The highest accuracy (91.7%) was observed in responses related to the prevention of AS. Of the 36 questions based on ACR and ASAS-EULAR guidelines, ChatGPT provided accurate answers for 22 (61.1%), with three responses receiving the lowest grade (4). Reproducibility of ChatGPT's responses was 88.8% across all FAQs and 83.3% for guideline-specific questions.
Conclusion: This study highlights the potential of ChatGPT as a supportive tool for patient education and clinician reference, particularly for general FAQs. However, accuracy for questions derived from ACR and ASAS-EULAR guidelines was lower (61.1%), emphasizing the need for clinician oversight.
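The accuracy and reproducibility percentages reported above follow from simple counting over paired grades. As a rough illustration only (the study's exact accuracy criterion is not stated here; the assumption that grades 1-2 count as "accurate" and the function name `evaluate` are hypothetical), the computation can be sketched as:

```python
def evaluate(scores):
    """Compute (accuracy %, reproducibility %) from graded responses.

    scores: list of (first_attempt, second_attempt) grade pairs,
            each grade on the 1 (best) to 4 (worst) scale.
    Assumption: a response counts as accurate if its first-attempt
    grade is 1 or 2; it is reproducible if both attempts match.
    """
    n = len(scores)
    accurate = sum(1 for first, _ in scores if first <= 2)
    reproducible = sum(1 for first, second in scores if first == second)
    return round(100 * accurate / n, 1), round(100 * reproducible / n, 1)
```

For example, grading 72 FAQs twice each and passing the paired grades to a function like this would yield the overall accuracy and reproducibility figures in one pass.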

Ethics Statement

Ethics committee approval was not required, as no patient data were used in the present study.


Details

Primary Language: English
Subjects: Rheumatology and Arthritis, Physical Medicine and Rehabilitation
Section: Original Articles
Authors

Ömer Faruk Bucak (ORCID: 0000-0003-3995-8120)

Cigdem Cinar (ORCID: 0000-0001-9159-6345)

Publication Date: May 9, 2025
Submission Date: January 17, 2025
Acceptance Date: February 26, 2025
Published Issue: Year 2025, Volume: 7, Issue: 2

How to Cite

AMA: Bucak ÖF, Cinar C. Artificial Intelligence as a Partner in Ankylosing Spondylitis Care: Evaluating ChatGPT’s Role and Performance. Med Records. May 2025;7(2):339-345. doi:10.37990/medr.1622314


Chief Editors

Assoc. Prof. Zülal Öner
İzmir Bakırçay University, Department of Anatomy, İzmir, Türkiye

Assoc. Prof. Deniz Şenol
Düzce University, Department of Anatomy, Düzce, Türkiye

Editors
Assoc. Prof. Serkan Öner
İzmir Bakırçay University, Department of Radiology, İzmir, Türkiye
 
E-mail: medrecsjournal@gmail.com

Publisher:
Medical Records Association (Tıbbi Kayıtlar Derneği)
Address: Orhangazi Neighborhood, 440th Street,
Green Life Complex, Block B, Floor 3, No. 69
Düzce, Türkiye
Web: www.tibbikayitlar.org.tr

Publication Support:
Effect Publishing & Agency
Phone: +90 (553) 610 67 80
E-mail: info@effectpublishing.com
Address: Şehit Kubilay Neighborhood, 1690 Street,
No:13/22, Ankara, Türkiye
Web: www.effectpublishing.com