Research Article

“IT’S MY VOICE, BUT I’M NOT THE ONE SPEAKING”: VOICE CLONING IN THE AGE OF ARTIFICIAL INTELLIGENCE WITHIN THE CONTEXT OF ETHICAL AND LEGAL DEBATES

Year 2025, Volume: 34 Issue: Uygarlığın Dönüşümü - Sosyal Bilimlerin Bakışıyla Yapay Zekâ, 322 - 340, 20.07.2025

Abstract

Voice cloning, defined as the process of copying, processing, and transforming an individual's voice into synthetic speech through artificial intelligence technologies, offers individuals at risk of losing their voice to illness or disability the opportunity to continue using their own voice. It also enables the voices of deceased artists to be heard again and has found widespread use in fields ranging from marketing to entertainment. However, the unauthorized copying and imitation of voices raises serious concerns: copyright infringement affecting artists, a rise in audio deepfakes and fraud incidents, and the proliferation of manipulative content capable of swaying public preferences, particularly during critical periods such as elections. This study traces the transformation of the voice from a unique biometric identifier into synthetic data and highlights the ethical and legal gaps surrounding the technology. Employing the qualitative document analysis technique, it examines articles, national and international legislative texts, and industry reports published between 2018 and 2025 through a descriptive approach. The evaluation, conducted under the categories of “violations of personality rights,” “copyright infringements,” and “moral rights,” as well as “production of fake and manipulative content,” reveals that ethical and legal regulation of voice cloning lags behind technological development. The study concludes that the absence of comprehensive legal regulations and widely accepted ethical principles for a technology so susceptible to unethical manipulation constitutes a significant gap.


“DUYULAN BENİM SESİM AMA KONUŞAN BEN DEĞİLİM”: ETİK VE HUKUKİ TARTIŞMALAR BAĞLAMINDA YAPAY ZEKÂ ÇAĞINDA SES KLONLAMA

Year 2025, Volume: 34 Issue: Uygarlığın Dönüşümü - Sosyal Bilimlerin Bakışıyla Yapay Zekâ, 322 - 340, 20.07.2025

Abstract

Bir kişinin sesinin yapay zekâ teknolojileriyle kopyalanması, işlenmesi ve sentetik sese dönüştürülmesi süreci olarak tanımlanan ses klonlama, hastalık nedeniyle sesini kaybetmek üzere olan kişilere ya da engelli bireylere kendi sesini kullanma imkânı sunmakta, hayatını kaybeden sanatçıların seslerinin yeniden duyulmasını sağlamakta, pazarlamadan eğlence dünyasına kadar pek çok alanda çeşitli kullanım pratikleri ile gündeme gelmektedir. Bununla birlikte sesin kişinin rızası olmadan kopyalanması ve taklit edilmesi, sanatçıların telif haklarının ihlaline, sesli derin sahtecilik örneklerinin çoğalmasına, dolandırıcılık vakalarının artmasına, seçimler gibi kritik dönemlerde kamuoyunun tercihlerini yönlendirebilecek manipülatif içerik üretiminin yaygınlaşmasına neden olmaktadır. Günümüzde kullanımı yaygın hale gelen ses klonlama teknolojisinin ele alındığı bu çalışmada, ilk olarak yapay zekâ öncesinde bireye özgü olan ve kişiliğin temel bir özelliği olarak konumlandırılan sesin biyometrik veriden sentetik sese dönüşüm süreci ele alınmıştır. Bu çalışmada, nitel doküman analizi tekniğiyle 2018-2025 döneminde yayımlanmış makale, ulusal-uluslararası mevzuat metni ve sektör raporları betimleyici bir yaklaşımla incelenmiştir. “Kişilik hakları”, “telif hakları” ve “manevi haklar”a ilişkin ihlaller ile “sahte ve manipülatif içerik üretimi” kategorileri altında yapılan değerlendirmede ses klonlama ile etik ve hukuki düzenlemelerin teknolojinin gerisinde kaldığı görülmüştür. Ahlaki olmayan ses manipülasyonlarına açık olan bu teknolojiye karşı kapsamlı bir yasal düzenlemenin ve üzerinde uzlaşılmış etik ilkelerin bulunmamasının önemli bir boşluk olduğu değerlendirilmiştir.

References

  • About VocaliD. (2025). We are Vocalid, your voice AI company. https://vocalid.ai/about-us/
  • Akhtar, Z., Pendyala, T. L., ve Athmakuri, V. S. (2024). Video and audio deepfake datasets and open ıssues in deepfake technology: being ahead of the curve. Forensic Sciences, 4(3), 289-377. https://doi.org/10.3390/forensicsci4030021
  • Almutairi, Z., ve Elgibreen, H. (2022). A review of modern audio deepfake detection methods: Challenges and future directions. Algorithms, 15(155). https://doi.org/10.3390/a15050155
  • Alpyıldız, E. (2024). Yapay zekâ, posthuman folklor ve müzik teknolojisine bir bakış: Atatürk’ün sesinden türküler dinlediniz. Folklor Akademi Dergisi, 7(1), 388–402. https://doi.org/10.55666/folklor.1423890
  • Amate, S., ve Sarnaik, A. (2024). Detecting voice and video clones through deep learning and artificial intelligence: A study on the effectiveness of techniques against deep fakes International Journal of Advances in Engineering and Management (IJAEM), 6(6), 568–574.
  • Amazaga, N. ve Hajek, j. (2022). Availability of voice deepfake technology and its ımpact for good and evil. Proceedings of the 23rd Annual Conference on Information Technology Education, USA, 23-28. https://doi.org/10.1145/3537674.3554742
  • Amjad Hassan Khan, M. K., ve Aithal, P. S. (2022). Voice biometric systems for user identification and authentication: A literature review. International Journal of Applied Engineering and Management Letters (IJAEML), 6(1), 198–209. https://doi.org/10.5281/zenodo.6471040
  • Altuncu, E., Franqueria, V. N. L. ve Li, S. (2022). Deepfake: definitions, performance metrics and standards, datasets and benchmarks, and a meta-review. https://arxiv.org/pdf/2208.10913
  • Audio Trends Report. (2024). Audio experiences report by Voices. https://static.voices.com/wp-mainsite/uploads/20231205110639/2024Voices-AudioTrendsReport.pdf
  • Baris, A. (2024). AI covers: Legal notes on audio mining and voice cloning. Journal of Intellectual Property Law & Practice, 19(7), 571–576. https://doi.org/10.1093/jiplp/jpae029
  • Belada, N. E. S. (2024). Deepfake dezenformasyonu. Bilişim Hukuku Dergisi. 6(1), 321-358.
  • Bennie, P. (2025). Professor Hawking's voice. https://www.chiark.greenend.org.uk/~peterb/hawking/
  • Berk, M. E. (2020). Dijital çağın yeni tehlikesi “deepfake”. OPUS, 16(28), 1508-1523. https://doi.org/10.26466/opus.683819
  • Bizz, M. (2024, Ekim 18). 10 AI scams you should watch out for: Stay smart, stay safe. Techpilot. https://techpilot.ai/10-ai-scams-you-should-watch-out-for/
  • Boles, A., ve Rad, P. (2017). Voice biometrics: Deep learning-based voiceprint authentication system. 12th System of Systems Engineering Conference, USA, 1-6. https://doi.org/10.1109/SYSOSE.2017.7994971
  • Brattberg, E., Csernatoni, R., ve Rugova, V. (2020). Europe and AI: Leading, lagging behind, or carving its own way? Working Paper, Carnegie Endowment for International Peace. https://carnegie-production-assets.s3.amazonaws.com/static/files/BrattbergCsernatoniRugova_-_Europe_AI.pdf
  • Brown, S. (2021, Nisan 21). Machine learning, explained. https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
  • Budnik R. A. ve Evpak E. G. (2024) Human Voice: Legal Protection Challenges. Legal Issues in the Digital Age, 5(4), 28–45. https://doi.org/10.17323/2713-2749.2024.4.28.45
  • Caballar, R. D. (2024). New techniques emerge to stop audio deepfakes: A recent FTC challenge crowned three ways to thwart nefarious voice clones. https://spectrum.ieee.org/deepfake-audio
  • Calzade, D. H. (2024, 9 Ekim). The dangers of voice cloning and how to combat it. The Conversation. https://theconversation.com/the-dangers-of-voice-cloning-and-how-to-combat-it-239926
  • Cavarero, A. (2005). For more than one voice: Toward a philosophy of vocal expression. Stanford University Press.
  • Chen, X., Li, Z., Setlur, S. ve Xu, W. (2022). Exploring racial and gender disparities in voice biometrics. Scientific Reports, 12, 3723. https://doi.org/10.1038/s41598-022-06673-y
  • Czyzewski, A. (2024). Enhancing voice biometric security: Evaluating neural network and human capabilities in detecting cloned voices. The Journal of the Acoustical Society of America, 155(3). https://doi.org/10.1121/2.0001978
  • Daniel, J. (2024, Kasım 23). AI voice cloning: The future of speaking without saying a word. Techpilot. https://techpilot.ai/ai-voice-cloning-and-the-tech-behind/
  • Davies, J. Q. (2019). "I am an essentialist": Against the voice itself. M. Feldman ve J. T. Zeitlin (Eds.), The voice as something more (ss. 142-171). The University of Chicago Press.
  • Devine, C., O'Sullivan, D., ve Lyngaas, S. (2024, Şubat 1). A fake recording of a candidate saying he'd rigged the election went viral. Experts say it's only the beginning. https://edition.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html
  • Dolar, M. (2012). The linguistics of the voice. J. Sterne (Ed.), The Sound Studies Reader (ss. 539-555). Routledge.
  • Dost, S. (2023a). Yapay zekâ ve uluslararası hukukun geleceği. Süleyman Demirel Üniversitesi Hukuk Fakültesi Dergisi 13(2), 1271-1313.
  • Dost, S. (2023b). Yapay zekâ ve ifade özgürlüğü. DÜHFD, 28(49), 279-318.
  • Ethics at Resemble AI. (2025). https://www.resemble.ai/ethics/
  • Elevenlabs. (2025). Create a replica of your voice that sounds just like you. https://elevenlabs.io/voice-cloning
  • Federal Trade Commission. (2023, 16 Kasım). Preventing the harms of AI-enabled voice cloning. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning#ftn1
  • Feher, K. (2025). Generative AI, media and society. Routledge.
  • Gedye, G. (2025, 10 Mart). AI voice cloning: do these 6 companies do enough to prevent misuse? https://innovation.consumerreports.org/AI-Voice-Cloning-Report-.pdf
  • Gordon, R. (2024). 3 questions: What you need to know about audio deep fakes. https://news.mit.edu/2024/what-you-need-to-know-audio-deepfakes-0315
  • Gowda, H. B., Ramakumar, K. D., Sheetal, V., Sushma, M., ve Madhusudhan, G. K. (2023). Real-time voice cloning using deep learning: A case study. International Journal of Creative Research Thoughts, 11(5), 171-175. https://ijcrt.org/papers/IJCRT2305749.pdf
  • Gök, B. (2024). Yapay zekâ sistemleri tarafından üretilen fikri ürünlerin telif hakkı. Platon Plus Yayıncılık.
  • Grand View Research. (2022). AI voice generators market size, share & trends analysis report by offering (software, services), by application (audio & speech generation, voice cloning & conversion), by end-use, by region, and segment forecasts, 2024-2030. https://www.grandviewresearch.com/industry-analysis/ai-voice-generators-market-report
  • Grodal, S., Ha, J., Hood, E. ve Rajunov, M. (2024). Between humans and machines: The social construction of the generative AI category. Organization Theory, 5(3). https://doi.org/10.1177/26317877241275125
  • Gunning, T. (2019). A voice that is not mine: Terror and the mythology of the technological voice. M. Feldman ve J. T. Zeitlin (Ed.), The voice as something more (ss. 325-339). The University of Chicago Press.
  • Gutierrez, C. I., Aguirre, A., Uuk, R., Boine, C. C. ve Franklin, M. A. (2022). Proposal for a definition of general purpose artificial intelligence systems. Digital Society, 3. http://dx.doi.org/10.2139/ssrn.4238951
  • Güvenilir Yapay Zekâ Sistemleri için Etik İlkeler Rehberi (2019). European Commission. https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf
  • Håkansson, A. ve Phillips-Wren, G. (2024). Generative AI and large language models - Benefits, drawbacks, future and recommendations. Procedia Computer Science, 246, 5458-5468. https://doi.org/10.1016/j.procs.2024.09.689
  • Hoffmann, M. ve Nurski, L. (2021). What is holding back artificial intelligence adoption in Europe? Bruegel Policy Contribution, No. 24/2021. Bruegel. https://www.bruegel.org/sites/default/files/private/wp_attachments/PC-24-261121.pdf
  • How ElevenLabs is preparing for elections in 2024. (2024, 17 Şubat). https://elevenlabs.io/blog/how-elevenlabs-is-preparing-for-elections-in-2024
  • Hu, W. ve Zhu, X. (2023). A real-time voice cloning system with multiple algorithms for speech quality improvement. PLOS ONE, 18(4). https://doi.org/10.1371/journal.pone.0283440
  • Huijstee, M. V., Boheemen, P. V., Das, D., Nierling, L., Jahnel, J., Karaboga, M. ve Fatun, M. (2021). Tackling deepfakes in European policy. European Parliamentary Research Service. https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2021)690039
  • Hussein, K., ve Özad, B. (2025). Ai-driven media manipulation: public awareness, trust, and the role of detection frameworks in addressing deepfake technologies. İnterdisipliner Medya ve İletişim Çalışmaları, 2(3), 98-133.
  • Ihde, D. (2007). Listening and voice: Phenomenologies of sound (2. baskı). State University of New York Press.
  • Ji, J. (2023). Copyrights infringement risks in AI-generated cover songs: An analysis based on current legislation. Lecture Notes in Education Psychology and Public Media, 20, 19-25. https://doi.org/10.54254/2753-7048/20/20231467
  • Johnson, B. (2017). Sound and voice. J. Damousi ve P. Hamilton (Ed.), A cultural history of sound, memory and senses. Routledge.
  • Jurcys, P., Fenwick, M. ve Liaudanskas, A. (2024). Voice cloning in an age of generative AI: Mapping the limits of the law & principles for a new social contract with technology. SSRN. https://ssrn.com/abstract=4850866
  • Kara Kılıçarslan, S. (2019). Yapay zekanın hukuki statüsü ve hukuki kişiliği üzerine tartışmalar. YBHD, 4(2), 363-389.
  • Karpf, A. (2006). The human voice. Bloomsbury.
  • Kerry, C. (2020). Protecting privacy in an AI-driven world. The Brookings Institution. https://www.brookings.edu/research/protecting-privacy-in-an-ai-drivenworld
  • Keskin, A. (2024, 15-17 Mayıs). Duygu, nostalji ve sosyoloji ilişkisinde yapay zekâ: Zeki Müren-parla (ai cover) marşının analizi. 11. Uluslararası İletişim Günleri Dijital Eşitsizlik & Veri Sömürgeciliği Sempozyumu, İstanbul, https://ifig.uskudar.edu.tr/uploads/content/files/ifig2024-bildiri-ozetleri.pdf
  • Khanjani, Z., Watson, G. ve Janeja, V. P. (2023). Audio deepfakes: A survey. Frontiers in Big Data, 5, 1-24. https://doi.org/10.3389/fdata.2022.1001063
  • Korada, L. (2024). Data poisoning: What is it and how it is being addressed by the leading Gen AI providers? European Journal of Advances in Engineering and Technology, 11(5), 105-109. https://doi.org/10.5281/zenodo.13318796
  • Kowalczuk, I. (2025, 21 Şubat). On track to help 1 million people regain their voice. ElevenLabs. https://elevenlabs.io/blog/impact-program-v2
  • Köçeri, K. (2023). Yapay zekânın siyasi, etik ve toplumsal açıdan dezenformasyon tehdidi. İletişim ve Diplomasi, (11), 247-266. https://doi.org/10.54722/iletisimvediplomasi.1358267
  • Kuligowska, K., Kisielewicz, P., ve Włodarz, A. (2018). Speech synthesis systems: disadvantages and limitations. International Journal of Engineering and Technology, 7, 234. https://doi.org/10.14419/ijet.v7i2.28.12933
  • Living in a brave new AI era. (2023). Nature Human Behaviour, 7(11), 1799. https://doi.org/10.1038/s41562-023-01775-7
  • Madianou, M. (2021). Nonhuman humanitarianism: when “AI for good” can be harmful. Information, Communication & Society, 24(6), 850–868. https://doi.org/10.1080/1369118X.2021.1909100
  • Magramo, K. (2024, 17 Mayıs). British engineering giant Arup revealed as $25 million deepfake scam victim. CNN. https://edition.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html
  • Martin, E. J. (2022, 7 Nisan). Deepfakes: The latest trick of the tongue. Speech Technology Magazine. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=152290
  • Martin, M. ve Manuel, O. (2025, 4 Mart). Why an acting voice coach isn't angry about Adrien Brody's AI-assisted Oscar win. https://www.speechtechmag.com/Articles/ReadArticle.aspx?ArticleID=152290
  • MESAM, MSG ve MÜYAP müzik sektörünün köklü sorunlarının çözümü konusunda ortak hareket etme kararı aldı. (2024, 8 Ağustos). MESAM. https://mesam.org.tr/duyurular/mesam-msg-ve-muyap-muzik-sektorunun-koklu-sorunlarinin-cozumu-konusunda-ortak-hareket-etme-karari-aldi
  • Milewski, K., Zaporowski, S., ve Czyżewski, A. (2023). Comparison of the ability of neural network model and humans to detect a cloned voice. Electronics, 12(21), 4458. https://doi.org/10.3390/electronics12214458
  • Mitchell, M. (2021). Why AI is harder than we think. Proceedings of the Genetic and Evolutionary Computation Conference, USA. https://doi.org/10.1145/3449639.3465421
  • Morrison, R. (2025). Dali’s voice: How AI is bringing the surrealist back to life. ElevenLabs. https://elevenlabs.io/blog/dalis-voice-how-ai-is-bringing-the-surrealist-back-to-life
  • Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3, 869–877. https://doi.org/10.1007/s43681-022-00209-w
  • Napolitano, D. (2020). Voice cloning: ethical considerations. Proceedings of the Eighth Conference on Computation, Communication, Aesthetics & X, Porto, 59–73. https://2020.xcoax.org/xCoAx2020.pdf
  • Neekhara, P., Hussain, S., Dubnov, S., Koushanfar, F. ve McAuley, J. (2021). Expressive neural voice cloning. Proceedings of the 13th Asian Conference on Machine Learning, 157, 252–267. https://proceedings.mlr.press/v157/neekhara21a.html
  • OECD. (2024). Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
  • Park, M. (2024). What a robocall of Biden's AI-generated voice could mean for the 2024 election. NPR. https://www.npr.org/2024/02/07/1229856682/what-a-robocall-of-bidens-ai-generated-voice-could-mean-for-the-2024-election
  • Pawelec, M. (2024). Decent deepfakes? Professional deepfake developers’ ethical considerations and their governance potential. AI and Ethics, 5, 2641–2666. https://doi.org/10.1007/s43681-024-00542-2
  • Ramati, I. (2024). Algorithmic ventriloquism: The contested state of voice in AI speech generators. Social Media + Society, 10(1), 1–10. https://doi.org/10.1177/20563051231224401
  • Resemble AI. (2025a). Protecting against the risks of AI voice cloning. https://www.resemble.ai/dangers-of-ai-voice-cloning-protection/
  • Resemble AI. (2025b). Understanding AI voice cloning: What, why, and how. https://www.resemble.ai/understanding-ai-voice-cloning/
  • Respeecher. (2025). About Respeecher. https://www.respeecher.com/about-us
  • Russell, S. (2022). If we succeed. Daedalus, 151(2), 43–57. https://doi.org/10.1162/daed_a_01899
  • Schäfer, K., Choi, J. ve Zmudzinski, S. (2024). Explore the world of audio deepfakes: A guide to detection techniques for non-experts. Proceedings of the 3rd ACM International Workshop on Multimedia AI against Disinformation, 13–22. https://doi.org/10.1145/3643491.3660289
  • Shaaban, O. A., Yıldırım, R. ve Alguttar, A. A. (2023). Audio deepfake approaches. IEEE Access, 11, 132652–132682. https://doi.org/10.1109/access.2023.3333866
  • Shinha, Y., Hintz, J., ve Siegert, I. (2024). Evaluation of audio deepfakes – Systematic review. Elektronische Sprachsignalverarbeitung 2024, Regensburg, 6-8 Mart 2024. https://doi.org/10.35096/othr/pub-7096
  • Singil, N. (2022). Yapay zekâ ve insan hakları. PPIL. Advance online publication. https://doi.org/10.26650/ppil.2022.42.1.970856
  • Software Emulation of a Hardware Voice Synthesiser. (2017). Hawkings Voice. https://hawkingsvoice.com/
  • Stroebel, L., Llewellyn, M., Hartley, T., Ip, T. S., ve Ahmed, M. (2023). A systematic literature review on the effectiveness of deepfake detection techniques. Journal of Cyber Security Technology, 7(2), 83–113. https://doi.org/10.1080/23742917.2023.2192888
  • Sun, C., Jia, S., Hou, S., ve Lyu, S. (2023). AI-synthesized voice detection using neural vocoder artifacts. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 904–912. https://doi.org/10.1109/CVPRW59228.2023.00097
  • Swenson, A., ve Weissert, W. (2024, 23 Ocak). New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary. AP News. https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5
  • Tahaoğlu, G., Kılıç, M., Üstübioğlu, B. ve Ulutaş, G. (2024). Derin sahte ses manipülasyonu tespit sistemleri üzerine bir derleme. Yüzüncü Yıl Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 29(1), 353-402. https://doi.org/10.53433/yyufbed.1358880
  • Tenbarge, K. (2024, 21 Mayıs). Scarlett Johansson says she was 'shocked, angered' when she heard OpenAI's voice that sounded like her. NBC News. https://www.nbcnews.com/tech/tech-news/scarlett-johansson-shocked-angered-openai-voice-rcna153180
  • Thomas, S. (2024). AI and actors: Ethical challenges, cultural narratives and industry pathways in synthetic media performance. Emerging Media, 2(3), 523–546. https://doi.org/10.1177/27523543241289108
  • Toktay, Y. ve Güven, A. (2025). Dezenformasyonla mücadele blok zincir (blockchain) teknolojisi. Etkileşim, 15, 100-122. https://doi.org/10.32739/etkilesim.2025.8.15.285
  • Ulusal Yapay Zekâ Stratejisi 2021-2025 (2021). Cumhurbaşkanlığı Dijital Dönüşüm Ofisi Başkanlığı ile Sanayi ve Teknoloji Bakanlığı. https://cbddo.gov.tr/uyzs
  • Ulusoy, H., ve Kaya İlhan, Ç. (2025). Siyasal iletişimde yapay zekâ etkisi ve deepfake (derin sahte) dezenformasyonu: 2024 ABD başkanlık seçimleri örneği. TRT Akademi, 10(23), 42–73. https://doi.org/10.37679/trta.1563828
  • Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 40–53. https://doi.org/10.22215/timreview/1282
  • Williams, R. (2024). Voice in the machine: AI voice cloning in film. Art Style | Art & Culture International Magazine, 13, 129–144. https://doi.org/10.5281/zenodo.10443451
  • Widodo, W., ve Bakir, H. (2024). Legal certainty of limitations on the use of artificial intelligence (AI) voice cloning in songs and music as a form of protection of musicians' copyrights. Proceedings of the 4th International Conference on Law, Social Sciences, Economics, and Education, Endonezya. http://doi.org/10.4108/eai.25-5-2024.2349353
  • Yapay Zekâ Yasası (2024). EU AI Act. European Union. https://artificialintelligenceact.eu/the-act/
  • Yıldırım Köse, M. ve Ayçıl Altınok, G. (2025). Yapay zekâ ve müzik sektörü. Gün + Partners. https://gun.av.tr/tr/goruslerimiz/makaleler/yapay-zeka-ve-muzik-sektoru-1
  • YZ Konseyi Tavsiyesi (2019). Recommendation of the Council on Artificial Intelligence. OECD. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
  • Yünlü, S. (2024). Üretici yapay zekâ kaynaklı norm ve kişi bazlı hukuki sorumluluk. Adalet Dergisi, (72), 501-542. https://doi.org/10.57083/adaletdergisi.1484067
  • Zhang, B., Cui, H., Nguyen, V., ve Whitty, M. (2025). Audio deepfake detection: what has been achieved and what lies ahead. Sensors, 25(7), 1989. https://doi.org/10.3390/s25071989
There are 103 citations in total.

Details

Primary Language Turkish
Subjects Communication and Media Studies (Other)
Journal Section Articles
Authors

Recep Ünal 0000-0001-6181-6255

Publication Date July 20, 2025
Submission Date April 28, 2025
Acceptance Date July 10, 2025
Published in Issue Year 2025 Volume: 34 Issue: Uygarlığın Dönüşümü - Sosyal Bilimlerin Bakışıyla Yapay Zekâ

Cite

APA Ünal, R. (2025). “DUYULAN BENİM SESİM AMA KONUŞAN BEN DEĞİLİM”: ETİK VE HUKUKİ TARTIŞMALAR BAĞLAMINDA YAPAY ZEKÂ ÇAĞINDA SES KLONLAMA. Çukurova Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 34(Uygarlığın Dönüşümü - Sosyal Bilimlerin Bakışıyla Yapay Zekâ), 322-340. https://doi.org/10.35379/cusosbil.1686135