For Authors
This policy aims to inform authors about the ethical and responsible use of artificial intelligence (AI) technologies in the writing and analysis of academic papers. Grounded in the principles of data privacy, academic integrity, and ethical standards, it seeks to enhance the reliability of scientific research while ensuring respect for the rights of individuals and communities. Within this framework, researchers must explicitly disclose which AI functions were used, at which stages of their research, and to what extent. The versions and technical specifications of the AI tools employed should also be detailed. Scientific research should prioritize benefiting humanity, ensuring that the knowledge generated is free from bias and intentional error and that scientific findings serve the greater good. All stakeholders in the research process, including individuals or communities that provide data, other living beings, and the environment, must be treated fairly and respectfully. Researchers are responsible for the outcomes generated by AI technologies and must remain accountable for these results. It is their legal and ethical duty to verify the impartiality, reliability, and accuracy of AI-generated outputs. AI tools must not be included in the references section; their use must instead be explicitly stated in the ethics statement of the paper.
Authors may use generative AI and AI-assisted technologies solely to enhance the readability and language quality of their work. Such tools must be employed under human oversight, as AI-generated content can be inaccurate, incomplete, or biased. Authors remain fully responsible for the content of their work. Any use of AI or AI-assisted technologies must be clearly disclosed in the manuscript, and this disclosure will be noted in the published article. Transparency in AI use fosters trust among authors, readers, reviewers, editors, and other contributors. AI and AI-assisted technologies must not be listed as authors or co-authors, nor cited as sources. Authorship entails responsibilities that only humans can fulfill, such as verifying accuracy, ensuring originality, approving the final version, and complying with publishing ethics.
Use of AI in Figures, Images, and Artwork:
The use of generative AI or AI-assisted tools to create or manipulate images in submitted manuscripts is not permitted. This includes altering features within an image (e.g., enhancing, removing, obscuring). Basic adjustments to brightness, contrast, or color are acceptable only if they do not conceal or distort any original information. Forensic tools may be used to detect potential image manipulation. An exception is allowed if AI is part of the research methodology. In such cases, authors must describe the AI use in a reproducible manner in the methods section, including details like tool name, version, and manufacturer. Authors must comply with the tool’s usage terms and may be asked to provide pre-AI or raw image data for editorial review.
Use in Artistic Content:
Generative AI must not be used to produce visual content such as graphical abstracts. For cover art, AI-generated content may be permitted only with prior approval from the editor and publisher, if all necessary rights have been cleared and proper attribution is ensured.
For Reviewers
When a researcher is invited to review another researcher’s manuscript, the submitted paper must be treated as a confidential document. Reviewers should not upload the manuscript, in whole or in part, to generative AI tools, as doing so may violate the author’s confidentiality and intellectual property rights. If the manuscript contains personal data, this may also constitute a breach of data privacy rights. This confidentiality obligation also applies to the peer review report, as it may include sensitive information related to the manuscript and its authors. Therefore, peer review reports should not be uploaded to AI tools, even for the sole purpose of improving language or readability. The peer review process is a cornerstone of the scientific ecosystem, and this journal adheres to the highest ethical standards in this regard. Reviewing a scientific manuscript involves responsibilities that can only be carried out by humans. Generative AI tools lack the critical thinking and original judgment required for scientific evaluation and may produce inaccurate, incomplete, or biased assessments. Reviewers are personally responsible and accountable for the entire content of their review reports. Authors are permitted to use generative AI tools before submission, but only to enhance the language and readability of their manuscripts, and such use must be clearly disclosed. Reviewers can find these disclosures in a separate section at the end of the manuscript, just before the references. Any AI-based technologies used to support the review process must be implemented with attention to confidentiality and data protection principles.
For Editors
A submitted manuscript must be treated as a confidential document. Editors should not upload the manuscript, in whole or in part, to generative AI tools, as doing so may violate the author’s confidentiality, proprietary rights, and—if applicable—personal data protection. This obligation of confidentiality also applies to all communications related to the manuscript (e.g., notification and decision letters), which must likewise not be uploaded to AI tools. Editorial evaluation involves responsibilities that can only be carried out by humans. Generative AI lacks the critical thinking and independent judgment required for this process and may produce inaccurate, incomplete, or biased results. Editors are responsible and accountable for the evaluation process, the final decision, and its communication to the authors. Authors may use generative AI tools before submission solely to improve the language and readability of their manuscripts, and such use must be clearly disclosed. Editors can find this disclosure in a separate section at the end of the manuscript, just before the references. If an editor suspects that an author or reviewer has violated these rules, they should inform the publisher. AI-based technologies that support the editorial process should be used with care and must respect confidentiality and data protection rights.
Sources:
1. Republic of Türkiye Council of Higher Education, Ethical Guide on the Use of Generative AI in Scientific Research and Publication Activities of Higher Education Institutions (English summary available)
2. Elsevier Generative AI policies for journals
This journal is a member of and subscribes to the principles of the Committee on Publication Ethics.
IBAD Journal of Social Sciences | ISSN 2687-2811 (online)