Journal of ROL Sport Sciences acknowledges that generative artificial intelligence and AI-assisted technologies, when used responsibly, can help researchers work efficiently, gain critical insights quickly, and achieve better outcomes. These technologies assist researchers in synthesizing complex literature, providing an overview of a field or research question, identifying research gaps, generating ideas, and offering tailored support for tasks such as content organization and improving language and readability. However, while these opportunities may be transformative, they cannot replicate human creativity and critical thinking. Within this framework, the policy of Journal of ROL Sport Sciences regarding the use of AI technology aims to support authors, reviewers, and editors in making sound judgments about the ethical use of such technologies.
Artificial Intelligence (AI) Policy for Authors
Authors must not list AI tools as an author or co-author, nor cite AI tools as an author. Authorship refers to responsibilities and tasks that can only be undertaken and fulfilled by humans, and authors are responsible and accountable for the content of their work. This responsibility includes:
- Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI outputs (including checking sources, since references generated by AI may be incorrect or fabricated).
- Editing and adapting all material thoroughly to ensure that the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, perspectives, and ideas.
- Ensuring that all tools or sources used, whether AI-based or not, are clearly and transparently disclosed to readers, with a disclosure statement required upon submission.
- Ensuring that the manuscript is developed in a way that safeguards data privacy, intellectual property, and other rights, by checking the terms and conditions of any AI tool used.
Authors should consider the following uses of AI tools as appropriate, provided they are disclosed in the methods or acknowledgements section:
- Assistance with literature review or compilation of relevant sources
- Translation of materials during the research process
- Use of AI-generated software or code in conducting additional research
- Assistance with research data visualization
- Production of representative illustrations or infographics
- Code development or error-checking with AI assistance
- Assistance with compiling references
Inappropriate uses of AI tools include:
- Generation of inaccurate or misleading text or content
- Fabrication of data or entire submissions through a series of prompts
- Conducting interviews with AI tools in place of human participants in qualitative research
- Analysis of participants' experiences or qualitative themes by AI in place of the researcher's own interpretation
- Plagiarism or inappropriate attribution to prior sources
- Generation of artificial images presented as original or novel research images
- Fabricated references or falsified claims
- Use of AI tools in editorial work or peer review processes
Submitted works will not be rejected solely due to the disclosed use of generative AI; however, if the editor becomes aware that generative AI has been used inappropriately and without disclosure in the preparation of a submission, the editor reserves the right to reject the work at any stage of the publishing process.
Artificial Intelligence (AI) Policy for Reviewers
- The evaluation of a scientific manuscript entails responsibilities that can only be attributed to humans; therefore, generative artificial intelligence or AI-assisted technologies must not be used in the peer review process. The critical thinking and original assessment required for peer review are beyond the scope of AI technologies, which carry the risk of producing incorrect, incomplete, or biased outcomes.
- When a researcher is invited to review another researcher’s manuscript, the manuscript must be treated as a confidential document. Reviewers must not upload the submitted manuscript or any part of it into a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors.
- Uploading peer review reports into AI tools is also inappropriate, as such reports may contain author information. For this reason, reviewers must not upload their reports into AI tools, even if only for the purpose of improving language or readability.
- The reviewer is responsible and accountable for the content of the review report.
- Reviewers who inappropriately generate review reports using AI tools will no longer be invited to review for the journal, and such reviews will not be considered in the final decision.
- If reviewers suspect inappropriate or undisclosed use of generative AI in a manuscript, they should raise their concerns with the journal editor. If editors suspect undisclosed use of AI in a submitted manuscript or review, they should apply this policy in their editorial assessment and weigh the reviewer’s report accordingly.
Artificial Intelligence (AI) Policy for Editors
- A submitted manuscript must be treated as a confidential document. Editors must not upload the manuscript or any part of it into a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors.
- Editors must not upload the manuscript or any part of it into AI tools even for the purpose of improving language or readability.
- Managing the editorial evaluation of a manuscript entails responsibilities that can only be attributed to humans; therefore, AI-assisted technologies must not be used by editors to aid in the evaluation or decision-making process of a manuscript.
- The editor is responsible and accountable for the editorial process, the final decision, and its communication to the authors.
- Editors may, however, use AI tools to assist in identifying suitable reviewers for the manuscript.