Artificial Intelligence (AI) Policy
The journal supports the responsible use of generative AI tools that uphold high standards of data security, confidentiality, and copyright protection for purposes such as:
- Idea generation and exploration
- Language improvement
- Interactive online search with LLM-enhanced search engines
- Literature classification
- Coding assistance
1. Purpose and Scope
This policy establishes guidelines for the ethical, transparent, and responsible use of artificial intelligence (AI) and AI-assisted technologies in the journal's operations and its publishing and editorial processes. The policy applies to authors, reviewers, editors, and journal staff.
2. Guiding Principles
a. Transparency: All uses of AI in research, writing, reviewing, or publishing must be disclosed explicitly to ensure clarity about its role.
b. Accountability: The responsibility for the integrity of the content lies with the authors and editorial team, irrespective of AI involvement.
c. Ethical Integrity: AI applications should comply with ethical standards, including respecting patient privacy, avoiding bias, and upholding scientific rigor.
d. Compliance with Standards: AI use must align with international and institutional policies, including COPE (Committee on Publication Ethics) guidelines.
3. Use of AI in Manuscript Preparation
a. Authorship: AI tools cannot be listed as an author. Only humans who meet authorship criteria as defined by the journal’s guidelines can be credited.
b. AI-Assisted Writing: Authors must disclose the use of AI tools (e.g., language editing or data analysis tools) in the Methods or Acknowledgment sections.
c. Originality: Authors must ensure that AI-generated content does not include plagiarized material and is adequately verified for accuracy.
4. Use of generative AI and AI-assisted tools in figures, images and artwork
We do not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities. The only exception is when the use of AI or AI-assisted tools is part of the research design or research methods (such as AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). In that case, such use must be described in a reproducible manner in the methods section, including an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, together with the name of the model or tool, its version and extension numbers, and its manufacturer.
5. Use of AI in Manuscript Submission and Review Processes
a. Submission Package Verification: This functionality scans all submitted documents, including the manuscript file, tables, and figure files, to ensure consistency with the instructions for authors. It identifies and lists any missing elements.
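As a rough illustration of the completeness check described above, the sketch below compares a submission against a required-element list and reports what is missing. The element names are hypothetical, not the journal's actual checklist.

```python
# Hypothetical submission-package completeness check.
# The required-element names below are illustrative assumptions,
# not the journal's actual instructions for authors.
REQUIRED_ELEMENTS = {"manuscript", "tables", "figures", "cover_letter"}

def missing_elements(submitted: set[str]) -> list[str]:
    """Return a sorted list of required elements absent from the submission."""
    return sorted(REQUIRED_ELEMENTS - submitted)
```

For example, a package containing only the manuscript and tables would be flagged as missing the cover letter and figures.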
b. AI-Assisted Language Quality Evaluation: This tool evaluates the writing quality and comprehensibility of the manuscript. Authors receive detailed recommendations for text improvements to ensure manuscripts meet high linguistic standards before peer review.
c. Manuscript Technical Review: AI-powered algorithms assess manuscripts for completeness, clarity, and conciseness. The review ensures that vital sections such as the abstract, introduction, methodology, results, discussion, and references are present and adequately developed. This process aims to eliminate verbosity, enhance readability, and ensure technical and formal quality before scientific review.
d. Reference Check: AI tools check for duplicate references and verify compliance with the journal’s required citation style.
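A minimal sketch of the duplicate-reference check might normalize each entry (lowercasing and stripping punctuation) so that trivially different formattings of the same citation compare equal. This is an illustrative approach, not the journal's actual tooling.

```python
import re

def normalize(ref: str) -> str:
    """Lowercase a reference and collapse punctuation/whitespace so
    trivially different formattings of the same entry compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", ref.lower()).strip()

def find_duplicates(references: list[str]) -> list[tuple[int, int]]:
    """Return index pairs of references that normalize to the same string."""
    seen: dict[str, int] = {}
    pairs: list[tuple[int, int]] = []
    for i, ref in enumerate(references):
        key = normalize(ref)
        if key in seen:
            pairs.append((seen[key], i))
        else:
            seen[key] = i
    return pairs
```

A production check would also need fuzzier matching (e.g., author/year/title fields) to catch duplicates with genuinely different wording.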
e. Abstract Refinement: This feature helps authors and editors refine abstracts to meet a 300-word limit while preserving structure and meaning. Authors must approve AI-powered edits.
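The 300-word limit mentioned above reduces to a simple word-count check; a minimal sketch, assuming whitespace-delimited words:

```python
def abstract_word_count(text: str) -> int:
    """Count whitespace-delimited words in an abstract."""
    return len(text.split())

def within_limit(text: str, limit: int = 300) -> bool:
    """True if the abstract does not exceed the journal's word limit."""
    return abstract_word_count(text) <= limit
```

The refinement step itself (shortening while preserving structure and meaning) is the part that requires the AI tool and the author's explicit approval.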
f. Paper-Mill Detection: AI-driven tools identify manuscripts potentially originating from paper mills. By analyzing writing style, data consistency, literature integration, and author profiles, the tool assigns a score from 1 to 10 and provides justifications. This helps detect unethical submissions early and ensures the integrity of published research.
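The 1-to-10 scoring described above could, in the simplest case, combine weighted boolean risk signals and clamp the result to the scale. The signal names and weights below are invented for illustration; they are not the journal's actual detection model.

```python
def paper_mill_score(signals: dict[str, bool]) -> int:
    """Map boolean risk signals to a 1-10 score; more signals -> higher risk.
    Signal names and weights are illustrative assumptions only."""
    weights = {
        "templated_writing_style": 3,
        "inconsistent_data": 3,
        "shallow_literature_integration": 2,
        "suspicious_author_profile": 2,
    }
    raw = sum(w for name, w in weights.items() if signals.get(name, False))
    return max(1, min(10, raw))  # clamp to the 1-10 range
```

Real detectors are statistical models trained on known cases; a fixed weight table like this only conveys the shape of the output, not the method.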
g. Verification Reports: The verification and check reports are presented to the authors for appropriate corrections or approval.
6. Use of AI in Peer Review and Editorial Processes
a. Editorial Use: Editors may use AI tools to assist in plagiarism detection, language editing, or identifying suitable reviewers. Such use must not replace human judgment.
b. Reviewer Use: Reviewers may use AI tools for language assistance but must not rely on AI to generate review content. Any such use should be disclosed to the editorial team.
c. Bias and Fairness: AI tools used in editorial decisions must be validated to ensure they do not introduce bias against specific demographics, geographies, or research disciplines.
7. Data Privacy and Security
a. Data Protection: All data shared with AI tools must comply with data protection regulations, such as GDPR or HIPAA, to safeguard patient and author information.
b. Third-Party Tools: Any third-party AI tools used by the journal must undergo rigorous vetting to ensure compliance with privacy and security standards.
8. Ethical Review and Compliance
a. Monitoring: The journal will regularly review AI applications to ensure compliance with ethical and scientific standards.
b. Policy Violations: Non-compliance with the AI policy may result in actions ranging from manuscript rejection to reporting to the author’s institution.
9. Education and Support
a. Training: The journal will provide resources and training to authors, reviewers, and editors on the responsible use of AI in scholarly publishing.
b. Feedback Mechanisms: Stakeholders can provide feedback on AI-related practices to continuously improve the policy.
10. Policy Review
This AI policy will be reviewed and updated periodically to reflect advancements in AI technology, ethical standards, and scholarly publishing practices.
Last edited: February 9, 2025