AI Usage Policy

1. General Provisions

This policy establishes the principles, rules, and limitations governing the use of artificial intelligence (AI) and AI-assisted technologies in the preparation, submission, peer review, and editorial processing of manuscripts.

The policy aims to ensure:

  • academic integrity;
  • transparency of research practices;
  • compliance with international ethical standards;
  • reliability and reproducibility of research results.

2. Legal and Ethical Framework

This policy is based on:

  • the Law of Ukraine No. 4742-IX “On Academic Integrity” (effective from 1 February 2026);
  • the principles of the Committee on Publication Ethics (COPE);
  • policies of Elsevier and Springer Nature on AI use.

In accordance with applicable regulations, undisclosed or unethical use of AI constitutes a violation of academic integrity, comparable to plagiarism.


3. Core Principles of AI Use

The use of AI is permitted only if the following principles are respected:

  1. Transparency — full disclosure of AI use is mandatory;
  2. Accountability — authors bear full responsibility for all content;
  3. Verification — all AI-assisted outputs must be validated by humans;
  4. Justification — AI use must be methodologically justified;
  5. Reproducibility — results must be verifiable and, where applicable, reproducible.

4. Use of AI by Authors

Authors may use AI tools strictly in an auxiliary capacity, for:

  • language editing and proofreading;
  • structuring and organizing content;
  • preliminary information analysis;
  • processing large datasets (subject to verification).

Prohibited practices:

  • submitting AI-generated content as original work without disclosure;
  • generating scientific conclusions without validation;
  • listing AI tools as co-authors.

5. AI Disclosure Statement (Mandatory Section)

All manuscripts must include a dedicated section titled “AI Disclosure Statement”, following this template:

During the preparation of this manuscript, the author(s) used the following artificial intelligence tools:

  • Tool name (version, developer):
  • Purpose of use:
  • Stage of use:

The author(s) confirm that all outputs were verified and that the final interpretations and conclusions are solely those of the author(s).

Additional requirements (for moderate or high levels of AI involvement, as defined in Section 6):

  • description of prompt logic;
  • explanation of validation procedures;
  • disclosure of limitations of AI use.

6. Levels of AI Involvement

To standardize editorial assessment, three levels of AI involvement are defined:

Low involvement

  • grammar correction, translation, formatting;
  • minimal risk.

Requirements:

  • simplified disclosure.

Moderate involvement

  • literature structuring;
  • assisted drafting;
  • preliminary data analysis.

Requirements:

  • full disclosure;
  • methodological explanation;
  • validation statement.

High involvement

  • generation of substantial text segments;
  • automated data analysis;
  • modeling or predictive analytics.

Requirements:

  • detailed methodology;
  • reproducibility of results;
  • additional peer review (if required).

7. Use of AI by Reviewers

  • The use of generative AI tools in the preparation of peer review reports is not permitted.
  • Reviewers are fully responsible for the content, objectivity, and confidentiality of their reviews.

8. Use of AI by Editors

  • Editors must not upload submitted manuscripts or any part thereof into generative AI tools.
  • If a violation is suspected, the editor must initiate an investigation in accordance with the procedure set out in Section 10.

9. Policy Violations

Violations include, but are not limited to:

  • undisclosed use of AI;
  • submission of AI-generated text as original work;
  • generation of conclusions without validation;
  • use of AI in peer review;
  • manipulation of research outputs using AI.

10. Misconduct Investigation Procedure (Workflow)

  1. Detection (editor, reviewer, system, or third party);
  2. Preliminary assessment;
  3. Author inquiry (request for clarification and supporting materials);
  4. Editorial evaluation;
  5. Decision-making;
  6. Documentation and record-keeping.

Possible outcomes:

  • no violation identified;
  • revision required;
  • rejection of manuscript;
  • retraction (post-publication);
  • notification of the author’s institution.

Standard of evaluation: balance of probabilities (in line with COPE recommendations).


11. Responsibilities

  • Authors are fully responsible for the accuracy, originality, and integrity of their work.
  • Reviewers are responsible for independent, objective, and confidential evaluation.
  • Editors are responsible for enforcing this policy and ensuring ethical compliance.

12. Acceptable and Unacceptable Practices

Acceptable:

  • use of AI for technical support (editing, translation);
  • AI-assisted data analysis with verification;
  • idea generation followed by independent development.

Unacceptable:

  • undisclosed AI use;
  • automated generation of scientific conclusions;
  • reliance on AI without validation.

13. Sanctions

In cases of violation, the journal may apply:

  • manuscript rejection;
  • retraction of published articles;
  • notification of affiliated institutions;
  • temporary or permanent submission restrictions.

14. Alignment with International Standards

This policy aligns with:

  • COPE — guidance on AI tools and authorship;
  • Elsevier — policy on AI-assisted writing;
  • Springer Nature — AI policy framework.

Key principles:

  • AI cannot be listed as an author;
  • responsibility remains with human authors;
  • transparency is mandatory.

15. Limitations of AI Governance

The editorial board acknowledges:

  • the difficulty of accurately detecting AI-generated content;
  • limitations of AI detection tools;
  • potential subjectivity in assessments;
  • the rapidly evolving nature of AI technologies.

16. Final Provisions

Artificial intelligence is recognized as a supporting tool, not a substitute for human intellectual contribution.

Compliance with this policy is a mandatory condition for publication.