Artificial Intelligence (AI) and Automated Tools Policy

The International Journal of Health Concord (IHC) requires authors, peer reviewers, and editors to follow clear guidelines on the use of automated tools, including generative artificial intelligence (AI) tools such as large language models (LLMs), in order to maintain integrity, transparency, and scientific quality.

For Authors

Disclosure: Authors must disclose any use of generative AI beyond routine language correction, formatting, or editing.
Responsibility: Authors remain responsible for verifying the accuracy and validity of all content generated by automated tools.
Authorship and Citation: AI tools cannot be listed as authors, and generative AI outputs cannot be cited as sources.

For Peer Reviewers and Editors

Prohibition: Peer reviewers and editors must not use generative AI to produce peer review reports due to risks of confidentiality breaches, bias, superficial feedback, hidden prompts, and false information (e.g., fake references).
Editing: Reviewers and editors may use AI to edit or rewrite their own text only if that use is clearly disclosed.

For the Journal

Transparency: Any routine use of automated tools by the journal or publisher must be transparently disclosed.
Testing: All automated tools must be appropriately tested before deployment.
Human Oversight: Human-in-the-loop review is mandatory for all automated processes, including tools that detect text similarity, figure manipulation, duplication, and undeclared AI use, as well as automated peer-reviewer suggestion systems, to ensure their reliability and integrity.

This policy ensures that the publication process at IHC upholds ethical standards, transparency, and high-quality scientific publishing practices.