ISSN 2786-491X
e-ISSN 3083-7472
UDC 1+33+34

Generative AI Policy

Generative Artificial Intelligence (AI) tools, such as large language models (LLMs) and multimodal models, continue to evolve rapidly, including in their applications for businesses and consumers.

We welcome the new opportunities offered by Generative AI tools, particularly in areas such as: enhancing idea generation and exploration, supporting authors in expressing content in a non-native language, and accelerating the research and dissemination process.

The journal provides guidance to authors, editors, and reviewers on the use of such tools, which may evolve given the rapid development of the AI field.

Generative AI tools can produce a diverse range of content, including text, images, audio, and synthetic data. Examples of such tools include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, Runway, etc.

While Generative AI has enormous potential to enhance creativity for authors, the current generation of Generative AI tools carries certain risks, including:

  • Inaccuracy and bias: Generative AI tools are statistical (as opposed to factual) in nature, and as such, they may introduce inaccuracies, falsehoods (so-called "hallucinations"), or bias, which can be difficult to detect, verify, and correct.
  • Lack of attribution: Generative AI often does not adhere to the standard practice of the global scholarly community for the correct and accurate attribution of ideas, quotes, or citations.
  • Confidentiality and intellectual property risks: Currently, generative AI tools are often used on third-party platforms that may not provide sufficient standards of confidentiality, data security, or copyright protection.
  • Unintended use: Generative AI providers may reuse input or output data from user interactions (e.g., for AI training). This practice could potentially infringe on the rights of authors and publishers, among others.

Authors

Authors are responsible for the originality, validity, and integrity of the content of their submissions. When choosing to use Generative AI tools, journal authors are expected to do so responsibly and in accordance with our editorial policies on authorship and principles of publishing ethics. This includes reviewing the outputs of any Generative AI tools and confirming the accuracy of the content.

The journal supports the responsible use of Generative AI tools that comply with high standards of data security, confidentiality, and copyright protection, particularly in the following cases:

  • Language improvement
  • Interactive online search with AI-enhanced search engines
  • Literature classification

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research, and validation, and that it is their own original work.

Generative AI tools should not be listed as authors, as such tools cannot assume responsibility for the submitted content or manage copyright and licensing agreements. Authorship requires taking responsibility for content, consenting to publication via a publishing agreement, and providing contractual assurances about the integrity of the work, among other principles. These are uniquely human responsibilities that cannot be undertaken by Generative AI tools.

Authors must clearly acknowledge the use of Generative AI tools within the article or book via a statement that includes: the full name of the tool used (with version number), how it was used, and the reason for its use. For article submissions, this statement must be included in the "Materials and Methods" or "Acknowledgments" section. Such transparency allows editors to assess whether Generative AI tools have been used and whether they have been used responsibly. The journal retains discretion over the publication of the work to ensure that integrity and standards are upheld.

If an author intends to use an AI tool, they must ensure that the tool is appropriate and robust for their proposed use, and that the terms applicable to the tool provide sufficient safeguards and protections, for example, regarding intellectual property rights, confidentiality, and security.

Authors should not submit manuscripts where Generative AI tools have been used in ways that replace core researcher and author responsibilities. Such cases may be subject to editorial investigation.

The journal does not permit the use of Generative AI for the creation or manipulation of images or original research data in published articles.

The use of Generative AI and AI-assisted technologies in any part of the research process should always be undertaken with human oversight and transparency.

Editors and Reviewers

We strive for the highest standards of editorial integrity and transparency. Editors’ and peer reviewers’ use of manuscripts in Generative AI systems may pose risks to confidentiality, proprietary rights, and data, including personally identifiable information. Therefore, editors and peer reviewers must not upload files, images, or information from unpublished manuscripts into Generative AI tools. Failure to comply with this policy may infringe upon the rights of intellectual property holders.

Editors

Editors are responsible for the quality and integrity of research content. Therefore, editors must maintain the confidentiality of submissions and the peer review process.

The use of manuscripts in Generative AI systems may give rise to risks around confidentiality, infringement of proprietary rights, and data, among other risks. Therefore, editors must not upload unpublished manuscripts, including any associated files, images, or information, into Generative AI tools.

Editors should check with their journal contact at Taylor & Francis before using any Generative AI tools unless they have already been informed that the tool and the proposed use of the tool are authorised. Journal editors should refer to our Editor Resource page for more information on our code of conduct.

Peer Reviewers

Peer reviewers are selected experts in their field and must not use Generative AI to analyse or summarise submitted articles, in whole or in part, when preparing their reviews. Accordingly, peer reviewers must not upload unpublished manuscripts or project proposals, including any associated files, images, or information, into Generative AI tools.

Generative AI may be used only to assist with improving the language of a review; peer reviewers remain responsible at all times for the accuracy and integrity of their reviews.