Policy update regarding the use of Large Language Models (LLMs)

Effective as of 23 March 2026.

We are updating our policy on the use of large language models (LLMs) in work submitted to, or prepared for, the Review of Finance.

Referee reports

Like all academic journals, our peer review process depends on the expertise, judgment, and integrity of the scholars who serve as referees. When we invite someone to review a paper, we are asking for their own professional assessment.

LLMs may be used in a supportive way, but they must not be used as a substitute for a referee’s own evaluation, interpretation, or recommendations. The substance of every report should reflect the referee’s personal analysis and conclusions.

Submitted manuscripts

Authors are responsible for the originality, accuracy, and integrity of their work. LLMs must not replace an author’s intellectual contribution, analysis, or interpretation.

AI tools may be used in a supportive role (for example, to improve language, assist with coding, or support data handling), but their use must be transparent.

The Review of Finance follows the Oxford University Press ethics guidelines. These require that any use of AI tools (for example, in content generation, coding, or data analysis) be disclosed in the cover letter and acknowledged in the manuscript (for example, in the Methods or Acknowledgements section).
