DFG Perspectives on the Role of Artificial Intelligence in Research Practice, Part I

How significant are developments in artificial intelligence for open access and open research? With a series of blog posts, the project open-access.network responds to recent questions and discussions about the relationship between these fields.

In the coming weeks, we will be publishing three DFG blog posts in this series, dealing with the significance of artificial intelligence for good research practice and open science:

  • Part I: The Role of Artificial Intelligence in Research Practice 
  • Part II: The Relations Between Open Science and Artificial Intelligence
  • Part III: Data Tracking and Artificial Intelligence 

Part I: DFG Perspectives on the Role of Artificial Intelligence in Research Practice

The rapid development of generative artificial intelligence (AI) is increasingly changing research practice. The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) has issued statements addressing the impact of these technologies on research. With a strategic funding initiative in the field of AI, it promotes the use of AI methods. Launched in 2019, this initiative is driving AI research forward through calls for proposals and other targeted measures. Throughout, the DFG takes a differentiated view of AI: it recognizes the technology's great potential while also warning of the associated risks.

The DFG sees generative AI as a useful tool that can support a wide range of processes in everyday research practice. The opportunities include:

  • Efficiency gains: AI can take on repetitive tasks such as summarizing scientific literature or creating visualizations, thereby saving valuable time.
  • Linguistic support: Especially for researchers who do not publish in German or English, AI can assist with drafting and editing.
  • New methodological approaches: AI-supported procedures open up new research approaches in data analysis, modelling, and simulation.

In addition to the opportunities, the DFG identifies clear risks and calls for the responsible use of generative AI:

  • Transparency: Anyone who uses AI must disclose the extent to which this was done and for what purpose. This is crucial for the verifiability of research.
  • Responsibility: Responsibility for scientific content remains with the researchers, even if parts of this content were created by AI. Authorship is and remains human.
  • Plagiarism: Because AI systems may reproduce protected content unnoticed, it is essential to check for copyright infringements.
  • Confidentiality: The handling of confidential proposal documents is especially sensitive. For this reason, the use of generative AI in DFG review procedures is currently not permitted.
  • Quality assurance: AI-generated content must be professionally checked, corrected, and, if necessary, revised; there is no place for blind faith here.

The DFG currently assesses the use of AI neutrally – context and responsible use are decisive. AI can be a valuable aid, but it cannot replace the critical thinking, experience, and ethical responsibility of researchers. Transparency, integrity, and quality remain the guiding principles of research practice – even in the age of AI.

AI Research

The DFG promotes research in all disciplines and in all its forms. As funding is research-driven, the choice of research topics is the responsibility of the researchers. Given the boom in AI in recent years, it is only natural that the DFG funds numerous AI-related projects.

Excellent research on AI will expand society’s knowledge of AI and strengthen our ability to deal with it. This applies not least to critical aspects of AI and to its ethical, political, and social implications. The DFG’s core mission is to promote research. Nonetheless, within the framework of “external” scientific communication, the DFG also endeavours to reach larger parts of society and increase knowledge of AI there.¹ In addition, through the transfer of knowledge between science and industry, findings from basic research can make a concrete contribution to the development of AI-based products in various sectors.

AI and Good Research Practice

The question of the extent to which AI technologies can help to “uphold” the rules of good research practice or comply with them is too narrow. Rather, we should assume that there are reciprocal influences.

On the one hand, it goes without saying that the use of AI applications does not release researchers from their obligation to comply with the fundamental principles of scientific integrity codified, for example, in the DFG Guidelines for Safeguarding Good Research Practice (DFG, 2025). For example, the use of generative AI (chatbots and the like) to draft scientific texts raises the question of authorship. The DFG takes the view that “only the natural persons responsible can appear as authors in research publications” (DFG, 2023, p. 2).

On the other hand, it is by no means inconceivable that the development of AI technologies and their practical application in research will in turn contribute to a change in the standards of research practice – just as changing work methods in research in the course of the digital transformation were instrumental in bringing about the current version of the DFG Code of Conduct (DFG, 2025). Research practice, on the one hand, and normative discussions and rule-making, on the other, are reciprocally interrelated.


¹ For example, the event “Mensch und Maschine – Wie Künstliche Intelligenz uns verändert” [Humans and Machines – How Artificial Intelligence Is Changing Us] (26 November 2024) in the talk series “Enter Science” of the DFG.


References


Suggested citation

Bilic-Merdes, M., Brandt, S., & Lentze, M. (2025). Perspektiven der DFG auf KI und Open Access, Teil I. Die Rolle Künstlicher Intelligenz in der wissenschaftlichen Praxis [DFG Perspectives on AI and Open Access, Part I. The Role of Artificial Intelligence in Research Practice]. open-access.network. https://doi.org/10.64395/tp75r-24v13


This article is licensed under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).

