The integration of large language models (LLMs) such as chatbots into scientific writing has opened a complex, multifaceted dialogue about authorship, knowledge production, and ethical responsibility. A recent study by Calderon and Herrera (2025) delves deeply into this conversation, framing the interaction between human scientists and AI chatbots as a polyphonic exchange—a layered chorus blending human insight with machine-generated content. This metaphor illuminates the evolving dynamics of scientific communication in an age increasingly mediated by artificial intelligence, foregrounding both opportunities and challenges that demand urgent reflection.
At the heart of this discourse lies a distinction echoing the classical philosophy of Socrates. The researchers draw on Socrates’ differentiation between rhetorical persuasion and epistemic truth, contrasting the “maker of speeches,” whose intent is to convince, with the writer grounded in knowledge of the truth. This ancient dichotomy becomes highly relevant when considering chatbot outputs. While human scientists contribute rigorously verified knowledge—termed episteme—the chatbot voices often mirror doxa, or public opinion, shaped by probabilistic models rather than direct engagement with factual truth. This dissonance exposes the risks of illusory knowledge entering scientific texts through uncritical chatbot-assisted writing.
For novices or those less versed in scientific methodologies, the use of chatbots may engender a dangerous illusion of wisdom. The false sense of competence gained by accepting machine-generated assertions without scientific scrutiny can fuel misinformation and undermine the rigorous epistemic standards of research. Calderon and Herrera emphasize that learning without a knowledgeable teacher equates to practicing science without the necessary critical counterbalance, thereby limiting genuine knowledge advancement and amplifying doxa rather than fostering sophia, or true wisdom.
However, the authors do not dismiss the potential for symbiotic interactions between humans and machines. A skilled scientist, capable of harmonizing the distinct yet complementary voices of human reasoning and algorithmic processing, may produce a richer and more robust scientific narrative. This “harmonic song” acknowledges the productive contributions that chatbots can make, especially in routine or repetitive scientific tasks, while underscoring the necessity of transparency and ethical vigilance to prevent the overshadowing of authentic human intellectual labor.
Facing the unstoppable momentum of AI integration in research, the authors argue that total exclusion of chatbots from scientific authorship is impractical. Recent empirical data indicate that a majority of researchers already employ chatbots extensively, often without disclosure. Such secrecy generates significant fairness and legal concerns for academic publishers and peer reviewers. Assuming that authors are either fully transparent about chatbot use or abstaining from it entirely is no longer viable given the current landscape of scientific production.
Compounding this dilemma is the current technological challenge of reliably detecting AI-generated texts within scholarly articles. Unlike student essays—where preliminary detection tools are being explored—the scale and complexity of published research make automated identification of chatbot involvement difficult and arguably not cost-effective for publishers to invest in. The irreversibility of publishing decisions and the reputational stakes further incentivize publishers to enforce transparency measures proactively rather than rely on post-publication detection.
On a theoretical level, the distinction between human scientific praxis and chatbot activity is crucial. Human researchers engage both in praxis—deliberate reflective action aimed at truth and communal understanding—and poiesis, creative production involving procedural execution. While chatbots lack the capacity for praxis, including the reflexive logos necessary for meaning-making and shared reality construction, they can mimic poietic elements embedded in scientific workflows, such as drafting, summarizing, or initial ideation. This blend holds promise for accelerating certain scientific tasks but bears inherent risks that require clear methodological protocols to maintain scientific integrity.
The call to develop comprehensive ethical frameworks emerges as a central tenet of the authors’ argument. They advocate neither blanket bans nor laissez-faire approaches to chatbot use but rather demand rigorous transparency and accountability norms. Transparency involves explicit disclosure in scientific manuscripts of the chatbot’s identity, model version, specific usage, and the purpose behind its generated content. Without such disclosures, the scientific community risks losing trust in the authenticity and reliability of its knowledge production.
Moreover, accountability necessitates that human authors assume full responsibility for all content generated, including verifying the accuracy of chatbot outputs and properly attributing sources. This mitigates liability gaps where AI might produce errors or omissions that could otherwise go unnoticed. Since scientific publication is a social contract with future readers and collaborators, forward-looking liability frameworks emphasize the accountability of the human agent over the autonomous act of AI generation itself.
On the topic of authorship, Calderon and Herrera invoke a Platonic ideal by emphasizing that authors must not only produce texts but defend and clarify their scientific contributions. Editorial processes should therefore ensure that authors can respond adequately to questions about their work, reinforcing the irreplaceable role of human understanding behind every scholarly article. This principle safeguards the epistemic primacy of genuine human inquiry amidst increasing AI assistance.
This nuanced perspective highlights the complex interplay of ethics, epistemology, and technology reshaping scientific writing in the 21st century. It acknowledges AI’s transformative potential while rigorously defending the principles that underpin credible and responsible science. By framing chatbot integration as a polyphonic process, the study captures the dialogical tensions between mechanized language generation and human knowledge creation, underscoring the need for thoughtful management rather than uninformed exclusion or unrestricted enthusiasm.
As journals and publishers grapple with these emerging realities, policy development must keep pace with technological innovation. The study aligns with calls from other scholars for standardized reporting guidelines and editorial policies that mandate clear AI usage disclosures. Such harmonized frameworks will help mitigate risks of unfair advantage, plagiarism, and misinformation while promoting a culture of openness and trust.
In conclusion, the research by Calderon and Herrera serves as a pivotal intervention at the crossroads of technology and philosophy in science. Their insights remind us that while artificial intelligence tools offer remarkable efficiencies, the essence of scientific knowledge remains rooted in human cognition, ethics, and communal scrutiny. As the boundaries between human and machine authorship blur, balancing innovation with responsibility will define the future trajectory of scientific communication.
Subject of Research: Ethical implications and integration of chatbots in scientific research writing, focusing on transparency, responsibility, and epistemology.
Article Title: And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences.
Article References:
Calderon, R., Herrera, F. And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences.
Humanit Soc Sci Commun 12, 713 (2025). https://doi.org/10.1057/s41599-025-04650-0
Image Credits: AI Generated