Graduate Researcher, University of California, Davis, California
As large language models (LLMs) like ChatGPT make their way into academic environments, a new generation of graduate students is navigating research and scientific writing with an unconventional mentor: artificial intelligence. From polishing grammar to generating entire paragraphs and research protocols, these tools offer unprecedented assistance in scientific research and communication. But where is the line between acceptable support and undue intellectual contribution? This talk explores the complex and evolving role of LLMs in graduate education, highlighting the ethical, practical, and research challenges of AI-assisted science and writing. While avoiding AI use altogether may protect against plagiarism and preserve the integrity of authorship, it may also hinder accessibility, especially for non-native English speakers and those seeking support with basic editing and translation. Blanket restrictions and negative views of AI also stigmatize students who use it and can worsen existing inequities in academia. As AI tools become more capable and accessible, this conversation grows increasingly urgent. What does it mean to “write your own dissertation” in an era when human- and machine-generated text are often indistinguishable? This session invites graduate students, advisors, reviewers, and institutions to reflect on authorship, mentorship, and the future of scientific writing and education in the age of artificial intelligence.