How AI-generated podcasts could transform – and complicate – research communication

By: Cassidy Delamarter, USF College of Education
As generative AI becomes increasingly woven into our daily lives, a new study highlights
a less-discussed but rapidly growing area of impact: how researchers share their
work with the public. A team of qualitative researchers in the USF College of Education
examined the promise and pitfalls of using AI-assisted podcasting tools to translate
complex findings into formats that are more accessible, conversational and far-reaching.
The project started organically after Paul Sauberer, a USF graduate student, used
AI to create an audio version of Assistant Professor Lorien Jordan’s published research.
“Hearing my own work in a podcast style kind of blew my mind, and immediately sparked
questions about how tools like this might expand the ways we produce, interpret, and
communicate qualitative research. That curiosity became the foundation of this
paper,” Jordan said.
In collaboration with Sauberer and Professor Jennifer Wolgemuth, Jordan experimented
with three generative AI podcasting platforms, each offering features like automated
scriptwriting and AI-generated audio. Their goal was to investigate whether these
tools could help researchers communicate findings more efficiently and effectively.
“As academic success becomes increasingly tied to digital visibility, tools like AI
podcasting may become more common,” Jordan said. “We found that AI-generated podcasting
shows real potential to expand the reach of qualitative health research, but not without
introducing new forms of labor, risks and responsibilities.”
Published in Qualitative Health Research, the study argues that effective use of AI tools depends less on the technology itself
and more on the critical engagement of the humans behind it. The researchers found the platforms
made it easier to translate research into engaging, conversational episodes, but also
required careful editing, ethical scrutiny and an understanding of how AI systems
shape content. The team describes AI not as a shortcut, but as a collaborator whose
output must be constantly evaluated for accuracy and bias. They also recommend that each
user build AI literacy to understand how generative models work, what assumptions
they carry and how to guide them responsibly.
“We invite researchers to explore new and creative ways of sharing their work, while
also reflecting on the ethics, biases and responsibilities that come with using AI
in public-facing scholarship,” Jordan said.
