Cultivating AI-Literate Researchers

Summer Course Bridges Evidence Synthesis and Emerging Technologies

by Sarah Bender
It’s no secret that academic research increasingly includes the use of AI.

AI tools like Scopus AI and Scite can be useful when exploring and synthesizing existing research, and using algorithms to work with data can automate repetitive tasks and increase efficiency. Generative AI can aid in brainstorming, organizing, and editing.

But research about AI literacy and its impact on education is an emerging area, and it’s rapidly evolving. When Libraries AI specialist and STEM Librarian Haoyong Lan offered his first interdisciplinary research course in summer 2025, this was the topic he wanted to give his students the chance to explore.

“I designed my course to help undergraduates cultivate their identities as researchers, while encouraging them to develop the AI literacy skills they’ll need to succeed in today’s world,” Lan explained. “Through conducting real-world, cutting-edge, interdisciplinary research, I wanted students to learn to assess and contribute to a conversation that will be increasingly relevant to them both academically and professionally.”


Core Literacies

Each Wednesday of the nine-week course, 22 students met virtually to dive into existing research and think about the direction of their own investigations. For many, it marked the first time they had considered AI beyond its technical foundations, thinking about responsible use rather than writing code or building models.

“I was not familiar with the term ‘AI literacy’ before the course,” said Dietrich College of Humanities and Social Sciences junior Rachel Quaye-Asamoah, who is majoring in Statistics & Machine Learning. “But I had found my peers increasingly relying on LLM technology (like OpenAI's ChatGPT, Google's Gemini, and DeepSeek) the semester prior, so the concept was something I was interested in.”

After sharing the basics of AI literacy, Lan invited his class to begin developing their own relevant research questions. They identified topics like how generative AI affects academic integrity, how universities should craft policies around AI use, and what ethical considerations are missing from current undergraduate education.

To explore these questions in the context of recent studies about AI literacy, Lan then introduced evidence synthesis: a comprehensive, systematic methodology for gathering, screening, and synthesizing existing literature to answer a research question and identify research gaps.

“The course helped me understand that systematic reviews are very important for understanding gaps in certain topics,” Quaye-Asamoah explained. “This kind of scoping review was very, very fun for me, and I would love to do more in the field, especially topics that bridge the gap between something as technical as generative AI or computer engineering and the humanities, like children's education or maybe even history.”

“Evidence synthesis is conclusive, comprehensive, and unbiased,” Lan added. “It’s used every day by policymakers, as well as researchers in the social and health sciences — regardless of discipline, it’s a very sought-after skill.”


Moving Research Forward

For their final project, one group focused on the question: “How does the knowledge of successful prompt engineering affect information-seeking behavior and conceptual understanding for undergraduate students?”

“We were really interested in this topic, and found two major outcomes,” said Mellon College of Science junior Allison Ma, who is majoring in Mathematical Sciences. “First, the most successful strategy for prompting a generative AI agent is to use a single sentence to summarize your question.”

The group’s research revealed that other methods, like copying and pasting a longer question from a homework assignment or sharing a list of related questions, led to a lower level of conceptual understanding among undergraduate students. In other words, simpler, more focused prompts resulted in deeper learning.

Her group also found a gap in the literature about the relationship between successful prompt engineering and information-seeking behavior in undergraduate students. “This could be a future research topic,” she explained.

Like Ma’s team, Quaye-Asamoah and her group identified a gap in the literature. They explored how generative AI affects undergraduate creativity and problem-solving in fields like computer science and computer engineering, and concluded that more work is needed to reveal the technology’s full impact on undergraduate education.

Lan predicts that the field will keep evolving as more researchers complete studies — and he’s planning to teach his course again in the future to keep up with the most recent literature.

“Collaborating with the Office of the Vice Provost for Education on this course was a rewarding experience for me, and I know it was rewarding for the students as well,” he said. “They went from knowing nothing about evidence synthesis review methodologies to being confident about leading an evidence synthesis project team. It’s exciting to see students become emerging scholars in such a short time, and in such a critical field.”