Teaching Responsible AI for First Year University


Benjamin Farenhorst (Corresponding Author)
Published: 07/05/2026
Keywords: generative AI; first-year university; responsible AI use; self-regulated learning; cognitive debt; action research

Generative AI tools have spread through first-year university classrooms faster than institutions have produced guidance for their use, and current responses tend to focus on detecting AI-generated work rather than teaching students how to use AI responsibly. This research-in-progress proposal reframes responsible AI use as a teachable academic skill in the first year of university. The argument builds on Haidt's (2024) account of Generation Z as overprotected in the physical world and underprotected in the digital one, and extends it to higher education: a cohort already shaped by an underregulated digital environment now meets generative AI under similarly unregulated conditions. If first-year courses are to develop the cognitive capacities they claim to develop, AI use within them needs to be regulated at the course level rather than left to individual students. The proposal also draws on cognitive offloading, self-regulated learning, retrieval practice, and recent work on cognitive debt to ground its design.

The proposed action research study introduces three mechanisms in first-year undergraduate courses: optional source restrictions that anchor AI-assisted work to course materials, in-progress comprehension check-ins that function as embedded retrieval practice, and brief personalised in-class verification activities derived from each student's own submission. A short AI literacy sequence frames these mechanisms as skill development rather than surveillance. A quasi-experimental mixed-methods design will compare structured-engagement and unstructured-access conditions, measuring critical thinking, self-regulated learning behaviours, responsible AI use, and course performance, with instructor interviews and student focus groups providing qualitative depth. No outcome data are reported. The anticipated contribution is a practice-ready protocol, deliverable through standard learning management system tools, for teaching responsible AI use as a first-year university skill.


References

Abbas, M., Jam, F. A., & Khan, T. I. (2024). Generative artificial intelligence and critical thinking: Double-edged effects of information literacy and cognitive fatigue on university students. Thinking Skills and Creativity, 53, Article 101579. https://doi.org/10.1016/j.tsc.2024.101579

Bastani, H., Jones, M., Lifshitz-Assaf, H., & Webb, T. (2024). Generative AI, productivity, and learning: Evidence from a randomized field experiment. Science, 384(6693), 742–747. https://doi.org/10.1126/science.adk7899

Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444. https://doi.org/10.1146/annurev-psych-113011-143823

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148

Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331(6018), 772–775. https://doi.org/10.1126/science.1199327

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., & Kasneci, G. (2023). ChatGPT for good? Opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task [Preprint]. arXiv. https://arxiv.org/abs/2506.08872

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education.

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2024). Generative artificial intelligence in K–12 education: A systematic review. Frontiers in Education, 9, Article 1298457. https://doi.org/10.3389/feduc.2024.1298457

Panadero, E. (2017). A review of self-regulated learning: Six models and four directions for research. Frontiers in Psychology, 8, Article 422. https://doi.org/10.3389/fpsyg.2017.00422

Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002

Roediger, H. L., III, & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27. https://doi.org/10.1016/j.tics.2010.09.003

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2

Benjamin Farenhorst (Corresponding Author)

Affiliation: Doctoral Student in Artificial Intelligence, Golden Gate University

Organization: Golden Gate University

Country: Canada

