Comparing Gamified and Traditional Assessment Environments: A Quasi-Experimental Study in a University Python Course
DOI: https://doi.org/10.24368/jates418
Keywords: Gamification, evaluation, exam, measurement, University
Abstract
This study examines student-performance outcomes by comparing two distinct assessment environments—a traditional paper-based exam and a complex gamified digital format—in a university-level introductory Python programming course. A quasi-experimental comparison with student self-selection was conducted at John von Neumann University (Hungary) with 63 first-year Information Technology students. Twenty-seven students took a conventional paper-based exam, while 36 completed the assessment in CodingUs, a custom-built “Among Us”-inspired web application. This gamified condition operated as a package intervention, incorporating not only game design elements but also individualized AI-generated tasks, disabled clipboard operations, and a distinct user interface. Isomorphic Python tasks were produced by an AI-assisted generation pipeline using GPT-4o-mini and GPT-4o. Performance was compared using the Mann–Whitney U test as the primary procedure, with an independent-samples t-test as a supplementary parametric analysis. The two groups did not differ significantly in mean performance (gamified: M = 63.06%, SD = 31.61; traditional: M = 68.89%, SD = 34.68; Mann–Whitney U = 423.50, p = .383; t(61) = −0.70, p = .490; Cohen’s d = −0.18; 95% CI for the mean difference [−22.61, +10.94]). While no statistically significant difference in performance was detected in this sample, the wide confidence interval and the self-selection nature of the design preclude claims of equivalence. Informal classroom observations and unsolicited student feedback offered preliminary indications of elevated engagement and favourable perceptions of the anti-cheating provisions in the gamified cohort; because no validated self-report instrument was administered, these impressions are reported as exploratory rather than confirmatory. 
The study contributes a replicable AI-supported pipeline for generating isomorphic programming items and motivates further research employing randomised allocation and validated measurement instruments.
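The comparison described in the abstract (Mann–Whitney U as the primary test, an independent-samples t-test as a supplementary check, and Cohen's d as an effect size) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis script: the score arrays below are synthetic placeholders generated to roughly match the reported group sizes, means, and standard deviations, not the study's data.

```python
# Sketch of the two-group comparison reported in the abstract.
# The scores are SYNTHETIC (random draws), not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical percentage scores, clipped to the 0-100 range
gamified = np.clip(rng.normal(63.06, 31.61, 36), 0, 100)      # n = 36
traditional = np.clip(rng.normal(68.89, 34.68, 27), 0, 100)   # n = 27

# Primary procedure: two-sided Mann-Whitney U test
u_stat, u_p = stats.mannwhitneyu(gamified, traditional, alternative="two-sided")

# Supplementary parametric analysis: Student's independent-samples t-test
t_stat, t_p = stats.ttest_ind(gamified, traditional)

# Cohen's d with the pooled standard deviation
n1, n2 = len(gamified), len(traditional)
pooled_sd = np.sqrt(((n1 - 1) * gamified.std(ddof=1) ** 2
                     + (n2 - 1) * traditional.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (gamified.mean() - traditional.mean()) / pooled_sd

print(f"Mann-Whitney U = {u_stat:.2f}, p = {u_p:.3f}")
print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {t_p:.3f}, Cohen's d = {d:.2f}")
```

With the study's actual raw scores in place of the synthetic arrays, this procedure would reproduce the reported statistics; the nonparametric test is preferred here because exam-score distributions with large standard deviations relative to a 0-100 scale are unlikely to be normal.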
License
Copyright (c) 2026 József Cserkó

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The submitting author warrants that the submission is original and that he or she is the author of the submission
together with the named co-authors; to the extent the submission incorporates text passages, figures, data, or
other material from the work of others, the submitting author has obtained any necessary permissions.
Articles in this journal are published under the Creative Commons Attribution Licence (CC BY); the author retains
the copyright. By submitting an article, the author grants this journal the non-exclusive right to publish it,
while remaining free to reuse the work elsewhere (e.g., to post it to an institutional repository or publish it in a book).
