Artificial Intelligence in Learning: Enhancing or Undermining Educational Development?

Artificial intelligence (AI) has rapidly become embedded in contemporary education, transforming how students learn, study, and demonstrate knowledge. Across bachelor’s, master’s, and doctoral levels, students increasingly rely on AI-powered tools to brainstorm ideas, clarify complex concepts, summarize academic readings, and prepare for assessments. This widespread adoption has sparked both optimism and concern. While AI promises to enhance efficiency and productivity, educators and researchers are now grappling with a fundamental question: how does AI use shape learning itself?
Learning is the foundation of long-term productivity, innovation, and professional competence. If AI alters how learners acquire skills and knowledge, its implications extend far beyond academic performance into future workforce readiness. Crucially, emerging evidence suggests that the educational impact of AI depends less on whether students use AI and more on how they use it. AI is neither inherently supportive nor inherently harmful to learning; its effects depend on how it is designed and how learners engage with it.
Forms of Artificial Intelligence in Education
AI is not a single technology but an umbrella term encompassing multiple approaches. These include rule-based systems grounded in symbolic AI, probabilistic and statistical models derived from machine learning, and neural networks employing deep learning architectures capable of capturing complex patterns in large datasets (D’Mello & Graesser, 2023; Zapata-Rivera & Arslan, 2024). In recent years, generative AI—particularly large language models (LLMs)—has attracted significant attention due to its ability to generate human-like text, solve problems, and simulate dialogue.
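To make these paradigms concrete, the following minimal Python sketch contrasts a hand-written rule-based check with a simple statistical one. Everything in it, from the rules to the toy statistic, is invented for illustration and not drawn from any real tutoring system.

```python
# Illustrative contrast between two of the AI paradigms described above.
# All rules, thresholds, and data are invented for demonstration.

def rule_based_feedback(answer: float, correct: float = 42.0) -> str:
    """Symbolic AI: hand-written rules map an answer to feedback."""
    if answer == correct:
        return "Correct."
    if answer == -correct:
        return "Check the sign of your result."
    return "Revisit the problem setup."

def statistical_feedback(answer: float, past_answers: list[float]) -> str:
    """Machine-learning flavor: feedback driven by patterns in data
    (here, a trivial statistic over earlier students' answers)."""
    mean = sum(past_answers) / len(past_answers)
    spread = max(past_answers) - min(past_answers)
    if abs(answer - mean) > spread:
        return "Your answer is unusual compared with most attempts."
    return "Your answer is in the typical range; verify it independently."

print(rule_based_feedback(-42.0))                      # sign hint
print(statistical_feedback(40.0, [41.5, 42.0, 39.8]))  # typical range
```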
In educational contexts, these systems function as tutors, feedback providers, writing assistants, and conversational partners. Their accessibility and responsiveness have made them appealing to students seeking immediate support. However, the pedagogical consequences of this support are not straightforward. While AI can enhance short-term performance, its effects on deeper learning and skill development are more complex.
Learning as a Driver of Long-Term Productivity
Learning involves more than task completion; it requires the gradual acquisition of knowledge, skills, and judgment through cognitive effort and practice. This process is essential for long-term productivity, particularly in a world where AI systems remain fallible. Generative AI can produce outputs that appear confident and coherent yet contain inaccuracies, biases, or fabricated information—a phenomenon often described as “hallucinations” (Hicks et al., 2024; Perković et al., 2024).
Because of these limitations, effective use of AI requires users to critically evaluate outputs, identify errors, and apply domain-specific knowledge. If learners bypass these cognitive processes, they risk becoming dependent on tools they do not fully understand, weakening their ability to perform independently when AI support is unavailable.
Learning that prioritizes short-term efficiency over cognitive effort may preserve performance in the moment while silently undermining the skill development required for sustained productivity. It is in this trade-off that the risks of AI-driven disruption become most visible.
When AI Disrupts Learning
Concerns about AI undermining learning are not new. Historically, automation has often improved immediate performance while eroding underlying human skills. A frequently cited example comes from aviation, where overreliance on autopilot systems prompted regulatory bodies to recommend reduced automation use to ensure pilots retained essential manual flying skills (Federal Aviation Administration [FAA], 2013). Similar trade-offs may emerge in education when AI automates cognitive tasks central to learning.
Recent experimental research illustrates this risk. In a large-scale field experiment involving high school mathematics students, access to a generative AI tutor significantly improved problem-solving performance during practice sessions. However, when AI access was removed, students who had relied heavily on unrestricted AI performed worse than peers who never had access. These findings suggest that AI can function as a cognitive “crutch,” enabling performance gains while weakening independent skill acquisition. Importantly, learning losses were substantially reduced when AI systems were designed with pedagogical safeguards that encouraged reasoning rather than answer generation (Bastani et al., 2025).
This evidence highlights a critical insight: AI design choices matter. Tools that prioritize efficiency without considering learning processes may undermine the very skills education seeks to develop.
The ICAP Framework and Cognitive Engagement
The ICAP framework provides a useful theoretical lens for understanding how AI affects learning (Chi & Wylie, 2014). This model categorizes learning activities into four levels based on cognitive engagement (a short illustrative sketch follows the list):
- Passive: Receiving information without active processing (e.g., reading or listening).
- Active: Engaging in surface-level manipulation (e.g., highlighting text).
- Constructive: Generating new ideas or explanations beyond provided material.
- Interactive: Collaboratively building knowledge through dialogue and shared reasoning.
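As a rough illustration, the four levels can be encoded in a few lines of Python. The encoding and the example activities are our own, not part of the framework itself.

```python
from enum import Enum

# Our own illustrative encoding of the ICAP levels (Chi & Wylie, 2014).
class ICAP(Enum):
    PASSIVE = 1       # receiving information without processing
    ACTIVE = 2        # surface-level manipulation
    CONSTRUCTIVE = 3  # generating ideas beyond the material
    INTERACTIVE = 4   # co-constructing knowledge through dialogue

EXAMPLES = {
    "reading an AI-generated summary": ICAP.PASSIVE,
    "highlighting key terms in AI output": ICAP.ACTIVE,
    "writing your own explanation, then comparing it with the AI's": ICAP.CONSTRUCTIVE,
    "debating an interpretation with an AI tutor": ICAP.INTERACTIVE,
}

for activity, level in EXAMPLES.items():
    print(f"{level.name:>12}: {activity}")
```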
Research consistently shows that constructive and interactive engagement lead to the strongest learning outcomes. These modes require learners to explain, connect, critique, and synthesize ideas—processes central to deep understanding. AI threatens learning when it displaces these higher-level activities. If students outsource explanation, argumentation, or synthesis to AI systems, they may shift from constructive or interactive engagement to more passive modes. Lightly editing an AI-generated response may appear productive, but it bypasses the cognitive effort necessary for durable learning (Bauer et al., 2025). Over time, this pattern can erode domain knowledge, leaving learners ill-equipped to detect inaccuracies or biases in AI outputs. Bias becomes especially consequential when AI use shifts learners toward passive or surface-level engagement, reducing the likelihood that they will critically evaluate, question, or correct distorted outputs.
Bias, Hallucinations, and the Need for Domain Knowledge
AI systems, particularly LLMs, generate responses based on probabilistic patterns in training data rather than verified representations of reality. As a result, their outputs may reflect cultural, gender, or linguistic biases present in those datasets (Bender et al., 2021; Kotek et al., 2023; Tao et al., 2024). Additionally, variations in users’ language or dialect can influence the quality and fairness of generated responses (Hofmann et al., 2024).
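The mechanism is easiest to see in miniature. The toy bigram model below is trained on a deliberately skewed three-sentence corpus of our own invention; it simply reproduces the statistical associations it was fed. Real LLMs are vastly larger, but they inherit patterns from their training data in the same spirit.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": it can only reproduce statistical patterns
# in its (tiny, deliberately skewed) training corpus.
corpus = ("the engineer fixed the server . "
          "the engineer wrote the code . "
          "the nurse helped the patient .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its frequency after `prev`."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

# "the" precedes "engineer" twice as often as "nurse" in this corpus,
# so sampling reproduces that skew: pattern matching, not verified fact.
print(Counter(next_word("the") for _ in range(1000)))
```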
Without sufficient domain knowledge, learners may struggle to recognize these limitations. Uncritical reliance on AI risks the internalization of biased or incorrect information, a challenge also observed in earlier interactions with online sources (Miller & Bartlett, 2012). These risks underscore the importance of maintaining strong foundational knowledge alongside AI use.
Transversal Skills and AI Literacy
Beyond subject-specific expertise, effective engagement with AI requires a set of transversal skills essential for the twenty-first century. These include critical thinking, problem-solving, information literacy, collaboration, communication, and creativity (Fiore et al., 2018; Van Laar et al., 2020). AI literacy has emerged as a particularly important component, encompassing an understanding of AI’s capabilities, limitations, ethical implications, and appropriate use (Ng et al., 2021; Yan et al., 2024).
These skills are not only necessary for navigating AI but also for addressing broader societal challenges such as misinformation, rapid technological change, and complex socio-scientific issues. Educational systems therefore face the dual challenge of integrating AI while simultaneously strengthening learners’ capacity to engage with it critically.
When AI Strengthens Learning
Despite these concerns, AI can enhance learning when used in ways that preserve cognitive engagement. Studies show that when students interact with AI as a dialogic partner—questioning, critiquing, and refining ideas—learning outcomes improve. In such cases, AI supports constructive and interactive engagement rather than replacing it. Students demonstrate stronger argumentation skills, deeper conceptual understanding, and improved critical thinking.
Here, AI functions less as a shortcut and more as a scaffold, prompting reflection and offering alternative perspectives. These benefits are most evident when learners retain responsibility for sense-making and judgment.
Principles for Learning-Supportive AI Use
To prevent learning erosion, both students and educators must adopt intentional strategies for AI use. A useful guiding question is whether the AI is doing the learning or supporting the learner’s thinking. Several practices can help maintain this balance:
First, learners should begin with their own attempts before consulting AI. Drafting an outline or initial explanation ensures there is a cognitive baseline against which AI feedback can be evaluated.
Second, AI should be used to challenge thinking rather than replace it. Asking for counterarguments, identifying weaknesses, or exploring alternative interpretations keeps the learner actively engaged.
Third, making AI engagement visible—through reflective summaries or explanations of how AI feedback was used—reinforces accountability and metacognition.
Finally, educators should clarify roles explicitly: idea generation, reasoning, and evaluation remain human responsibilities, while AI can offer examples, feedback, or perspectives to be critically assessed.
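As a minimal sketch of the first two practices, the hypothetical wrapper below sends the learner’s own draft to a chat model and asks for critique rather than answers. Here `ask_model` is an assumed placeholder rather than a real API, and the prompt wording is our own.

```python
# Hypothetical sketch of "attempt first, then ask AI to challenge your
# thinking." `ask_model` stands in for any chat-model API call.

SOCRATIC_PROMPT = (
    "Do not provide a final answer or a rewritten solution. Instead: "
    "(1) point out the weakest step in my reasoning, "
    "(2) offer one counterargument, and "
    "(3) ask one question that helps me check my own work.\n\n"
    "My attempt:\n{attempt}"
)

def ask_model(prompt: str) -> str:
    # Placeholder: substitute a call to your chat model of choice.
    return "(model critique would appear here)"

def challenge_my_attempt(attempt: str) -> str:
    """Refuse to run without the learner's own draft, then request
    critique so reasoning and evaluation stay with the learner."""
    if not attempt.strip():
        raise ValueError("Write your own attempt before consulting the AI.")
    return ask_model(SOCRATIC_PROMPT.format(attempt=attempt))

print(challenge_my_attempt("I think the integral diverges because..."))
```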
Conclusion
Artificial intelligence is already a central feature of students’ academic lives, reshaping how learning occurs across disciplines and educational levels. Its impact, however, is not inherently positive or negative. AI can either undermine or enhance learning depending on how it is designed, implemented, and used.
When AI displaces cognitive effort, it risks weakening skill acquisition and long-term productivity. When it supports constructive and interactive engagement, it can deepen understanding and foster critical thinking. Anchoring AI use within frameworks such as ICAP and prioritizing transversal skills and AI literacy offers a pathway toward responsible integration.
Ultimately, the challenge is not to restrict AI from education but to ensure it becomes a partner in learning rather than a shortcut around it. By keeping students cognitively engaged, educators can harness AI’s potential while safeguarding the core processes that make learning meaningful and enduring.
References
- Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. Proceedings of the National Academy of Sciences of the United States of America, 122(26), e2422633122. https://doi.org/10.1073/pnas.2422633122
- Bauer, E., Sailer, M., Niklas, F., Greiff, S., Sarbu-Rothsching, S., Zottmann, J. M., ... & Fischer, F. (2025). AI-based adaptive feedback in simulations for teacher education: An experimental replication in the field. Journal of Computer Assisted Learning, 41(1), e13123. https://doi.org/10.1111/jcal.13123
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922
- Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823
- D’Mello, S. K., & Graesser, A. (2023). Intelligent tutoring systems: How computers achieve learning gains that rival human tutors. In Handbook of Educational Psychology (pp. 603–629). Routledge. https://doi.org/10.4324/9780429433726
- Federal Aviation Administration. (2013). Safety alert for operators: Manual flight operations (Tech. Rep. SAFO 13002).
- Fiore, S. M., Graesser, A., & Greiff, S. (2018). Collaborative problem-solving education for the twenty-first century workforce. Nature Human Behaviour, 2(6), 367–369. https://doi.org/10.1038/s41562-018-0363-y
- Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 38. https://doi.org/10.1007/s10676-024-09775-5
- Hofmann, V., Kalluri, P. R., Jurafsky, D., & King, S. (2024). Dialect prejudice predicts AI decisions about people’s character, employability, and criminality. arXiv preprint arXiv:2403.00742. https://doi.org/10.48550/arXiv.2403.00742
- Kotek, H., Dockum, R., & Sun, D. (2023). Gender bias and stereotypes in large language models. In Proceedings of the ACM Collective Intelligence Conference (pp. 12–24). https://doi.org/10.1145/3582269.3615599
- Miller, C., & Bartlett, J. (2012). ‘Digital fluency’: Towards young people’s critical use of the internet. Journal of Information Literacy, 6(2). https://doi.org/10.11645/6.2.1714
- Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, Article 100041. https://doi.org/10.1016/j.caeai.2021.100041
- Perković, G., Drobnjak, A., & Botički, I. (2024). Hallucinations in LLMs: Understanding and addressing challenges. In 2024 47th MIPRO ICT and Electronics Convention (MIPRO) (pp. 2084–2088). IEEE. https://doi.org/10.1109/MIPRO60963.2024.10569238
- Tao, Y., Viberg, O., Baker, R. S., & Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. arXiv preprint arXiv:2311.14096. https://doi.org/10.48550/arXiv.2311.14096
- Van Laar, E., Van Deursen, A. J., Van Dijk, J. A., & De Haan, J. (2020). Determinants of 21st-century skills and 21st-century digital skills for workers: A systematic literature review. SAGE Open, 10(1). https://doi.org/10.1177/2158244019900176
- Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839–1850. https://doi.org/10.1038/s41562-024-02004-5
- Zapata-Rivera, D., & Arslan, B. (2024). Learner modeling interpretability and explainability in intelligent adaptive systems. In Mind, Body, and Digital Brains (pp. 95–109). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-58363-6_7