Teachers expressed strong confidence in their content and pedagogical knowledge, and many recognized AI’s potential to enhance personalized learning and student engagement.
However, technological knowledge—especially deep familiarity with AI’s capabilities—lagged behind. Ethical awareness was moderate, with some skepticism about AI’s fairness and transparency.
Exploring Teachers’ Perceptions of AI Integration in STEM Education through the TPACK Framework
by ChatGPT-4o
In an era where artificial intelligence (AI) is permeating nearly every sector—including education—understanding educators’ perceptions of AI adoption is crucial, especially within the science, technology, engineering, and mathematics (STEM) disciplines. Moza Alkubaisi’s 2025 study titled “Exploring teachers’ perceptions of integrating artificial intelligence (AI) in STEM education using the TPACK framework: an exploratory case study” provides a timely and insightful investigation into this topic by focusing on a unique demographic: STEM teachers in Qatar’s secondary education system. Using the Technological Pedagogical Content Knowledge (TPACK) framework, the study explores the extent to which teachers feel equipped to incorporate AI tools into their classrooms, how their different knowledge domains interact, and what ethical considerations emerge from AI use in education.
Summary of the Study
The research was conducted via an online questionnaire distributed to 12 STEM teachers from three specialized schools in Qatar. The TPACK framework—comprising Technological Knowledge (TK), Pedagogical Knowledge (PK), Content Knowledge (CK), and their intersections—was employed to measure how these domains affect teachers’ readiness and attitudes toward AI. The study also explored ethical awareness surrounding AI, particularly regarding fairness, inclusiveness, and bias.
Overall, the findings paint a cautiously optimistic picture. Teachers expressed strong confidence in their content and pedagogical knowledge, and many recognized AI’s potential to enhance personalized learning and student engagement. However, technological knowledge—especially deep familiarity with AI’s capabilities—lagged behind. Ethical awareness was moderate, with some skepticism about AI’s fairness and transparency.
Key Findings and Contributions
1. Teachers Are Pedagogically and Content-Ready—but Not Technologically Fluent
The average scores for PK (4.4/5) and CK (4.5/5) were high, indicating strong self-efficacy in these areas. Teachers were comfortable designing student-centered activities and applying real-world content in their lessons. However, their technological knowledge (3.7/5) was only moderate. While most participants could use AI tools for basic tasks, fewer felt confident navigating their deeper functionalities. This finding mirrors earlier research in Estonia and Hong Kong, suggesting a consistent gap between general tech literacy and AI-specific competence in K-12 contexts.
2. Personalization and Student Engagement Drive Interest in AI Tools
A significant majority of participants believed AI tools could increase student engagement and enable more personalized learning pathways. Tools like Intelligent Tutoring Systems (ITS) and platforms like Duolingo were cited as examples that can adapt to individual learning styles, offer timely feedback, and free up teacher time for more strategic interventions. These capabilities align with broader pedagogical goals in STEM to foster independent learning and critical thinking.
3. TPACK Interdependencies Are Crucial
The study validated the TPACK model by revealing how limitations in one domain—technological knowledge—can hinder the effective integration of AI, despite strengths in pedagogy and content. Teachers with strong PK and CK but weaker TK may be unable to align AI tools with curricular goals or evaluate which tools best suit their classroom context. This interdependence underscores the importance of integrated professional development rather than siloed training modules.
4. Ethical Awareness Exists but Requires Strengthening
Most teachers demonstrated basic awareness of AI ethics, with 75% claiming they could spot bias and explain AI decisions. However, only 67% felt AI tools were inclusive, and one-third were neutral about fairness. This ethical ambiguity is troubling, especially given that AI systems are often opaque (“black boxes”) and susceptible to bias embedded in training data. Teachers’ hesitancy suggests the need for robust ethical training and clear guidelines to navigate the risks of data misuse, algorithmic bias, and inequitable access.
Limitations
While the study offers rich insights, its limitations—most notably its small sample size (n=12), absence of demographic data, and reliance on self-reporting—should temper broader generalizations. Moreover, the research was conducted during a time-constrained period overlapping with exam season, which may have affected response rates and depth.
Implications and Recommendations
Alkubaisi’s study holds several implications for policy, practice, and future research:
Targeted Professional Development: Training must focus on AI-specific literacy, not just general digital skills. Teachers need exposure to how AI functions, how it can support pedagogy, and how to ethically implement it.
Curricular Integration: AI should not be treated as an add-on. Instead, as Lee and Perret (2022) suggest, AI modules should be embedded into existing STEM courses to normalize its use and build familiarity among both teachers and students.
Ethics-by-Design in Training: Teachers should be trained to interrogate how AI tools make decisions, understand the datasets used, and evaluate potential risks—particularly in contexts with high cultural diversity and varying levels of access, as is the case in Qatar.
Stakeholder Involvement: Future studies should also include students, administrators, and parents to gather a holistic picture of how AI is perceived and used in schools. This broader lens would allow for better-informed and more sustainable implementation strategies.
Broader Relevance and Conclusion
This study not only contributes to the growing literature on AI in K-12 education but also opens the door for comparative research across different regions, particularly in the Global South. Qatar’s unique position—as a rapidly developing, high-income nation investing in education reform—makes it a compelling case study. However, its challenges in AI integration echo those faced in many parts of the world: uneven tech familiarity, limited ethical guidance, and infrastructure gaps.
Alkubaisi’s work is an important reminder that successful AI integration in schools isn’t merely a technical problem—it’s pedagogical, ethical, and cultural. Teachers remain the fulcrum of educational transformation. Their readiness, confidence, and critical awareness will ultimately determine whether AI becomes a tool for empowerment or a source of inequity.
