Draft of a research proposal on the integration of Artificial Intelligence into educational processes at the School of Advanced Studies (Tyumen, Russia)

Statement of the Problem

The integration of Artificial Intelligence (AI) into higher education offers transformative potential for teaching and learning, yet its adoption is met with notable student resistance. This study addresses the core issue of why students at the School of Advanced Studies (SAS) reject AI tools within their courses, despite evidence of their capacity to support or even enhance academic outcomes. Initial observations reveal a paradox: where AI was used overtly as an independent persona (e.g., a chatbot instructor), student reception was negative, while in contexts where AI was embedded more subtly into assignments, reactions were neutral or accepting. This disparity suggests that the format, transparency, and implementation strategy of AI are critical factors influencing student perception and adoption. The problem therefore extends beyond the technology itself to encompass pedagogical design, communication, and the psychosocial dynamics of the student-instructor-AI relationship. Understanding the drivers of this rejection (whether they stem from technical shortcomings, a lack of perceived utility, distrust in AI accuracy, a preference for human interaction, or institutional culture) is essential. Consequently, the study also seeks to identify the conditions and practices that can foster greater acceptance and effective use of AI among students, aiming to bridge the gap between technological innovation and a learner-centered educational experience.
Methodology

This research employs a sequential exploratory mixed-methods design to thoroughly investigate student attitudes toward AI in education. The methodology unfolded in two primary, interconnected phases. The first, qualitative phase involved conducting in-depth, semi-structured interviews with SAS professors who had firsthand experience integrating AI into their courses. These interviews, guided by a protocol focusing on goals, implementation formats, observed student reactions, and encountered challenges, provided rich contextual insights and helped frame the core issues.
The findings from these interviews directly informed the second, quantitative phase: the development and distribution of a comprehensive online survey to the broader SAS student body. The survey was designed to quantify attitudes, map experiences across different courses and AI implementations, and identify patterns in acceptance or rejection. It included demographic questions, scaled items measuring overall attitude, multiple-choice questions on specific experiences (benefits, negative aspects, technical difficulties), and open-ended items allowing for qualitative elaboration on changes in attitude and ideal future implementations.
Data analysis involves triangulating the qualitative themes from professor interviews with the quantitative and qualitative data from the student survey. Statistical analysis of survey responses will identify significant correlations (e.g., between AI format and attitude, or between major and perceived utility). Thematic analysis of open-ended survey responses and interview transcripts will provide deeper explanatory context for the numerical trends. This integrated approach allows for a nuanced understanding: the survey results will establish generalizable patterns of student perception, while the interview materials will provide illustrative depth and explain the "why" behind the trends. The ultimate goal of the methodology is to generate evidence-based hypotheses and practical recommendations for AI integration that align with both pedagogical objectives and student needs.
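For concreteness, the sketch below shows one way the planned statistical step could be run once survey data are collected. It is an illustrative assumption, not part of the instrument itself: the column names (ai_format, attitude, ai_literacy), the response coding, and the choice of pandas and SciPy are all hypothetical.

```python
# Illustrative sketch (assumed column names and coding): testing whether
# student attitude differs by AI implementation format, and whether
# attitude correlates with self-rated AI literacy.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical survey export

# attitude: 1-5 Likert item; ai_format: "persona" vs. "embedded"
persona = df.loc[df["ai_format"] == "persona", "attitude"]
embedded = df.loc[df["ai_format"] == "embedded", "attitude"]

# Mann-Whitney U test, suitable for ordinal Likert responses
u_stat, p_value = stats.mannwhitneyu(persona, embedded, alternative="two-sided")
print(f"Attitude by format: U = {u_stat:.1f}, p = {p_value:.3f}")

# Spearman rank correlation between AI literacy and overall attitude
rho, p = stats.spearmanr(df["ai_literacy"], df["attitude"])
print(f"Literacy vs. attitude: rho = {rho:.2f}, p = {p:.3f}")
```

A chi-square test over a format-by-attitude contingency table would be a natural alternative if attitude is treated as categorical rather than ordinal.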
Literature Review

Recent empirical studies reveal a complex and often ambivalent picture of how university students perceive and adopt artificial intelligence (AI) tools in their education. While students recognize the transformative potential of AI, their acceptance is moderated by a constellation of interrelated factors. Synthesizing global research from 2020–2025, this review identifies perceived utility, trust and reliability, ethical concerns, technical fluency, institutional context, and pedagogical implementation as the primary dimensions shaping student attitudes (Alshatti Schmidt et al., 2025; Vaněček et al., 2025).
A dominant theme across studies is the critical role of perceived usefulness and ease of use. Grounded in technology acceptance models, perceived usefulness is consistently the strongest predictor of positive attitudes and behavioral intention (Ibrahim et al., 2024; Ittefaq et al., 2025). Students report valuing AI for enhancing general understanding, saving time, and personalizing learning (Dobrovská et al., 2024; Vieriu & Petrea, 2025). However, this utility is context-dependent. Students make pragmatic distinctions, deeming AI appropriate for low-stakes tasks like brainstorming or proofreading, but often viewing its use for generating graded work as ethically problematic (Farinosi & Melchior, 2025). This nuanced calculation shows acceptance is not blanket approval but a calibrated response to specific academic scenarios.
Closely tied to utility is the issue of trust, reliability, and credibility. A significant barrier to adoption is student skepticism regarding the accuracy and impartiality of AI outputs. Concerns that inaccurate or biased inputs may lead to equally flawed outputs, alongside issues of algorithmic bias and limited transparency, undermine confidence (Alshatti Schmidt et al., 2025; Sah et al., 2025). Trust is also influenced by broader digital self-efficacy; students who feel more competent and secure online tend to hold more positive attitudes toward AI (Mustofa et al., 2025). Conversely, AI-related anxiety can dampen acceptance, acting as a barrier even when students acknowledge the technology’s potential benefits (Türk et al., 2025).
Perhaps the most salient brake on unqualified acceptance is a cluster of ethical concerns, with academic integrity at the forefront. Students themselves frequently view certain AI uses as “ethically questionable,” fearing that such uses can undermine critical thinking and constitute plagiarism (Farinosi & Melchior, 2025; Ittefaq et al., 2025). This internalized ethical guard is compounded by worries about algorithmic bias, fairness, and data privacy (Sah et al., 2025; Vaněček et al., 2025). Consequently, students across different cultural contexts express a strong desire for clear, consistent institutional policies to guide responsible use, preferring regulated integration over outright bans (Farinosi & Melchior, 2025; Vaněček et al., 2025).
Student technical fluency and AI literacy are key enablers of acceptance. Higher levels of AI knowledge and self-efficacy are positively correlated with more favorable attitudes and greater likelihood of use (Ittefaq et al., 2025; Türk et al., 2025). This underscores a critical gap: positive attitudes alone may not translate into adoption without the requisite skills. Research indicates a significant demand for formal education on AI tools, including training on prompt engineering and critical evaluation of outputs, for both students and faculty (Alshatti Schmidt et al., 2025; Vieriu & Petrea, 2025).
The institutional environment and pedagogical design ultimately frame these individual-level factors. A lack of clear guidelines and supportive leadership creates confusion and anxiety, inhibiting adoption (Alshatti Schmidt et al., 2025; Sah et al., 2025). When instructors are ambivalent or prohibitive, it signals to students that AI use is discouraged. Conversely, pedagogical designs that thoughtfully integrate AI as a learning partner for specific, well-defined tasks—particularly in low-stakes, formative contexts—can foster positive engagement and demonstrate value (Alshatti Schmidt et al., 2025). The instructor’s role remains pivotal; educator endorsement and modeling of AI use can significantly legitimize the technology for students.
Thus, student attitudes toward AI in higher education are neither uniformly positive nor negative but are shaped by a dynamic interplay of perceived benefits and significant reservations. Acceptance is highest when students perceive AI as useful for specific learning tasks, trust its outputs, operate within clear ethical and policy guidelines, possess the skills to use it effectively, and see it embedded thoughtfully by supportive instructors. The global research consensus suggests that fostering acceptance requires a holistic strategy that addresses not just the technology, but also the pedagogical, ethical, and institutional ecosystems in which it is deployed.
References

Alshatti Schmidt, D., Alboloushi, B., Thomas, A., & Magalhaes, R. (2025). Integrating artificial intelligence in higher education: Perceptions, challenges, and strategies for academic innovation. Computers and Education: Artificial Intelligence, 9, 100274. https://doi.org/10.1016/j.caeo.2025.100274

Dobrovská, D., Vaněček, D., & Yorulmaz, Y. I. (2024). Students’ attitudes towards AI in teaching and learning. International Journal of Engineering Pedagogy (iJEP), 14(8), 88–106. https://doi.org/10.3991/ijep.v14i8.52731

Farinosi, M., & Melchior, C. (2025). To adopt or to ban? Student perceptions and use of generative AI in higher education. Humanities and Social Sciences Communications, 12, 1684. https://doi.org/10.1057/s41599-025-05982-7

Ibrahim, F., Münscher, J.-C., Daseking, M., & Telle, N.-T. (2024). The technology acceptance model and adopter type analysis in the context of artificial intelligence. Frontiers in Artificial Intelligence, 7, 1496518. https://doi.org/10.3389/frai.2024.1496518

Ittefaq, M., Zain, A., Arif, R., Ahmad, T., Khan, L., & Seo, H. (2025). Factors influencing international students’ adoption of generative artificial intelligence: The mediating role of perceived values and attitudes. Journal of International Students, 15(7), 127–156. https://doi.org/10.32674/fnwdpn48

Mustofa, R. H., Kuncoro, T. G., Atmono, D., Hermawan, H. D., & Sukirman. (2025). Extending the technology acceptance model: The role of subjective norms, ethics, and trust in AI tool adoption among students. Computers and Education: Artificial Intelligence, 8, 100379. https://doi.org/10.1016/j.caeai.2025.100379

Sah, R., Hagemaster, C., Adhikari, A., Lee, A., & Sun, N. (2025). Generative AI in higher education: Student and faculty perspectives on use, ethics, and impact. Issues in Information Systems, 26(2), 373–386. https://doi.org/10.48009/2_iis_129

Türk, N., Batuk, B., Kaya, A., & Yıldırım, O. (2025). What makes university students accept generative artificial intelligence? A moderated mediation model. BMC Psychology, 13, 1257. https://doi.org/10.1186/s40359-025-03559-2

Vaněček, D., Yorulmaz, Y. I., & Dobrovská, D. (2025). A holistic exploration of student attitudes toward AI use in higher education: An international comparison. Cogent Education, 12(1), Article 2571691. https://doi.org/10.1080/2331186X.2025.2571691

Vieriu, A. M., & Petrea, G. (2025). The impact of artificial intelligence (AI) on students’ academic development. Education Sciences, 15(3), 343. https://doi.org/10.3390/educsci15030343

End of draft
———
Key conclusions from an interview with a professor who taught a course with compulsory use of an AI persona

General:
- The AI persona was integrated into the biology of human behavior course twice, revealing challenges with accuracy and student trust.
- Incorrect AI answers caused significant student skepticism, requiring extra mediator-led curiosity-driven exploration sessions.
- AI persona functioned as a Telegram chatbot with a multi-disciplinary knowledge base but had slow response times.
- Voice interaction features were tested but discontinued due to poor reception.
- Students preferred human interaction and often relied on the mediator for clarifications.
- The mediator used visual mapping to externalize and track classroom discussions and questions.
- Experiments pairing the AI persona with a human professor (not a mediator) have been conducted, but students still favor human feedback.
- Internal AI-persona content was curated by experts to ensure accuracy, contrasting with external AI tools (ChatGPT, Perplexity, etc.) that pose verification challenges.
- Collaboration attempts with external AI platforms (GigaChat) encountered technical issues.
- Institutional culture significantly affects AI adoption, with SAS students more skeptical about AI than those at UTMN.
- Psychological factors such as mimetic desire strongly influence student interaction with AI: SAS students tend to hold SAS professors in high esteem because their experience and approach differ from those encountered at school or other institutions, which may be a major reason why students are skeptical of “AI-educators.”
- AI-persona usage does not appear to strongly affect academic performance, but it correlates with more expert-like language in student responses.
- Student attitudes toward the AI persona mostly reflect hesitation, with occasional interest.
- Recommendations include gradual AI integration, live interactive sessions, and fostering habitual AI use for questioning.
Course Introduction and Student Engagement
- The course begins with an introductory session explaining its purpose and roles, including AI integration.
- Students initially struggle to accept the AI persona, requiring continuous encouragement and contextualization.
Experience and challenges of using AI persona in teaching biology of human behavior
- Teaching the course with AI persona occurred twice: first at Skolkovo and then at SAS as an updated version.
- A critical issue was AI persona providing incorrect or ambiguous answers, e.g., on free will, causing student distrust.
- Students struggled psychologically with accepting the AI persona as the 'top mind' and doubted its answers, impacting engagement.
- Mediator role focused on curiosity-based questioning rather than content delivery to facilitate student exploration.
- Close reading sessions were created to help students analyze and understand AI-persona responses.
- Students preferred human interaction and often sought answers from the mediator rather than the AI-persona.
- The mediator externalized thinking processes visually to help students track discussions and unresolved questions.
- Overall student attitudes towards AI-persona were rated at 3 out of 6, indicating moderate support with reservations.
Technical setup and usage of AI persona
- The AI persona operated as a Telegram chatbot accessible to all students during the course (a minimal illustrative sketch follows this list).
- A voice interaction feature was tested but was poorly received and discontinued.
- The AI persona's knowledge base combined material from multiple biological disciplines to answer questions.
- Response times could be long (up to three minutes), leading to group-based question submission to manage load.
- AI persona does not access the internet but relies on preloaded scientific articles and books.
- Interactive textbook and test question features were implemented but did not work well.
- Students were unaware of the specific AI model used.
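As context for the notes above, here is a minimal sketch of how a chatbot with these properties (Telegram front end, no internet access, expert-curated corpus) could be assembled. It is an assumption, not the actual SAS implementation: the real bot's model, retrieval logic, and file layout are not documented here. The sketch uses python-telegram-bot and a TF-IDF retriever so that answers come only from preloaded texts.

```python
# Illustrative sketch of an offline, curated-knowledge-base chatbot.
# This is NOT the actual SAS AI persona; libraries, file names, and
# retrieval strategy are assumptions for demonstration only.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

# Preloaded, expert-curated texts (no internet access at runtime)
passages = [p.read_text(encoding="utf-8") for p in Path("corpus").glob("*.txt")]
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(passages)

def retrieve(question: str) -> str:
    """Return the curated passage most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)
    return passages[scores.argmax()]

async def answer(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # A production persona would feed the retrieved passage to a language
    # model; here we simply return the best-matching curated source,
    # truncated to Telegram's message length limit.
    await update.message.reply_text(retrieve(update.message.text)[:4000])

app = Application.builder().token("BOT_TOKEN").build()  # placeholder token
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, answer))
app.run_polling()
```

Constraining answers to a curated corpus, as here, trades coverage for verifiability, which mirrors the internal-versus-external trade-off discussed in the next sections.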
Comparison and integration of AI persona with human experts
- The AI persona served as the primary knowledge source, with the mediator facilitating interaction but not providing content expertise.
- Presence of a human expert alongside AI persona was suggested to potentially improve student trust and engagement.
- Students showed preference for human feedback due to social and psychological factors.
Use of internal vs external AI resources and content verification
- The internal AI persona was tailored with expert-approved course materials to ensure content relevance and accuracy.
- Attempts to collaborate with external AI platforms like Sber's GigaChat faced technical difficulties.
- External AI tools offer better technical capabilities but raise concerns about information verification.
Cultural and Institutional Differences in AI Adoption
- Students at SAS (second year) seem to be less open to the use of AI than those at UTMN.
- SAS students' academic culture emphasizes skepticism and independent verification, affecting AI acceptance.
- The difference in AI adoption is linked to institutional culture and educational philosophies.
- Hypothesis: psychological factors such as mimetic desire strongly influence student interaction with AI. SAS students tend to hold SAS professors in high esteem because their experience and approach differ from those encountered at school or other institutions, which may be a major reason why SAS students are skeptical of “AI-educators.”
Academic Performance and AI Impact
- Direct comparison of grades between AI-persona and traditional courses is difficult due to differing course structures.
- In Skolkovo, students taught with the AI persona demonstrated greater semantic closeness to expert language than those not using AI (one way to operationalize this measure is sketched after this list).
- AI-persona may enhance disciplinary language use and conceptual understanding.
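"Semantic closeness to expert language" can be operationalized in several ways, and the interview does not specify which metric was used. A common, minimal approach is to embed student and expert texts with a sentence encoder and compare them by cosine similarity, as in this hypothetical sketch (the model choice, inputs, and use of sentence-transformers are assumptions):

```python
# Hypothetical sketch: measuring semantic closeness of student answers
# to expert reference texts. The metric actually used at Skolkovo/SAS
# is not documented; model choice and data here are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

expert_texts = ["Reference explanation written by a domain expert ..."]
student_texts = [
    "Student answer from the AI-persona cohort ...",
    "Student answer from the traditional cohort ...",
]

expert_emb = model.encode(expert_texts, convert_to_tensor=True)
student_emb = model.encode(student_texts, convert_to_tensor=True)

# Cosine similarity of each student answer to the expert reference;
# higher values indicate more expert-like language.
scores = util.cos_sim(student_emb, expert_emb)
for text, score in zip(student_texts, scores[:, 0]):
    print(f"{score.item():.3f}  {text[:50]}")
```

Averaging such scores per cohort would give a simple between-group comparison, though any conclusion would still need the caveat about differing course structures noted above.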
Possible solutions that can increase acceptance of AI instruments for students
- Review and update AI-persona replies to avoid ambiguous or incorrect answers.
- Advocate for slower integration of the AI persona in courses, supported by live, interactive close reading sessions that help students analyze and understand course materials and AI-persona answers better (done by Sofia).
- Encourage real-time questioning and analysis of AI responses to foster deeper understanding and engagement (done by Sofia).
- Limit AI-persona question submissions to one person per group to reduce response time and class tension (done by Sofia); a minimal throttling sketch follows this list.
- Investigate and clarify the technical details of AI-persona usage for students, including the access method, the underlying AI model, and the interface.
- Evaluate feasibility and effectiveness of integrating external AI-platforms (e.g., Perplexity, Yandex GPT) versus internal university AI resources for student use.
- Increase the number of live sessions with close AI-persona usage to demonstrate real-time questioning and interaction processes to students.
- Develop and implement exercises in which students build their own AI agents, fostering the habit of questioning and consulting alternative information sources.
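Finally, the per-group submission limit recommended above can be enforced at the bot level. A minimal, hypothetical throttle (group identifiers and the notion of a "pending" question are assumptions, and in the Telegram sketch earlier this check would run before retrieval) might look like this:

```python
# Hypothetical sketch: one pending AI-persona question per student group.
# Group assignments and the "pending" bookkeeping are assumptions.
import threading

class GroupThrottle:
    """Allow at most one in-flight question per group."""

    def __init__(self) -> None:
        self._pending: set[str] = set()
        self._lock = threading.Lock()

    def try_acquire(self, group: str) -> bool:
        """Reserve the group's slot; return False if a question is in flight."""
        with self._lock:
            if group in self._pending:
                return False
            self._pending.add(group)
            return True

    def release(self, group: str) -> None:
        """Free the slot once the AI persona has answered."""
        with self._lock:
            self._pending.discard(group)

throttle = GroupThrottle()
if throttle.try_acquire("group-1"):
    try:
        pass  # forward the question to the AI persona here
    finally:
        throttle.release("group-1")
```

Beyond reducing load, such a gate naturally encourages the group-based question formulation the professor described, since students must agree on one question before submitting it.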