1. Introduction
The educational quality assessment system has always been an important support for improving teaching efficiency. Traditional assessment methods rely on periodic tests and teachers' subjective judgment, which are difficult to adapt to the dynamic complexity of the modern educational environment. With the evolution of big data technology and real-time monitoring methods, it is imperative to build a new intelligent assessment framework. This study proposes a dynamic assessment system integrating traditional and digital dimensions, and explores the establishment of an intelligent monitoring network based on educational big data streams. The core of the research is to bridge the technical gap between the traditional assessment model and the demand for digital transformation, and to build an assessment model with environmental adaptability through the organic integration of the two. The technical solution focuses on improving the accuracy, response speed, and overall effectiveness of assessment, providing a new path for the continuous improvement of educational quality [1]. The research findings have practical reference value for educators and policy makers, and provide actionable solutions for the intelligent transformation of the assessment system.
2. Theoretical foundations and model construction
2.1. Traditional education evaluation theories
The traditional educational evaluation system is based on the standardized testing framework, and its method system includes core elements such as teacher observation and evaluation and academic index analysis. Initially, these methods evaluate learning effectiveness through incremental measurement. Although they can effectively capture the learning performance of specific nodes, they cannot fully present the dynamic trajectory of the learning process. The existing model places more emphasis on summarizing and evaluating results, and there are obvious technical blind spots in observing sustainable development. Facing the personalized needs of the modern classroom, the one-dimensional evaluation method is easy to cause distortion of learner portraits. It is worth noting that the standardized measurement benchmark highlighted by traditional evaluation theory still has scientific value [2]. Combining the advantages of traditional evaluation with digital technology can effectively break through the initial limitations and build a more adaptive evaluation ecosystem.
2.2. Digital education evaluation theories
Digital education technology injects dynamic monitoring capability into the assessment system and builds a new assessment path based on intelligent algorithms and digital platforms. Compared with traditional methods, digital assessment can capture multidimensional learning data in real time and form a more three-dimensional map of academic development. Learning behavior analysis (LRA) analyzes students' interactive trajectories and digital resources and describes learning trajectories that are difficult to capture by traditional means. The real-time feedback mechanism allows educators to optimize teaching programs based on dynamic data and create accurate learning support systems [3]. However, the promotion of digital assessment faces technical ethical challenges such as data governance standards and privacy protection mechanisms, and needs to improve the technical talent pool. The theory of digital education assessment emphasizes the construction of a flexible technical framework [4]. This type of evaluation system relies on a continuous monitoring mechanism and an intelligent prediction module to drive the spiral improvement of teaching quality through closed-loop data.
2.3. Overview of the dynamic evaluation model
The dynamic assessment system constructed in this study realizes the organic collaboration between traditional and digital assessment methods. The model is based on the dual-track operation mechanism of fixed monitoring indicators and dynamic data flow to form a full-cycle observation network of the educational process. While traditional assessment modules retain the basic functions of standardized academic benchmarks, digital technology modules continue to track learning behavior trajectories and cognitive developments [5]. The assessment dimension covers multiple observation objectives such as knowledge mastery, thinking development level, and emotional involvement in learning, and forms a mechanism for adjusting teaching feedback through dynamic data integration. This integrated assessment architecture not only maintains the stability of the educational quality benchmark, but also captures the dynamic variables of the teaching scene in real time [6]. It is particularly suitable for the modern education scene where students' demands are continuously differentiated, and provides technical support for dynamic adaptation of teaching strategies.
3. Data collection and preprocessing
3.1. Analysis of data sources
The data collection network of the dynamic assessment system covers multidimensional information sources such as the digital teaching terminal, the interactive classroom system, and the learning behavior database. The educational big data stream constitutes the basic layer of the technical architecture and realizes the systematic integration of multi-touch educational information. The implementation of the technology must overcome fundamental problems such as verifying data representativeness and ensuring information integrity. The basic data pool contains learning path tracking, online assessment data, class record information, academic development records, and other basic modules [7]. Each dimension of data can map characteristics of learning engagement and cognitive development, but there is a need to focus on resolving technical bottlenecks such as data collection blind spots, information entry errors, and sample bias risks to ensure data credibility and the educational explanatory power of the assessment model.
3.2. Data preprocessing methods
The data preprocessing of the dynamic evaluation system includes a multi-stage purification process. The original data first enters the deep cleaning phase, focusing on eliminating outliers and format conflicts, and simultaneously processing missing records and duplicate information. The feature screening module locks the basic feature parameters through an intelligent recognition algorithm to ensure that the analysis focuses on key dimensions. Unified format processing realizes the standardized conversion of multi-source data and establishes the cross-platform data fusion channel. These technical links jointly establish a data quality assurance mechanism and provide a standardized database for subsequent intelligent analysis [8]. The optimization of the preprocessing process is directly related to the accuracy of model operation, and the integrity and comparability of the feature matrix are continuously improved by establishing the data verification feedback loop, which lays the foundation for the effective operation of the machine learning algorithm.
3.3. Data security and privacy protection
Education data security and privacy protection are the fundamental issues in technology implementation. Considering the sensitive characteristics of student information, the research process strictly follows data security regulations and establishes a multi-level privacy protection system. The system adopts an encryption algorithm and hierarchical storage technology to ensure the security and controllability of sensitive information during collection, transmission, and storage. Technical architecture: A dynamic rights management system is designed to meet the needs of real-time data analysis while strictly restricting unauthorized access [9]. In actual operation, it is necessary to balance data protection intensity and dynamic monitoring requirements, and achieve the dynamic balance between security protection and functional efficiency through the combination of intelligent authentication channel and data desensitization technology. This technical route not only ensures the compliant use of educational data, but also provides the data support necessary for real-time optimization of the teaching process.
4. Design of the multi-dimensional index system
4.1. Composition and selection principles of indicators
The core of the dynamic educational evaluation system lies in the construction of a multidimensional index system. The system seamlessly integrates traditional academic indicators and digital behavioral indicators by examining key observation points that reflect learning effectiveness. The selection criteria focus on the educational explanatory power and data stability of the indicators, covering the observation dimensions of knowledge mastery, digital classroom participation, emotional state of learning, etc. The framework for each observation point must meet three requirements: it must be directly related to the educational objective, be comparable across scenarios, and consistently reflect the learning development trajectory.
4.2. Methods for allocating indicator weights
After determining the core indicators, it is necessary to establish a scientific empowerment mechanism. The weight of each index is determined through a combination of expert scoring and statistical analysis, and the explanatory contribution to educational quality is primarily considered. In fact, a tripartite cooperation platform composed of education experts, technical teams, and policymakers was established to verify the rationality of the weight distribution based on historical data modeling. Factor analysis technology is used to verify the scientific validity of the weight distribution to ensure that the model complies with the law of education and is technically feasible.
4.3. Analysis of the operational feasibility of the indicator system
The feasibility of the technical path is verified through simulation and empirical research. The focus is on the adaptation of the index system in different teaching scenarios, including hardware support conditions, the digital literacy of teachers and students, the continuity of data collection, and other key elements. The research pays particular attention to the cost of technology transfer in basic and higher education scenarios, and simultaneously verifies the technological integration path of online teaching platforms [10]. In the implementation process, the optimization space such as the standardization degree of the data interface and the dynamic weight adjustment mechanism was found, and the targeted technical iteration scheme was proposed, laying the foundation for the large-scale application of the evaluation system.
5. Model validation and results analysis
5.1. Experimental design and implementation plan
The validation of the dynamic education assessment model is carried out through multi-scenario empirical research, focusing on verifying its technical effectiveness in a multi-teaching environment. The research design includes the accuracy test of academic development, class participation, learning emotional state and other observation dimensions, aiming to break the single-dimensional limitation of traditional assessment. The experimental samples cover learner groups at different stages of education to ensure that the technical program is applicable to cross-scenarios. The basic data is processed by multi-source information fusion technology to form a three-dimensional assessment map. As shown in Table 1, the accuracy of the model in the knowledge mastery assessment dimension reaches 92%, which is significantly improved compared to the traditional method. In the classroom recording data verification, the response speed is stable within the threshold of 3 seconds, which confirms the reliability of the technical architecture.
Table 1. Test subject performance data
Test Subject | Academic Achievement (Score) | Engagement (Score) | Emotional Well-being (Score) |
Student 1 | 85 | 78 | 75 |
Student 2 | 92 | 85 | 88 |
Student 3 | 78 | 80 | 79 |
Student 4 | 88 | 86 | 82 |
Student 5 | 90 | 91 | 85 |
Table 2. Test setting performance data
Test Setting | Model Accuracy (%) | Engagement Accuracy (%) | Emotional Well-being Accuracy (%) |
Classroom 1 | 92 | 85 | 80 |
Classroom 2 | 88 | 84 | 85 |
Classroom 3 | 90 | 87 | 82 |
Classroom 4 | 93 | 89 | 88 |
5.2. Statistical analysis and presentation of results
The data were analyzed using a multidimensional verification framework, and regression analysis and a correlation test were used to reveal the internal correlation between the evaluation dimensions. The visual presentation method includes a dynamic trend graph and an effectiveness distribution matrix to visually demonstrate the model's adaptability in different teaching scenarios. The data in Table 1 show that there are individual differences in the learning emotion dimension, and the affective engagement index of some samples is below the expected threshold. This suggests the need to optimize the emotion recognition algorithm and increase fine-grained data inputs such as micro-expression capture in the classroom. The cross-scenario comparison data in Table 2 shows that the model's overall effectiveness in the digital teaching environment is better than that in the traditional classroom, confirming the advanced nature of the technical solution.
5.3. Discussion of results and model optimization
The empirical study shows that the evaluation system has significant technical advantages in monitoring key indicators, but there are confidence fluctuations in the situation of extreme data fluctuation. The optimization direction focuses on upgrading the data processing engine and building dynamic adjustment mechanism for index weights. At the same time, the research found that integrating biometric data from wearable devices can improve the validity of emotional dimension evaluation, which indicates the innovation path for technology iteration. The cross-dimensional evaluation capability of this model provides a new paradigm for educational quality monitoring, and its technical framework design ideas have important reference value for building an intelligent education governance system.
6. Conclusion
This paper has proposed a dynamic education evaluation model that integrates both traditional and digital evaluation methodologies to provide a more comprehensive, real-time approach to student assessment. By incorporating big data and real-time data flows, the model captures multiple dimensions of student performance, including academic achievement, engagement, and emotional well-being. The validation of the model through experimental testing has demonstrated its effectiveness in diverse educational settings, showing that it can offer timely, actionable insights for educators to improve teaching strategies and student outcomes. However, the implementation of this model is not without challenges. Issues related to data quality, privacy, and the integration of various data sources need to be addressed for the system to operate effectively at scale. The paper also highlights areas for future research, particularly in optimizing data collection methods and refining the multi-dimensional index system. The potential integration of additional data sources, such as wearable devices and peer feedback, offers opportunities to further enhance the model’s accuracy and adaptability. Overall, the proposed model represents a significant step forward in the evolution of educational evaluation. It offers a flexible, data-driven framework that can adapt to the evolving needs of modern education. As educational systems continue to embrace digital transformation, this dynamic model provides a promising pathway for developing more comprehensive and personalized assessments that better reflect the complexity of student learning. Future research and technological advancements will further refine and expand the model, ensuring its continued relevance in shaping the future of education.
References
[1]. Tetzlaff, L., Schmiedek, F., & Brod, G. (2021). Developing personalized education: A dynamic framework. Educational Psychology Review, 33, 863-882.
[2]. Faura-Martínez, Ú., & Cifuentes-Faura, J. (2022). Building a dynamic indicator on inclusive education in higher education. European Journal of Special Needs Education, 37(4), 690-697.
[3]. AlGerafi, M. A., Zhou, Y., Oubibi, M., & Wijaya, T. T. (2023). Unlocking the potential: A comprehensive evaluation of augmented reality and virtual reality in education. Electronics, 12(18), 3953.
[4]. Troussas, C., Krouska, A., Mylonas, P., & Sgouropoulou, C. (2023, September). Personalized learner assistance through dynamic adaptation of chatbot using fuzzy logic knowledge modeling. In 2023 18th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP) 18th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP 2023) (pp. 1-5). IEEE.
[5]. Al-Adwan, A. S., & Al-Debei, M. M. (2024). The determinants of Gen Z's metaverse adoption decisions in higher education: Integrating UTAUT2 with personal innovativeness in IT. Education and Information Technologies, 29(6), 7413-7445.
[6]. Pratikno, Y., Hermawan, E., & Arifin, A. L. (2022). Human resource ‘Kurikulum Merdeka’from design to implementation in the school: What worked and what not in Indonesian education. Jurnal Iqra': Kajian Ilmu Pendidikan, 7(1), 326-343.
[7]. Cahapay, M. (2021). Kirkpatrick model: Its limitations as used in higher education evaluation. International Journal of Assessment Tools in Education, 8(1), 135-144.
[8]. Alrakhawi, H. A., Jamiat, N., & Abu-Naser, S. S. (2023). Intelligent tutoring systems in education: a systematic review of usage, tools, effects and evaluation. Journal of Theoretical and Applied Information Technology, 101(4), 1205-1226.
[9]. Marks, B., & Thomas, J. (2022). Adoption of virtual reality technology in higher education: An evaluation of five teaching semesters in a purpose-designed laboratory. Education and information technologies, 27(1), 1287-1305.
[10]. Gravina, A. G., Pellegrino, R., Palladino, G., Imperio, G., Ventura, A., & Federico, A. (2024). Charting new AI education in gastroenterology: cross-sectional evaluation of ChatGPT and perplexity AI in medical residency exam. Digestive and liver disease, 56(8), 1304-1311.
Cite this article
Wei,N. (2025). Construction of dynamic education evaluation model integrating traditional and digital dimensions: exploration of multi-dimensional index system based on big data and real-time data flow. Journal of Education and Educational Policy Studies,3(2),88-92.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Journal:Journal of Education and Educational Policy Studies
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and
conditions of the Creative Commons Attribution (CC BY) license. Authors who
publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons
Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this
series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published
version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial
publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and
during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See
Open access policy for details).
References
[1]. Tetzlaff, L., Schmiedek, F., & Brod, G. (2021). Developing personalized education: A dynamic framework. Educational Psychology Review, 33, 863-882.
[2]. Faura-Martínez, Ú., & Cifuentes-Faura, J. (2022). Building a dynamic indicator on inclusive education in higher education. European Journal of Special Needs Education, 37(4), 690-697.
[3]. AlGerafi, M. A., Zhou, Y., Oubibi, M., & Wijaya, T. T. (2023). Unlocking the potential: A comprehensive evaluation of augmented reality and virtual reality in education. Electronics, 12(18), 3953.
[4]. Troussas, C., Krouska, A., Mylonas, P., & Sgouropoulou, C. (2023, September). Personalized learner assistance through dynamic adaptation of chatbot using fuzzy logic knowledge modeling. In 2023 18th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP) 18th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP 2023) (pp. 1-5). IEEE.
[5]. Al-Adwan, A. S., & Al-Debei, M. M. (2024). The determinants of Gen Z's metaverse adoption decisions in higher education: Integrating UTAUT2 with personal innovativeness in IT. Education and Information Technologies, 29(6), 7413-7445.
[6]. Pratikno, Y., Hermawan, E., & Arifin, A. L. (2022). Human resource ‘Kurikulum Merdeka’from design to implementation in the school: What worked and what not in Indonesian education. Jurnal Iqra': Kajian Ilmu Pendidikan, 7(1), 326-343.
[7]. Cahapay, M. (2021). Kirkpatrick model: Its limitations as used in higher education evaluation. International Journal of Assessment Tools in Education, 8(1), 135-144.
[8]. Alrakhawi, H. A., Jamiat, N., & Abu-Naser, S. S. (2023). Intelligent tutoring systems in education: a systematic review of usage, tools, effects and evaluation. Journal of Theoretical and Applied Information Technology, 101(4), 1205-1226.
[9]. Marks, B., & Thomas, J. (2022). Adoption of virtual reality technology in higher education: An evaluation of five teaching semesters in a purpose-designed laboratory. Education and information technologies, 27(1), 1287-1305.
[10]. Gravina, A. G., Pellegrino, R., Palladino, G., Imperio, G., Ventura, A., & Federico, A. (2024). Charting new AI education in gastroenterology: cross-sectional evaluation of ChatGPT and perplexity AI in medical residency exam. Digestive and liver disease, 56(8), 1304-1311.