Intelligent Tutoring Systems: A Meta-Analysis
Wenting Ma
March, 2011
Meta-Analysis
Traditional methods of review focus on statistical significance testing. Significance testing is not well suited to synthesizing findings across studies:
- it is highly dependent on sample size
- a null finding does not carry the same "weight" as a significant finding
Meta-analysis changes the focus to the direction and magnitude of the effects across studies
Effect Size: The Key to Meta-Analysis
The effect size makes meta-analysis possible
- it is the dependent variable
- it standardizes findings across studies so that they can be directly compared
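As a sketch of how a standardized mean difference is computed from two group summaries (the group names and numbers here are hypothetical, not from any study in the review):

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: the treatment-control mean
    difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study: ITS group mean 80 (SD 10, n 30), control mean 74 (SD 10, n 30)
d = cohens_d(80, 74, 10, 10, 30, 30)  # -> 0.6
```

Because the difference is expressed in pooled-SD units, studies that used different outcome scales can be compared on the same metric.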
Strengths of Meta-Analysis
- Imposes a discipline on the process of summing up research findings
- Capable of finding relationships across studies that are obscured in other approaches
- Protects against over-interpreting differences across studies
- Can handle a large number of studies (this would overwhelm traditional approaches to review)
Purpose of the Study
This review synthesizes research on the effectiveness of intelligent tutoring systems in computer-based learning environments.
Intelligent Tutoring Systems (ITS)
ITS emerged as an interdisciplinary field with origins in cognitive science, artificial intelligence, and education (Conati, 2009).
Theoretical Framework
Empirical studies have shown that one-to-one tutoring is a highly effective form of instruction that produces high levels of academic achievement and promotes knowledge construction (Bloom, 1984; Cohen, Kulik, & Kulik, 1982; Beck, Stern, & Haugsjaa, 1996; Corbett, 2001; Graesser, Jackson, Mathews, Mitchell, Olney, Ventura, Chipman, Franceschetti, Hu, Louwerse, Person, & TRG, 2003; Razzaq & Heffernan, 2004).
Theoretical Framework
The purpose of ITS research is to provide the cognitive benefits of one-to-one tutoring for every child. Like human tutors, ITS are capable of assessing students' knowledge, generating individualized instruction and learning activities, assisting the repair of knowledge gaps, and promoting learning gains (Arnott, Hastings, & Allbritton, 2008).
Theoretical Framework
Student modeling is a fundamental component for user adaptation in ITS research that distinguishes it from non-adaptive learning environments (Mitrovic, Koedinger, & Martin, 2003).
Research Questions
- What are the learning effects of intelligent tutoring learning environments in comparison with non-adaptive learning environments?
- How do these effects vary when intelligent tutors are used for learning in different knowledge domains, settings, and at educational levels?
- How are these effect sizes influenced by methodological features of the research?
Method
Selection Criteria
(a) They conducted research that compared how much students learned from ITS with how much they learned from non-intelligent computer-based learning or conventional classroom instruction;
(b) They reported measurable cognitive outcomes such as recall, transfer, or a mix of both;
(c) They reported sufficient data to allow for effect size calculations;
(d) They were publicly available online or in library archives.
Method
Selection of Studies
Searches were conducted in the following databases: Digital Dissertations, ERIC, Springer, ACM Digital Library, Science Direct, PsycINFO, and Web of Science. The key words applied in the search include "pedagogic* agent(s)", "intellige* tutor(s)", "intellige* tutoring system(s)", "intellige* cognitive tutor(s)", "intellige* agent(s)", and "personalized virtual learning environments".
Method
Selection of Studies
In the initial screening phase, the abstracts of the articles were compared with criteria a, b, and d to filter out irrelevant studies. After the initial screening, the 125 articles that met the inclusion criteria were retrieved and saved for further review of the full texts. Data from the articles that met all inclusion criteria were coded using a pre-defined coding form and coding instructions developed for this meta-analysis.
Method
Selection of Studies
Finally, 24 studies (involving 1,445 participants) passed all inclusion criteria and were coded for further analyses. All effect sizes were calculated with Hedges' correction for bias due to small sample sizes (Lipsey & Wilson, 2001).
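A sketch of the small-sample correction just mentioned (the multiplier is the standard Hedges correction as given in Lipsey & Wilson, 2001; the example numbers are hypothetical):

```python
def hedges_g(d, n_t, n_c):
    """Correct a standardized mean difference d for small-sample bias
    (Lipsey & Wilson, 2001): g = d * (1 - 3 / (4N - 9)), N = total n."""
    n = n_t + n_c
    return d * (1 - 3 / (4 * n - 9))

# A hypothetical raw d of 0.60 from a study with 15 participants per group
g = hedges_g(0.60, 15, 15)  # shrinks d slightly toward zero
```

The correction matters most for the small studies common in ITS research; for large N the multiplier approaches 1 and g is essentially equal to d.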
Distribution of Effect Sizes
Figure 1. Distribution of 24 independent effect sizes obtained from 14 articles (M = .61, SD = .49)
Table 1: Overall Weighted Mean Effect Size

       N      k   g     SE    Lower  Upper  z       Q      df  p     I²(%)
All    1,445  24  0.68  0.05  0.57   0.78   12.50*  67.17  23  0.00  65.76

(Lower/Upper: 95% confidence interval; z: test of null; Q, df, p, I²: test of heterogeneity)
* p < .05
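The statistics reported in Table 1 (inverse-variance weighted mean g, its standard error, the 95% CI, the z test of the null, and the Q and I² heterogeneity statistics) come from a standard fixed-effect model; a minimal sketch with made-up effect sizes:

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect size with its standard
    error, 95% CI, z test of the null, and Q / I^2 heterogeneity."""
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * g for w, g in zip(weights, effects)) / total_w
    se = math.sqrt(1.0 / total_w)
    ci = (mean - 1.96 * se, mean + 1.96 * se)
    z = mean / se
    q = sum(w * (g - mean) ** 2 for w, g in zip(weights, effects))
    df = len(effects) - 1
    # I^2: share of total variation attributable to between-study heterogeneity
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return mean, se, ci, z, q, i2

# Three hypothetical effect sizes, each with sampling variance 0.04
mean, se, ci, z, q, i2 = fixed_effect_summary([0.3, 0.6, 0.9], [0.04] * 3)
```

Studies with smaller sampling variance (larger samples) get proportionally more weight, which is why the weighted mean in Table 1 (0.68) differs from the unweighted mean of the 24 effect sizes (0.61).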
TABLE 2: Weighted Mean Effect Sizes by Study and Participant Characteristics

                              N      k   g     SE    Lower  Upper  z       Q      df  p     I²(%)
Educational Level
 Elementary school (K-5)      480    8   0.60  0.09  0.42   0.78   6.45*   13.51  7   0.06  48.18
 Middle school (grades 6-8)   173    5   0.57  0.15  0.27   0.87   3.74*   4.78   4   0.31  16.30
 Post-secondary               690    10  0.80  0.08  0.64   0.95   9.94*   44.64  9   0.00  79.84
 Mixed grades                 102    1   0.49  0.20  0.10   0.88   2.46*   0.00   0   1.00  0.00
 Within-levels (Qw)                                                        62.92  20  0.00
 Between-levels (QB)                                                       4.25   3   0.24
Subject/Domain
 Mathematics                  238    5   0.30  0.13  0.04   0.55   2.30*   3.35   4   0.50  0.00
 Computer Science             636    8   0.84  0.08  0.67   1.00   10.07*  38.47  7   0.00  81.81
 Physics/Chemistry            333    7   0.65  0.11  0.43   0.87   5.77*   10.39  6   0.11  42.26
 Humanities                   238    4   0.71  0.13  0.45   0.97   5.38*   2.35   3   0.50  0.00
 Within-levels (Qw)                                                        54.57  20  0.00
 Between-levels (QB)                                                       12.61  3   0.01

* p < .05
TABLE 3: Weighted Mean Effect Sizes by Study Design

                          N      k   g     SE    Lower  Upper  z       Q      df  p     I²(%)
Design
 Random assignment        1,097  18  0.83  0.06  0.70   0.95   13.16*  27.68  17  0.05  38.59
 Non-random assignment    260    3   0.16  0.12  -0.08  0.41   1.33    0.21   2   0.90  0.00
 Not reported             88     3   0.49  0.22  0.06   0.93   2.22*   15.63  2   0.00  87.21
 Within-levels (Qw)                                            43.52   21  0.00
 Between-levels (QB)                                           23.65   2   0.00
Setting
 Laboratory               965    20  0.63  0.07  0.50   0.76   9.64*   26.78  19  0.11  29.05
 Classroom                480    4   0.77  0.10  0.58   0.96   8.05*   38.94  3   0.00  92.30
 Within-levels (Qw)                                            65.72   22  0.00
 Between-levels (QB)                                           1.45    1   0.23

* p < .05
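The Within-levels (Qw) and Between-levels (QB) rows in Tables 2-4 partition total heterogeneity by a moderator variable; a sketch of that partition with hypothetical moderator groups:

```python
def q_statistic(effects, variances):
    """Heterogeneity Q for one set of effect sizes."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    return sum(wi * (g - mean) ** 2 for wi, g in zip(w, effects))

def q_partition(groups):
    """Split total Q into within- and between-group components
    (Q_total = Q_within + Q_between).  groups: [(effects, variances), ...]"""
    all_e = [g for es, _ in groups for g in es]
    all_v = [v for _, vs in groups for v in vs]
    q_total = q_statistic(all_e, all_v)
    q_within = sum(q_statistic(es, vs) for es, vs in groups)
    return q_within, q_total - q_within

# Two hypothetical moderator levels with clearly different mean effects
qw, qb = q_partition([([0.2, 0.4], [0.04, 0.04]),
                      ([0.8, 1.0], [0.04, 0.04])])
```

A large QB relative to its degrees of freedom (as for study design, QB = 23.65) indicates that the moderator explains a real share of the variation in effect sizes.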
TABLE 4: Weighted Mean Effect Sizes by Methodological Quality

                           N      k   g     SE    Lower  Upper  z       Q      df  p     I²(%)
Confidence in effect size
 Low                       172    3   0.40  0.15  0.10   0.70   2.59*   8.14   2   0.02  75.43
 High                      1,273  21  0.72  0.06  0.60   0.83   12.38*  55.29  20  0.00  63.83
 Within-levels (Qw)                                             63.43   22  0.00
 Between-levels (QB)                                            3.74    1   0.05
Treatment Fidelity
 Low                       82     2   0.19  0.22  -0.24  0.61   0.85    1.11   1   0.29  9.63
 High                      1,363  22  0.71  0.06  0.60   0.82   12.69*  60.63  21  0.00  65.37
 Within-levels (Qw)                                             61.74   22  0.00
 Between-levels (QB)                                            5.43    1   0.02
Publication Source
 Journal                   1,154  17  0.76  0.06  0.64   0.88   12.52*  39.98  16  0.00  59.98
 Conference Proceeding     291    7   0.35  0.12  0.12   0.59   2.96*   17.90  6   0.01  66.47
 Within-levels (Qw)                                             57.87   22  0.00
 Between-levels (QB)                                            9.30    1   0.00

* p < .05
Scientific Implications
- An overall statistically detectable learning benefit was found for students who learned from ITS, compared to their peers in conventional classrooms or non-adaptive computer-based learning environments.
- The learning effects produced by intelligent tutors were obtained
across a variety of subject domains and all educational levels.
- The benefits of ITS were evident in laboratory and classroom settings.
- The claim that ITS are effective learning environments is consistent with our analysis of research quality, which found that the treatment fidelity of the learning environment and publication in peer-reviewed journals are positively correlated with students' learning gains.
Possible Further Studies
- What differentiates ITS from non-adaptive learning systems?
- Why do ITS improve learning gains across studies?
- What factors, including subject domains, level of participants, institutions, etc., contribute most to the learning gains?
References
- Anderson, J. R. (1993). Rules of the Mind. Hillsdale, N. J.: Erlbaum.
- Anderson, J. R. & Lebière, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.
- Arnott, E., Hastings, P., & Allbritton, D. (2008). Research Methods Tutor: Evaluation of a dialogue-based tutoring system in the classroom. Behavior Research Methods, 40 (3), 694-698.
- Beck, J., Stern, M., & Haugsjaa, E. (1996). Applications of AI in education. ACM Crossroads, 3(1), 11-15.
- Bloom, B. S. (1984). The 2-sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4-16.
- Chen, C.M. (2008). Intelligent Web-based Learning System with Personalized Learning Path Guidance. Computers & Education, 51(2), 787-814.
- Cohen, P. A., Kulik, J. A., & Kulik, C. C. (1982). Educational outcomes of tutoring: A meta-analysis of findings. American Educational Research Journal, 19, 237-248.
- Conati, C. (2009). Intelligent tutoring systems: new challenges and directions. Paper presented at the Proceedings of the 21st international joint conference on Artificial intelligence.
- Conati, C., & VanLehn, K. (1999). Teaching meta-cognitive skills: implementation and evaluation of a tutoring system to guide self-explanation while learning from examples. In
proceedings of International Conference on Artificial Intelligence in Education.
- Conati, C., & Zhao, X. (2004). Building and Evaluating an Intelligent Pedagogical Agent to Improve the Effectiveness of an Educational Game. Proceedings of IUI '04, International
Conference on Intelligent User Interfaces, Island of Madeira, Portugal, p. 6-13.
- Corbett, A.T. (2001). Cognitive computer tutors: Solving the two-sigma problem. User Modeling: Proceedings of the Eighth International Conference (p. 137-147).
- Corbett, A.T., & Bhatnagar, A. (1997). Student modeling in the ACT programming tutor: Adjusting a procedural learning model with declarative knowledge. In Anthony Jameson, Cécile Paris, and Carlo Tasso (Eds.), User Modeling: Proceedings of the Sixth International Conference, UM97 (pp. 243-254). Springer, Vienna, New York.
- Corbett, A.T., Koedinger, K.R., and Anderson, J.R. (1997). Intelligent tutoring systems. In M.G. Helander, T.K. Landauer, and P. Prabhu (Eds.), Handbook of human computer interaction, 2nd edition. Amsterdam: Elsevier Science.
- Graesser, A.C., Jackson, G.T., Mathews, E.C., Mitchell, H.H., Olney, A., Ventura, M., Chipman, P., Franceschetti, D., Hu, X., Louwerse, M.M., Person, N.K., & TRG (2003). Why/AutoTutor: A
test of learning gains from a physics tutor with natural language dialog. In R. Alterman & D. Hirsh (Eds.), Proceedings of the 25th Annual Conference of the Cognitive Science Society (pp. 1- 5). Boston, MA: Cognitive Science Society.
- Mills, C., & Dalgarno, B. (2007). A conceptual model for game-based intelligent tutoring systems. Paper presented at ACILITE 2007, Singapore. Retrieved June 15, 2010, from
www.ascilite.org.au/conferences/singapore07/procs/mills.pdf
- Mitrovic, A., Koedinger, K.R., & Martin, B. (2003). A Comparative Analysis of Cognitive Tutoring and Constraint-Based Modeling. Johnstown, PA, USA: User Modeling 2003: 9th
International Conference (UM 2003), 22-26 June 2003. Lecture Notes in Computer Science, 2702, 313-322.
- Mitrovic, A., & Djordjevic-Kajan, S. (1995). Interactive reconstructive student modeling: A machine-learning approach. International Journal of Human-Computer Interaction, 7(4), 385 -
401.
- Nicholas, A., & Martin, B. (2008). Merging Adaptive Hypermedia and Intelligent Tutoring Systems Using Knowledge Spaces. Adaptive Hypermedia and Adaptive Web-Based Systems (pp.
426-430).
- Ohlsson, S. (1994). Constraint Based User Modeling. In J. E. Greer & G. McCalla (Eds.), Student Modeling: The Key to Individualized Knowledge-Based Instruction (pp. 167-189). Berlin, Germany: Springer-Verlag.
- Razzaq, L.M., & Heffernan, N. T. (2004). Tutorial dialog in an equation solving intelligent tutoring system. In J.C. Lester, R.M. Vicari, & F. Parguacu (Eds.), Proceedings of 7th Annual International Intelligent Tutoring Systems Conference (pp. 851-853). Berlin: Springer-Verlag.
- Roll, I., Baker, R. S., Aleven, V., & Koedinger, K. (2004). A Metacognitive ACT-R Model of Students' Learning Strategies in Intelligent Tutoring Systems. In Proceedings the Seventh
International Conference of Intelligent Tutoring Systems (pp. 854-856) Lecture Notes in Computer Science 3220. Berlin: Springer Verlag.
- Suraweera, P., & Mitrovic, A. (2002). KERMIT: A constraint-based tutor for database modeling. Biarritz, France: 6th International Conference on Intelligent Tutoring Systems ITS 2002, 2-7
Jun 2002. Lecture Notes in Computer Science, 2363, 377-387.
- Tsiriga, V., & Virvou, M. (2004). Evaluating the intelligent features of a web-based intelligent computer assisted language learning system. International Journal on Artificial Intelligence
Tools, 13(2), 411-425.
- Viswanathan, K., Rob, R. W., & David, R. (2005). A Comparison of Model-Tracing and Constraint-Based Intelligent Tutoring Paradigms. International Journal of Artificial Intelligence in
Education, 15(2), 117-144.
- Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human Computer
Studies, 66, 96-112.