Recommender Systems and Education (with Report on Practical Experiences) – PowerPoint PPT Presentation




SLIDE 1

Recommender Systems and Education (with Report on Practical Experiences)

Radek Pelánek

SLIDE 2

This Lecture

educational applications, with focus on the relation to topics discussed so far (collaborative filtering, evaluation, ...)
specific examples
connections between seemingly different problems/techniques
personalization and different types of recommendations
my experience

SLIDE 3

Motivation: Personalization in Education

each student gets suitable learning materials, exercises tailored to a particular student, adequate for his knowledge (mood, preferences, ...)
mastery learning – fixed outcome, varied time (compared to classical education: fixed time, varied outcome)
SLIDE 4

Motivation: Flow, ZPD

Vygotsky, zone of proximal development

SLIDE 5

Adaptation and Personalization in Education

... gets a lot of attention: Khan Academy, Duolingo, MOOC courses, Carnegie Learning, Pearson, ReasoningMind, and many others

SLIDE 6

Technology and Education

e-learning, m-learning, technology-enhanced learning, computer-based instruction, computer-managed instruction, computer-based training, computer-assisted instruction, computer-aided instruction, internet-based training, flexible learning, web-based training, online education, massive open online courses, virtual education, virtual learning environments, digital education, multimedia learning, intelligent tutoring systems, adaptive learning, adaptive practice, ...

SLIDE 7

Recommender Systems in Technology Enhanced Learning

SLIDE 8

Recommender Systems in Technology Enhanced Learning

SLIDE 9

Personal recommender systems for learners in lifelong learning networks: the requirements, techniques and model

SLIDE 10

Personal recommender systems for learners in lifelong learning networks: the requirements, techniques and model

SLIDE 11

Education and RecSys

many techniques applicable in principle, but application more difficult than in “product recommendation”:

longer time frame
pedagogical principles
domain ontology, prerequisites
learning outcomes not directly measurable

SLIDE 12

Evaluation

evaluation even more difficult than for other recommender systems
compare goals:

product recommendations: sales
text (blogs, etc.) recommendations: clicks (profit from advertisement)
education: learning

learning can be measured only indirectly ⇒ hard to tell what really works

SLIDE 13

Examples of Techniques

adaptive educational hypermedia
learning networks
intelligent tutoring systems

SLIDE 14

Adaptive Educational Hypermedia

adaptive content selection

most relevant items for particular user

adaptive navigation support

navigation from one item to another

adaptive presentation

presentation of the content

SLIDE 15

Adaptive Educational Hypermedia

Recommender Systems in Technology Enhanced Learning

SLIDE 16

Learning Networks

Recommender Systems in Technology Enhanced Learning

SLIDE 17

Intelligent Tutoring Systems

interactive problem solving behavior

outer loop – selection/recommendation of “items” (problems, exercises)
inner loop – hints, feedback, ...

adaptation based on learner modeling
knowledge modeling more involved than “taste modeling” (domain ontology, prerequisites, ...)

SLIDE 18

Learner Modeling

[Diagram: learner modeling loop – a learner solves an item (question, problem) from the item pool; learner modeling updates the knowledge model (supported by a domain model); an instructional policy drives item selection (outer loop) and hints/feedback (inner loop); an open learner model and actionable insight support the human-in-the-loop]

Bayesian Knowledge Tracing, Logistic Models, and Beyond: An Overview of Learner Modeling Techniques

SLIDE 19

Carnegie Learning: Cognitive Tutor

SLIDE 20

Carnegie Learning: Cognitive Tutor

SLIDE 21

Student Modeling and Collaborative Filtering

user ∼ student
product ∼ item, problem
rating ∼ student performance (correctness of answer, problem solving time, number of hints taken)

SLIDE 22

Case Studies

our projects (FI MU) – “adaptive practice”:

Problem Solving Tutor
“Slepé mapy” (Map Outlines) – geography
“Umíme česky/anglicky/matiku” – Czech grammar, English, math
anatom.cz, matmat.cz, poznavackaprirody.cz, ...

Wayang Outpost – math
ALEF – programming
CourseRank – course recommender

SLIDE 23

Problem Solving Tutor

math and computer science problems, logic puzzles
performance = problem solving time
model – predictions of times
recommendations – problems of similar difficulty

SLIDE 24

Problem Solving Tutor

SLIDE 25

Tutor: predictions

tutor.fi.muni.cz

SLIDE 26

Model of Problem Solving Times

[Plot: predicted log(T) as a function of student skill θ, with curves illustrating the problem parameters a, b, c]

SLIDE 27

Parameter Estimation

data: student s solved problem p in time t_sp
we need to estimate:

student skills θ
problem parameters a, b, c

stochastic gradient descent, very similar to the “SVD” collaborative filtering algorithm
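A minimal sketch of this estimation step. The exact parametrization is an assumption here: predicted log-time of student s on problem p is taken to be b[p] + a[p] · θ[s], fitted by stochastic gradient descent on squared error, analogous to the “SVD” factorization algorithm with a single latent factor.

```python
import random

def sgd_fit(observations, n_students, n_problems, lr=0.01, epochs=50):
    """Fit a simple model of problem-solving times with SGD.

    Hypothetical parametrization: predicted log-time of student s on
    problem p is  b[p] + a[p] * theta[s].
    observations: list of (s, p, log_time) triples.
    """
    theta = [0.0] * n_students   # student skills
    a = [-1.0] * n_problems      # problem discriminations (skill reduces time)
    b = [0.0] * n_problems       # problem difficulties (baseline log-time)
    for _ in range(epochs):
        random.shuffle(observations)
        for s, p, log_t in observations:
            err = log_t - (b[p] + a[p] * theta[s])
            # gradient step on squared prediction error
            b[p] += lr * err
            a[p] += lr * err * theta[s]
            theta[s] += lr * err * a[p]
    return theta, a, b
```

On data that the model can fit exactly, the train RMSE approaches zero; in practice one would compare against the mean-time baseline on a held-out test set, as the slides describe.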

SLIDE 28

Evaluation of Predictions

20 types of problems
data: 5 000 users, 8 000 hours, more than 220 000 problems
difficulty of problems: from 10 seconds to 1 hour
train and test sets, metric: RMSE

results:
significant improvement with respect to a baseline (mean times)
more complex models do not bring much improvement

SLIDE 29

SLIDE 30

Geography: Map Outlines

adaptive practice of geography knowledge (facts)
focus on prior knowledge
choice of places to practice ∼ recommendation (forced)

SLIDE 31

SLIDE 32

SLIDE 33

SLIDE 34

Geography – Difficulty of Countries

SLIDE 35

Geography – Model

Model (prior knowledge):

global skill of a student θ_s
difficulty of a country d_c

Probability of a correct answer = logistic function of the difference between skill and difficulty:

P(correct | d_c, θ_s) = 1 / (1 + e^(−(θ_s − d_c)))
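The prediction itself is a one-liner; a direct transcription of the formula above:

```python
import math

def p_correct(theta_s, d_c):
    """Probability of a correct answer: logistic function of the
    difference between student skill theta_s and country difficulty d_c."""
    return 1.0 / (1.0 + math.exp(-(theta_s - d_c)))
```

When skill equals difficulty the predicted probability is 0.5; a skill two points above difficulty predicts roughly 88 % success.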

SLIDE 36

Logistic Function

f(x) = 1 / (1 + e^(−x))

[Plot: logistic curve passing through (0, 0.5), approaching 0 and 1 for x from −6 to 6]

SLIDE 37

Geography – Model

Elo rating system (originally from chess):

θ := θ + K · (R − P(R = 1))
d := d − K · (R − P(R = 1))

magnitude of update ∼ how surprising the result was
related to stochastic gradient descent and the “SVD” algorithm in collaborative filtering (but with only a single latent factor)
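The update rule can be sketched as follows. The constant K is a simplification: adaptive systems typically replace it with an “uncertainty function” that shrinks as a student or item accumulates answers.

```python
import math

def elo_update(theta, d, correct, K=0.5):
    """One Elo-style update after an answer.

    theta: student skill, d: item difficulty,
    correct: observed result R (1 = correct, 0 = incorrect).
    """
    p = 1.0 / (1.0 + math.exp(-(theta - d)))  # predicted P(R = 1)
    surprise = correct - p                     # how surprising the result was
    return theta + K * surprise, d - K * surprise
```

For example, a correct answer when the model predicted a 50 % chance moves skill up and difficulty down by K · 0.5 each.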

SLIDE 38

Geography – Current Knowledge

estimation of knowledge after a sequence of answers for a particular place
extension of the Elo system
short-term memory, forgetting

SLIDE 39

Geography – Question Selection

question selection (based on predicted probability of a correct answer) ∼ item recommendation (based on predicted rating)
scoring function – linear combination of several factors:

predicted success rate vs. target success rate
viewed recently
how many times asked
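A sketch of such a scoring function, combining the factors listed above. The weights and the shape of each penalty are illustrative assumptions, not the deployed system:

```python
def score_question(pred_success, target_success, secs_since_seen, times_asked,
                   w=(10.0, 1.0, 1.0)):
    """Hypothetical linear-combination score for question selection;
    the highest-scoring candidate question would be asked next."""
    # prefer questions whose predicted success rate is near the target
    difficulty_fit = -abs(pred_success - target_success)
    # penalize questions viewed very recently
    recency = -1.0 / max(secs_since_seen, 1.0)
    # prefer less-practiced questions
    novelty = -float(times_asked)
    return w[0] * difficulty_fit + w[1] * recency + w[2] * novelty
```

With these (assumed) weights, a question matching the target difficulty, not seen recently, and asked few times wins the comparison.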

SLIDE 40

Geography – Multiple Choice Questions

number of options – based on estimated knowledge
choice of options – ???
Example: the correct answer is Hungary; we need 3 distractors – which countries should we use?

SLIDE 41

Geography – Distractors

choice of options (distractors) – confused places (∼ collaborative filtering aspect)

SLIDE 42

Geography – Evaluation

evaluation of predictions – offline experiment:

comparison of different models (basic Elo, extensions, ...)
issue with metrics: RMSE, AUC (⇒ “Metrics for Evaluation of Student Models” paper)

evaluation of question construction (“recommendations”) – online experiment, AB testing:

issue with metrics: enjoyment vs. learning

SLIDE 43

AB Testing

4 groups:

Target item   Options
adaptive      adaptive
adaptive      random
random        adaptive
random        random
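Assignment to the four conditions can be sketched as deterministic bucketing by user id, so a returning user always sees the same condition. The slides do not specify the assignment mechanism; this is an illustrative assumption.

```python
# the 2x2 experimental design: (target item selection, options selection)
CONDITIONS = [("adaptive", "adaptive"), ("adaptive", "random"),
              ("random", "adaptive"), ("random", "random")]

def assign_condition(user_id: int):
    """Deterministically map an integer user id to one of the 4 groups."""
    return CONDITIONS[user_id % len(CONDITIONS)]
```

Deterministic assignment keeps each user's experience consistent across sessions, which matters for the engagement and learning measures discussed on the following slides.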

SLIDE 44

Measuring Engagement – Survival Analysis

SLIDE 45

Measuring Learning

we cannot measure knowledge (learning) directly
estimation based on answers
adaptive questions make a fair comparison difficult
use of “reference questions” – every 10th question is “randomly selected”

SLIDE 46

Measuring Learning – Learning Curves

SLIDE 47

Other AB Experiments

difficulty of questions
choice of distractors (competitive vs. adaptive)
maximal number of distractors
user control of difficulty

SLIDE 48

AB experiments

∼ 1000 users per day
sometimes minimal or no differences between experimental conditions (in the overall behaviour)
reasons:

conditions not sufficiently different (differences manifest only sometimes)
disaggregation (by users, context) shows differences which cancel out in the overall results

SLIDE 49

Your Intuition?

What is a suitable target difficulty of questions? Target success rate:
50 %
65 %
80 %
95 %

SLIDE 50

Difficulty and Explicit Feedback

Out-of-school usage In-school usage

SLIDE 51

Umíme to

http://www.umimecesky.cz/ – Czech grammar and spelling
http://www.umimeanglicky.cz/ – English (for Czech students)
http://www.umimematiku.cz/ – math
and more: https://www.umimeto.org/

SLIDE 52

Czech Grammar – Project Evolution

initial version:

target audience: adults
single exercise type
coarse-grained concepts
focus on adaptive choice of items

current version:

target audience: children
more than 10 exercise types
fine-grained concepts
focus on mastery learning
several domains

SLIDE 53

Grammar – Basic Exercise

SLIDE 54

Personalization: Mastery Learning

skill of the learner – estimated based on performance, taking into account:

correctness of answers
response time
time intensity of items (median response time)
probability of guessing

mastery criterion – comparison of skill to a threshold
progress bar – visualization of skill
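The mastery-learning loop above can be sketched as a skill estimate plus a threshold check. The concrete update formula below is an illustrative assumption that merely reflects the listed factors (correctness, response time relative to the item's median time, guessing probability), not the system's actual model.

```python
def update_skill(skill, correct, response_time, median_time, p_guess,
                 step=0.1):
    """Hypothetical skill update after one answer."""
    if correct:
        # faster-than-median correct answers count more (capped at 2x);
        # discount the part of success that guessing could explain
        speed = min(median_time / max(response_time, 1e-9), 2.0)
        return skill + step * (1.0 - p_guess) * speed
    return skill - step

def mastered(skill, threshold=1.0):
    """Mastery criterion: compare estimated skill to a threshold.

    The skill value is also what a progress bar would visualize."""
    return skill >= threshold
```

A run of fast correct answers pushes the skill over the threshold; incorrect answers push it back down.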

SLIDE 55

Umíme to – Skills

SLIDE 56

Umíme to – Domain Model

“knowledge components”:

abstract concepts: “capitalization rules”, “addition of fractions”
taxonomy (tree)

“problem sets”:

specific exercise type, set of items
mapping to knowledge components

SLIDE 57

Umíme to – Recommendations

the system contains hundreds of problem sets ⇒ recommendations are useful
types of recommendations:

front page dashboard
“default” practice within a selected exercise or topic
“follow up” after reaching mastery within some problem set

SLIDE 58

Umíme to – Recommendations Data

manually edited data:

taxonomy of knowledge components
prerequisites between knowledge components
attributes of problem sets: recommended grade, ...

automatically computed data – problem set relations: pred, follow, similar

SLIDE 59

Umíme to – Recommendations

recent user history:

RSuc – set of successfully solved problem sets
RUnsuc – set of unsuccessfully solved problem sets

homepage recommendations:

follow(RSuc)
pred(RUnsuc)
similar(RSuc ∪ RUnsuc)

analogously for other recommendation situations
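The homepage composition above can be sketched as follows, assuming `follow`, `pred`, and `similar` are the precomputed problem-set relations from the previous slide; deduplication, ordering of the three sources, and the result limit are illustrative assumptions.

```python
def homepage_recommendations(r_suc, r_unsuc, follow, pred, similar, limit=10):
    """Combine follow-ups of successful sets, prerequisites of
    unsuccessful sets, and sets similar to recent history.

    r_suc / r_unsuc: sets of recently (un)successfully solved problem sets;
    follow / pred / similar: functions mapping a problem set to a list
    of related problem sets.
    """
    recs, seen = [], set(r_suc) | set(r_unsuc)   # never re-recommend history
    candidates = (
        [c for ps in r_suc for c in follow(ps)] +             # follow(RSuc)
        [c for ps in r_unsuc for c in pred(ps)] +             # pred(RUnsuc)
        [c for ps in (r_suc | r_unsuc) for c in similar(ps)]  # similar(...)
    )
    for ps in candidates:
        if ps not in seen:
            recs.append(ps)
            seen.add(ps)
    return recs[:limit]
```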

SLIDE 60

Um´ ıme to – Data Analysis

“design adaptation”: data ⇒ analysis ⇒ insights ⇒ revision of items or system behaviour

difficulty of items
survival analysis, length of practice
response times
item similarities

SLIDE 61

Item Similarities and Clustering

closely related to item–item collaborative filtering
item similarities: Pearson correlation of answers
clustering: k-means
visualization: t-SNE
key issue: do we have enough data?
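The similarity step can be sketched as follows, assuming each item's answers are stored as a dict from student id to correctness (1/0) – a representation assumption; clustering and t-SNE would then run on the resulting similarity matrix (e.g. with scikit-learn).

```python
import math

def item_similarity(answers_i, answers_j):
    """Pearson correlation between two items' answer vectors,
    restricted to students who answered both items.
    Returns 0.0 when the correlation is undefined (too little data)."""
    common = set(answers_i) & set(answers_j)
    if len(common) < 2:
        return 0.0                      # the "enough data?" issue
    xs = [answers_i[s] for s in common]
    ys = [answers_j[s] for s in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy) if vx and vy else 0.0
```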

SLIDE 62

SLIDE 63

SLIDE 64

Note on Different Approaches

using data and models for:

“automatic” interventions:
recommendations
personalization choices
mastery learning

support for “manual” interventions:
items
system behaviour
user behaviour

“asking the right questions” is often more important than “using sophisticated methods”

SLIDE 65

Wayang Outpost

“A Multimedia Adaptive Tutoring System for Mathematics that Addresses Cognition, Metacognition and Affect”
adaptive tutoring system for math
Wayang Outpost → MathSpring, http://mathspring.org/
specific feature: focus on affect and metacognition

SLIDE 66

Wayang Outpost

SLIDE 67

Wayang Outpost: Open Learner Model

SLIDE 68

Wayang Outpost: Affect, Metacognition

SLIDE 69

Wayang Outpost: Affective Learning Companions

SLIDE 70

Effort Based Tutoring

Note: Expected response (correct, hints, time) based on answers of other students ∼ collaborative filtering

SLIDE 71

Wayang Outpost: Evaluation

SLIDE 72

Wayang Outpost: Evaluation

SLIDE 73

Wayang Outpost: Evaluation

SLIDE 74

Wayang Outpost: Evaluation

SLIDE 75

ALEF

PeWe (Personalized Web) Group at UISI FIIT STU, Bratislava
adaptive education, mainly programming exercises

SLIDE 76

ALEF

ALEF: A Framework for Adaptive Web-Based Learning 2.0, Šimko, Barla, Bieliková

SLIDE 77

ALEF

ALEF: A Framework for Adaptive Web-Based Learning 2.0, Šimko, Barla, Bieliková

SLIDE 78

ALEF

ALEF: A Framework for Adaptive Web-Based Learning 2.0, Šimko, Barla, Bieliková

SLIDE 79

CourseRank

recommendations of whole courses
course evaluation and planning
social system
ranking of courses, grade distribution, other statistics
originally Stanford, later many (US) universities; out of order now
similar features e.g. in Coursera

SLIDE 80

Summary

personalized education ↔ recommender systems
many similarities
specific challenges
difficult evaluation