SLIDE 1

Learning Objectives

At the end of the class you should be able to: Explain the components and the architecture of a learning problem Explain why a learner needs a bias Identify the sources of error for a prediction

© D. Poole and A. Mackworth 2016. Artificial Intelligence, Lecture 7.1.

slide-2
SLIDE 2

Learning

Learning is the ability to improve one's behavior based on experience:
◮ The range of behaviors is expanded: the agent can do more.
◮ The accuracy on tasks is improved: the agent can do things better.
◮ The speed is improved: the agent can do things faster.

SLIDE 3

Components of a learning problem

The following components are part of any learning problem:
◮ task: the behavior or task that is being improved, e.g., classification, acting in an environment.
◮ data: the experiences that are being used to improve performance in the task.
◮ measure of improvement: how the improvement can be measured, e.g., increasing accuracy in prediction, new skills that were not present initially, improved speed.
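The three components can be sketched as a record; the names, fields, and the accuracy measure below are my own illustration, not from the slides.

```python
# A sketch (names and fields are assumptions, not from the slides) of the
# three components of a learning problem as a record.
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class LearningProblem:
    task: str                       # the behavior being improved, e.g. "classification"
    data: List[Any]                 # experiences used to improve performance
    measure: Callable[..., float]   # how improvement is scored, e.g. accuracy


def accuracy(predictions, labels):
    """One possible measure of improvement: fraction of correct predictions."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


problem = LearningProblem(task="classification",
                          data=[("known", "new", "short", "work")],
                          measure=accuracy)
print(problem.task, problem.measure(["reads"], ["reads"]))  # classification 1.0
```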

SLIDE 4

Black-box Learner

[Figure: Experiences/Data and Background knowledge/Bias feed the Learner; the Learner produces Model(s); the Reasoner uses the Model(s) to turn a Problem/Task into an Answer/Performance.]

SLIDE 5

Learning architecture

[Figure: Experiences/Data and Background knowledge/Bias feed the Learner; the Learner produces Model(s); the Reasoner uses the Model(s) to turn a Problem/Task into an Answer/Performance.]

SLIDE 11

Common Learning Tasks

◮ Supervised classification: given a set of pre-classified training examples, classify a new instance.
◮ Unsupervised learning: find natural classes for examples.
◮ Reinforcement learning: determine what to do based on rewards and punishments.
◮ Analytic learning: reason faster using experience.
◮ Inductive logic programming: build richer models in terms of logic programs.
◮ Statistical relational learning: learn relational representations that also deal with uncertainty.

SLIDE 12

Example Classification Data

Training examples:

  Example  Action  Author   Thread  Length  Where
  e1       skips   known    new     long    home
  e2       reads   unknown  new     short   work
  e3       skips   unknown  old     long    work
  e4       skips   known    old     long    home
  e5       reads   known    new     short   home
  e6       skips   known    old     long    work

New examples:

  e7       ???     known    new     short   work
  e8       ???     unknown  new     short   work

We want to classify the new examples on the feature Action based on the examples' Author, Thread, Length, and Where.
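One very simple way to learn from this data is to search for a single feature whose values alone predict Action on every training example. This is a sketch of my own (the helper names are assumptions), not the algorithm used in the course:

```python
# A sketch (helper names are mine, not from the slides): look for one
# feature that classifies every training example consistently -- a very
# strong bias.

FEATURES = ("Author", "Thread", "Length", "Where")

training = [  # (Author, Thread, Length, Where) -> Action
    (("known",   "new", "long",  "home"), "skips"),   # e1
    (("unknown", "new", "short", "work"), "reads"),   # e2
    (("unknown", "old", "long",  "work"), "skips"),   # e3
    (("known",   "old", "long",  "home"), "skips"),   # e4
    (("known",   "new", "short", "home"), "reads"),   # e5
    (("known",   "old", "long",  "work"), "skips"),   # e6
]


def learn_single_feature_rule(examples):
    """Return (feature index, value -> action map) for the first feature
    that classifies every training example consistently, else None."""
    for i in range(len(FEATURES)):
        rule, consistent = {}, True
        for features, action in examples:
            if rule.setdefault(features[i], action) != action:
                consistent = False
                break
        if consistent:
            return i, rule
    return None


i, rule = learn_single_feature_rule(training)
print(FEATURES[i], rule)         # Length {'long': 'skips', 'short': 'reads'}

# Classify the new examples e7 and e8 (both short):
e7 = ("known", "new", "short", "work")
e8 = ("unknown", "new", "short", "work")
print(rule[e7[i]], rule[e8[i]])  # reads reads
```

Here only Length separates the training data perfectly (long maps to skips, short to reads), so both new examples are predicted as reads.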

SLIDE 15

Feedback

Learning tasks can be characterized by the feedback given to the learner:
◮ Supervised learning: what has to be learned is specified for each example.
◮ Unsupervised learning: no classifications are given; the learner has to discover categories and regularities in the data.
◮ Reinforcement learning: feedback occurs after a sequence of actions.

SLIDE 18

Measuring Success

The measure of success is not how well the agent performs on the training examples, but how well the agent performs on new examples. Consider two agents:
◮ P claims the negative examples seen are the only negative examples; every other instance is positive.
◮ N claims the positive examples seen are the only positive examples; every other instance is negative.
Both agents correctly classify every training example but disagree on every other example.
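The two agents can be written out directly; the example instances below are my own placeholders, not from the slides:

```python
# A sketch (the example instances are assumptions of mine) of agents P
# and N: both fit the training data perfectly yet disagree everywhere else.

train_pos = {"a", "b"}   # hypothetical positive training examples
train_neg = {"c", "d"}   # hypothetical negative training examples


def agent_p(x):
    """P: the seen negatives are the only negatives; all else is positive."""
    return x not in train_neg


def agent_n(x):
    """N: the seen positives are the only positives; all else is negative."""
    return x in train_pos


# Both classify every training example correctly...
assert all(agent_p(x) and agent_n(x) for x in train_pos)
assert all(not agent_p(x) and not agent_n(x) for x in train_neg)

# ...but on any unseen instance P says positive and N says negative.
print(agent_p("e"), agent_n("e"))  # True False
```

Training performance alone cannot distinguish the two, which is why success must be measured on unseen examples.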

SLIDE 22

Bias

◮ The tendency to prefer one hypothesis over another is called a bias.
◮ Saying a hypothesis is better than N's or P's hypothesis isn't something that is obtained from the data.
◮ To have any inductive process make predictions on unseen data, an agent needs a bias.
◮ What constitutes a good bias is an empirical question about which biases work best in practice.

SLIDE 24

Learning as search

Given a representation, data, and a bias, the problem of learning can be reduced to one of search: learning is a search through the space of possible representations for the representation or representations that best fit the data, given the bias. These search spaces are typically prohibitively large for systematic search, so methods such as gradient descent or stochastic simulation are used. A learning algorithm is made of a search space, an evaluation function, and a search method.
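The three ingredients can be made concrete in a toy setting. The dataset and hypothesis space below are assumptions of mine for illustration; real spaces are far too large for the enumeration used here, which is the point of the slide:

```python
# A sketch (data and hypothesis space are assumptions mine) of learning
# as search: the search space is a set of threshold hypotheses, the
# evaluation function counts training errors, and the search method is
# plain enumeration.

data = [(1.0, 0), (2.0, 0), (3.0, 1), (4.0, 1)]  # (feature, label)


def errors(threshold):
    """Evaluation function for the hypothesis 'predict 1 iff x >= threshold'."""
    return sum((x >= threshold) != bool(y) for x, y in data)


search_space = [0.5, 1.5, 2.5, 3.5, 4.5]  # candidate hypotheses
best = min(search_space, key=errors)      # search method: try them all
print(best, errors(best))                 # 2.5 0
```

Swapping the enumeration for gradient descent or stochastic simulation changes only the search method; the space and the evaluation function play the same roles.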

SLIDE 26

Data

Data isn’t perfect:

◮ the features given are inadequate to predict the

classification

◮ there are examples with missing features ◮ some of the features are assigned the wrong value

  • verfitting occurs when distinctions appear in the training

data, but not in the unseen examples.
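Overfitting can be shown with a model that memorizes its training data; the dataset below is an assumption of mine for illustration:

```python
# A sketch (the dataset is an assumption of mine) of overfitting: a
# 1-nearest-neighbour model memorizes the training data, noise included,
# so noisy distinctions reappear as errors on unseen examples.

# True concept: positive iff x >= 5; two training labels are corrupted.
train = [(float(x), x >= 5) for x in range(10)]
train[2] = (2.0, True)    # noisy label (should be False)
train[7] = (7.0, False)   # noisy label (should be True)


def nn_predict(x):
    """Predict the label of the nearest training point (pure memorization)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]


# Perfect fit on the training examples themselves:
train_acc = sum(nn_predict(x) == y for x, y in train) / len(train)

# Noise-free unseen examples near the training points:
test = [(x + 0.4, x + 0.4 >= 5) for x in range(10)]
test_acc = sum(nn_predict(x) == y for x, y in test) / len(test)
print(train_acc, test_acc)  # 1.0 0.8
```

The model reproduces the two noisy labels perfectly on the training set, and exactly those memorized distinctions become the errors on the unseen examples.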

SLIDE 30

Errors in learning

Errors in learning are caused by:
◮ limited representation (representation bias)
◮ limited search (search bias)
◮ limited data (variance)
◮ limited features (noise)

SLIDE 31

Choosing a representation for models

The richer the representation, the more useful it is for subsequent problem solving. The richer the representation, the more difficult it is to learn. This tension is the "bias-variance tradeoff".

SLIDE 32

Characterizations of Learning

◮ Find the best model given the data.
◮ Delineate the class of consistent models given the data.
◮ Find a probability distribution of the models given the data.
