INF3490/4490 Biologically Inspired Computing: Summary & Questions (Weria and Kai)
INF3490/4490 Exam
- Format: Written/Digital (see the small example at uio.inspera.no)
- When: November 30, at 09:00 (4 hours)
- "Closed book exam": no materials are permitted on the exam
- Location: See StudentWeb and
  http://www.uio.no/studier/emner/matnat/ifi/INF3490/h18/eksamen/index.html
  http://www.uio.no/studier/emner/matnat/ifi/INF4490/h18/eksamen/index.html
- Same exam in INF4490 as in INF3490
Multiple-choice Questions on Parts of the Exam
INF3490/4490 Biologically Inspired Computing
November 30th, 2017. Exam hours: 09:00 – 13:00. Permitted materials: None
The course teachers will visit the exam room at least once during the exam.
The exam text consists of problems 1-40 (multiple-choice questions), answered by selecting true or false for each statement, and problems 41-43, answered by entering text. Problems 1-40 have a total weight of 80%, while problems 41-43 have a weight of 20%.
Scoring in multiple-choice questions: Each problem has a variable number of true statements, but there is always at least one true and one false statement per problem. If you think a statement could be either true or false, consider the most likely use/case. 0.5 points are given for each correctly marked statement; an incorrectly marked or unmarked statement gives 0 points. The maximum score for a problem is 2 points and the minimum is 0. The lack of negative points (i.e., the opportunity to gain points by answering at random) is compensated for through adjustments of the grade thresholds.
Most likely use/case
- If you think a statement could be either true or false, consider the most likely use/case
- Example: "Evolutionary algorithms maintain a population of candidate solutions"
  – May be False for certain specific EAs
  – However, the main focus in our class has been on EAs with a population
Multiple-choice Questions in Digital Exam
Digital exam: Text reply questions
- We prefer answers to the text problems in English, but we will naturally not reduce the score for spelling errors as long as the understanding appears correct, e.g.:
  "the bias node multiplied with it's respectful weights is used to calculate the activation function in the first hidden layer."
- Answer briefly and with structured and formatted text.
Example: Ethical Recommendations for Robots
Structure and formatting
[Two example answers shown: one with structure and formatting, one without]
INF3490/INF4490
Syllabus:
- Selected parts of the following books (details on the course web page):
  – A.E. Eiben and J.E. Smith: Introduction to Evolutionary Computing, Second Edition. Springer. ISBN 978-3-662-44873-1
  – S. Marsland: Machine Learning: An Algorithmic Perspective. ISBN 978-1466583283
- On-line papers (on the course web page)
- The lecture notes
Supporting literature in Norwegian (not syllabus):
Jim Tørresen: hva er KUNSTIG INTELLIGENS. Universitetsforlaget, Nov 2013, ISBN 9788215020211
Topics:
- Artificial intelligence and intelligent systems
- Problem solving with artificial intelligence
- Evolution, development and learning
- Sensing and perception
- Movement and robotics
- How intelligent can and should machines become?
OPTIMIZATION AND SEARCH
Brief Summary
Search Landscapes
Some Optimization Methods
- 1. Exhaustive search
- 2. Greedy search and hill climbing
- 3. Simulated annealing
- 4. Gradient descent/ascent
– Not applicable for discrete optimization
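To make the contrast between methods 2 and 3 concrete, here is a minimal sketch (not from the slides) of hill climbing and simulated annealing on a toy objective; the function `objective`, the neighbourhood size and the cooling schedule are illustrative assumptions.

```python
import math
import random

def objective(x):
    # Illustrative 1-D objective with several local optima (assumption, not from the course)
    return -(x - 2.0) ** 2 + math.sin(5 * x)

def hill_climbing(x, steps=1000, step_size=0.1):
    # Pure exploitation: only accept neighbours that improve the current solution
    for _ in range(steps):
        neighbour = x + random.uniform(-step_size, step_size)
        if objective(neighbour) > objective(x):
            x = neighbour
    return x

def simulated_annealing(x, steps=1000, step_size=0.5, t_start=1.0, t_end=0.01):
    # Sometimes accepts worse neighbours (exploration), less often as the temperature drops
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling schedule
        neighbour = x + random.uniform(-step_size, step_size)
        delta = objective(neighbour) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = neighbour
    return x

print(hill_climbing(0.0), simulated_annealing(0.0))
```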
Exploitation and Exploration
- Search methods should combine:
– Trying completely new solutions (like in exhaustive search) => Exploration – Trying to improve the current best solution by local search => Exploitation
EVOLUTIONARY ALGORITHMS
Brief Summary
The Problem with Hillclimbing
General scheme of EAs
[Figure: general scheme of an EA: initialization, population, parent selection, parents, recombination (crossover), mutation, offspring, survivor selection, termination]
Genotype vs phenotype
[Figure: mapping between genotype (with loci on the chromosome) and phenotype]
Representation and variation operators
- First stage of building an EA, and the most difficult one: choose the right representation for the problem
- The type of variation operators needed depends on the chosen representation
- Representations we have seen (mutation sketches for two of them follow below):
  – Binary strings
  – Integers
  – Floating-point numbers
  – Permutations
  – Trees
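As a reminder of how the variation operators differ per representation, here is a minimal sketch of bit-flip mutation for binary strings and swap mutation for permutations; the mutation rate and the example genomes are arbitrary illustrative choices.

```python
import random

def bit_flip_mutation(genome, p=0.05):
    # Binary-string representation: flip each bit independently with probability p
    return [1 - bit if random.random() < p else bit for bit in genome]

def swap_mutation(perm):
    # Permutation representation: swap two positions so the result is still a permutation
    child = perm[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

print(bit_flip_mutation([0, 1, 1, 0, 1]))
print(swap_mutation([0, 1, 2, 3, 4]))
```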
Selection in EAs
- Selection can occur in two places:
  – Parent selection (selects mating pairs)
  – Survivor selection (replaces the population)
- Selection works on the population => selection operators are representation-independent!
- Selection pressure: as selection pressure increases, fitter solutions are more likely to survive or be chosen as parents
Effect of Selection Pressure
[Figures: selection behaviour under low pressure vs. high pressure]
Selection
- Parent selection (tournament selection is sketched in code below):
  – Fitness Proportionate Selection
  – Rank-based Selection
  – Tournament Selection
  – Uniform Selection
- Survivor selection:
  – Elitism
  – (µ,λ)-selection
  – (µ+λ)-selection
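A minimal sketch of two of these operators, tournament parent selection and elitist survivor selection; the fitness function, tournament size and population structure are illustrative assumptions, not the course's reference implementation.

```python
import random

def tournament_selection(population, fitness, k=3):
    # Pick k random individuals and return the fittest one (works for any representation)
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

def elitist_survivor_selection(old_pop, offspring, fitness, n_elites=1):
    # Keep the n_elites best of the old population, fill the rest with the best offspring
    elites = sorted(old_pop, key=fitness, reverse=True)[:n_elites]
    rest = sorted(offspring, key=fitness, reverse=True)[:len(old_pop) - n_elites]
    return elites + rest

# Toy usage: maximize the number of ones in a bit string
fitness = sum
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
parent = tournament_selection(pop, fitness)
print(parent, fitness(parent))
```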
Summary: The standard EA variants
Genetic Algorithm: representation: usually fixed-length vector; crossover: any or none; mutation: any; parent selection: any; survivor selection: any; specialty: none
Evolution Strategies: representation: real-valued vector; crossover: discrete or intermediate recombination; mutation: Gaussian; parent selection: random draw; survivor selection: best N; specialty: strategy parameters
Evolutionary Programming: representation: real-valued vector; crossover: none; mutation: Gaussian; parent selection: one child each; survivor selection: tournament; specialty: strategy parameters
Genetic Programming: representation: tree; crossover: swap sub-tree; mutation: replace sub-tree; parent selection: usually fitness proportional; survivor selection: generational replacement; specialty: none
Performance Measures
- Performance measures (off-line), see the sketch below:
  – Efficiency (algorithm speed, also called performance)
    - Execution time
    - Average number of evaluations to solution (AES, i.e., number of generated points in the search space)
  – Effectiveness (solution quality, also called accuracy)
    - Success rate (SR): % of runs finding a solution
    - Mean best fitness at termination (MBF)
- "Working" measures (on-line):
  – Population distribution (genotypic)
  – Fitness distribution (phenotypic)
  – Improvements per time unit or per genetic operator
  – …
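A minimal sketch of how the off-line measures SR, AES and MBF could be computed, assuming each run is logged as a (solved, evaluations, best_fitness) tuple; this data format is an illustrative assumption, not one defined in the course.

```python
# Each run logged as (solved, evaluations_used, best_fitness_at_termination)
runs = [(True, 1200, 0.98), (False, 5000, 0.71), (True, 900, 1.00)]

solved_runs = [r for r in runs if r[0]]
SR = len(solved_runs) / len(runs)                        # success rate: fraction of runs finding a solution
AES = sum(r[1] for r in solved_runs) / len(solved_runs)  # average evaluations to solution (successful runs only)
MBF = sum(r[2] for r in runs) / len(runs)                # mean best fitness at termination (all runs)
print(SR, AES, MBF)
```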
Hybrid EAs
Multi-Objective Evolutionary Algorithms
- Find a set of non-dominated solutions (approximation set) following the criteria of:
  – convergence (as close as possible to the Pareto-optimal front)
  – diversity (spread, distribution)
Multi-Objective EAs: Requirements
- 1. A way of assigning fitness and selecting individuals
  – usually based on dominance (a dominance check is sketched below)
- 2. Preservation of a diverse set of points
  – similarities to multi-modal problems
- 3. Remembering all the non-dominated points you have seen
  – usually using elitism or an archive
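A minimal sketch of the dominance relation and of extracting the non-dominated set from a list of objective vectors, assuming minimization of all objectives; the data is purely illustrative.

```python
def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # Keep only points that no other point dominates (the approximation set)
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

objective_vectors = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0)]
print(non_dominated(objective_vectors))   # (3.0, 4.0) is dominated by (2.0, 2.0)
```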
MACHINE LEARNING
Brief Summary
Characteristics of ML
- Learning from examples to analyze new data
- Generalization: Provide sensible outputs for
inputs not encountered during training
- Iterative learning process
- Types:
– Supervised Learning – Reinforcement Learning – Unsupervised Learning
Supervised learning
- Training data is provided as pairs (see the formulas below)
- The goal is to predict an "output" y from an "input" x
- The output y for each input x is the "supervision" given to the learning algorithm
  – Often obtained by manual annotation
  – Can be costly to do
- Most common examples:
  – Classification
  – Regression
Training pairs: {(x_1, f(x_1)), (x_2, f(x_2)), ..., (x_P, f(x_P))}
Goal: y = f(x)
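A minimal illustration (toy data, not from the course) of learning from input/output pairs and generalizing to a new input, here with a 1-nearest-neighbour classifier as the predictor:

```python
# Toy training pairs (x_i, y_i): inputs are 2-D points, outputs are class labels
training_pairs = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]

def predict(x):
    # 1-nearest-neighbour: output the label of the closest training input
    def dist2(pair):
        return (pair[0][0] - x[0]) ** 2 + (pair[0][1] - x[1]) ** 2
    return min(training_pairs, key=dist2)[1]

print(predict((0.2, 0.1)))   # an input not seen during training -> "A" (generalization)
```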
Neural Networks: McCulloch and Pitts Neurons
- Greatly simplified biological neurons
- Sum the weighted inputs
- If the total is greater than some threshold, the neuron "fires"
- Otherwise it does not (a small sketch follows below)
[Figure: biological neuron (dendrites, axon, terminal branches of the axon) alongside the artificial model: inputs x1, x2, x3, ..., xn with weights w1, w2, w3, ..., wn feeding a summation unit Σ]
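A minimal sketch of a McCulloch-Pitts style neuron; the weights and threshold below are arbitrary illustrative values.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    # Weighted sum of the inputs; "fire" (output 1) only if the sum reaches the threshold
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: with these weights and threshold the neuron computes a logical AND of two inputs
print(mcculloch_pitts_neuron([1, 1], weights=[1, 1], threshold=2))   # fires -> 1
print(mcculloch_pitts_neuron([1, 0], weights=[1, 1], threshold=2))   # does not fire -> 0
```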
The Perceptron Network
[Figure: single-layer perceptron network with a layer of input nodes fully connected to the output nodes]
Training a perceptron
[Figure: perceptron unit with inputs x1, x2, ..., xn, weights w1, w2, ..., wn, activation a and output y]
Activation: a = Σ_{i=1}^{n} w_i x_i
Output: y = 1 if a ≥ θ, otherwise y = 0
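A minimal sketch of training a single perceptron with the perceptron learning rule on the AND function; the learning rate, the bias handling (a constant -1 input) and the number of epochs are illustrative assumptions.

```python
# AND function as training data: each input vector starts with a constant bias input of -1
data = [([-1, 0, 0], 0), ([-1, 0, 1], 0), ([-1, 1, 0], 0), ([-1, 1, 1], 1)]
weights = [0.0, 0.0, 0.0]
eta = 0.1   # learning rate

for epoch in range(20):
    for x, target in data:
        a = sum(w * xi for w, xi in zip(weights, x))   # weighted sum (activation)
        y = 1 if a >= 0 else 0                          # threshold output
        # Perceptron learning rule: move weights in proportion to the error (target - y)
        weights = [w + eta * (target - y) * xi for w, xi in zip(weights, x)]

outputs = [1 if sum(w * xi for w, xi in zip(weights, x)) >= 0 else 0 for x, _ in data]
print(outputs)   # expect [0, 0, 0, 1], i.e. the AND function has been learned
```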
What Can Perceptrons Represent?
[Figure: the four inputs (0,0), (0,1), (1,0), (1,1) plotted for AND (linearly separable) and for XOR (not linearly separable)]
- Only linearly separable functions can be represented
by a perceptron
Minsky & Papert (1969) offered a solution to the XOR problem by combining perceptron unit responses using a second layer of units.
[Figure: two first-layer units (1 and 2) feeding a second-layer unit (3)]
Solution for XOR: add a hidden layer!
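A minimal illustration of this point: XOR computed by combining the outputs of two threshold units (OR and NAND) in a second-layer AND unit; the specific weights and thresholds are hand-picked for the example, not taken from the slides.

```python
def threshold_unit(inputs, weights, theta):
    # Perceptron-style unit: fire if the weighted sum reaches the threshold theta
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def xor(x1, x2):
    h_or = threshold_unit([x1, x2], [1, 1], theta=1)        # hidden unit 1: OR
    h_nand = threshold_unit([x1, x2], [-1, -1], theta=-1)   # hidden unit 2: NAND
    return threshold_unit([h_or, h_nand], [1, 1], theta=2)  # output unit: AND of the two

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # expect [0, 1, 1, 0]
```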
Backpropagation
Rumelhart, Hinton and Williams (1986)
[Figure: multi-layer network with inputs x_i, weights w_ki and w_jk, outputs y_j, and error terms δ_k and δ_j]
Forward step: propagate activations from the input layer to the output layer.
Backward step: propagate errors from the output layer back to the hidden layer.
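A minimal sketch of the forward and backward steps for a single training example, using one hidden layer, sigmoid activations and a squared-error loss; the network size, learning rate, data and the omission of bias terms are illustrative assumptions.

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Tiny network (no bias terms for brevity): 2 inputs -> 2 hidden units -> 1 output
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # w_hidden[j][i]
w_out = [random.uniform(-1, 1) for _ in range(2)]                         # w_out[j]
eta = 0.5   # learning rate

def train_step(x, target):
    global w_hidden, w_out
    # Forward step: propagate activations from input to output
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(2)))
    # Backward step: propagate errors from the output back to the hidden layer
    delta_out = (y - target) * y * (1 - y)
    delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
    # Gradient-descent weight updates
    w_out = [w_out[j] - eta * delta_out * h[j] for j in range(2)]
    w_hidden = [[w_hidden[j][i] - eta * delta_hidden[j] * x[i] for i in range(2)] for j in range(2)]
    return y

print(train_step([1.0, 0.0], target=1.0))
```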
The Solution: Cross-Validation
To maximize generalization and avoid overfitting, split data into three sets:
- Training set: Train the model.
- Validation set: Judge the model’s generalization ability
during training.
- Test set: Judge the model’s generalization ability after
training.
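A minimal sketch of such a three-way split; the 60/20/20 proportions and the shuffling are illustrative choices, not prescribed by the course.

```python
import random

def train_val_test_split(data, train_frac=0.6, val_frac=0.2):
    # Shuffle once, then cut the data into training, validation and test portions
    shuffled = data[:]
    random.shuffle(shuffled)
    n_train = int(train_frac * len(shuffled))
    n_val = int(val_frac * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = train_val_test_split(list(range(10)))
print(len(train), len(val), len(test))   # expect 6 2 2
```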