SLIDE 1

Computational Thinking www.ugrad.cs.ubc.ca/~cs100

Administrative notes

  • Projects due today at 11:59pm! You can handin something multiple times (just use the “Overwrite Previous” checkbox if you are resubmitting), so don’t be afraid to test handin early!
  • Take a screenshot and email your TA right away if something happens and you are not able to submit your project on time.
  • Make sure your project deliverables (just the deliverables, not the reports) do not have your names on them. For example, your research paper shouldn’t have your name on it but your group report should.

SLIDE 2

Administrative notes

  • Blind review assignments have been sent to your CS_ID@ugrad.cs.ubc.ca email. If you did not receive it, please email your project TA.

SLIDE 3

Administrative notes

  • We are offering 2% on top of your final grade if you present your project to the class on April 6th. We are looking for five teams to present; if more than five teams volunteer, their peer presentation grades will be used to determine who presents.
  • Email Jessica at jhmwong@cs.ubc.ca if you are interested. Everyone who has signed up has been responded to; if you don’t have a response yet, please resend your email.

SLIDE 4

Artificial Intelligence

Part 4: Will robots take over the world?

SLIDE 5

Learning goals

  • CT Impact: Students will be able to evaluate a job and say whether or not a computer is likely to be able to do that job in the next 20 years.
  • CT Impact: Students will be able to argue whether they believe that AI is a threat, using arguments that show an understanding of CT building blocks.

SLIDE 6

So… will computers and robots take over the world?

  • First, we have to decide what that even means.
  • Let’s start by looking at one thing that’s in the news:

jobs

SLIDE 7

Quick individual exercise

Write down three careers that you’re interested in – for the long run, not just a temporary job. We’re not asking for a deep commitment – just come up with things you’re interested in.

SLIDE 8

Chances that a robot will take over your job in the next 20 years

http://www.businessinsider.com/likelihood-of-your-job-being-taken-over-by-robots-2016-8

SLIDE 9

Do you want your career to be any of the jobs listed that are > 75% likely to be done by robots?

  • A. Yes
  • B. No

http://www.businessinsider.com/likelihood-of-your-job-being-taken-over-by-robots-2016-8

SLIDE 10

Group exercise

  • Go to http://www.npr.org/sections/money/2015/05/21/408234543/will-your-job-be-done-by-a-machine (on the lecture page, or search for “NPR robot job”)
  • Check out at least one job for each person in the group and see how likely they are to be taken over by robots (keep track of the percentages)
SLIDE 11

Clicker question

How many of the jobs that your group wanted were more than 75% likely to be taken over by robots?

  • A. All of them
  • B. More than half, but less than all
  • C. Half
  • D. Less than half, but more than none
  • E. None
SLIDE 12

Let’s look at the top job to be automated: Loan Officer. What does a loan officer do?

  1. Approve loans within specified limits
  2. Meet with applicants to obtain information and answer questions
  3. Analyze applicants’ financial status, credit, and property evaluations to determine the feasibility of granting loans
  4. Explain to customers the different types of loans and their terms
  5. Obtain and compile copies of loan applicants’ financial information
  6. Review and update credit and loan files
  7. Review loan agreements to ensure that they are complete and accurate according to policy
  8. Compute payment schedules
  9. Stay abreast of new types of loans

In a group, list what computers would have to do to automate this job. Divide the list into (a) things computers can do today and (b) things they can’t do yet.

http://job-descriptions.careerplanner.com/Loan-Officers.cfm
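Task 8, computing payment schedules, lands squarely in the “computers can do today” column. A minimal Python sketch of a fixed-rate amortization schedule (the loan amount, rate, and term below are made-up illustrative numbers, not from any slide):

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

def payment_schedule(principal, annual_rate, years):
    """Yield (month, interest, principal_paid, balance) for each payment."""
    payment = monthly_payment(principal, annual_rate, years)
    balance = principal
    for month in range(1, years * 12 + 1):
        interest = balance * (annual_rate / 12)  # interest on remaining balance
        principal_paid = payment - interest      # rest of the payment reduces debt
        balance -= principal_paid
        yield month, interest, principal_paid, balance

# Illustrative example: $100,000 at 6% annual interest over 30 years
print(f"Monthly payment: ${monthly_payment(100_000, 0.06, 30):.2f}")  # ≈ $599.55
```

The schedule is pure arithmetic with no judgment calls, which is exactly why this part of the job automates so easily; tasks 2 and 4, which require open-ended conversation, are the harder column.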

SLIDE 13

Okay, that one’s pretty clear cut. Let’s look a little further down

“Taxi drivers and chauffeurs” – 89% chance Obviously, this requires driving. In a group, list what computers have to be able to do in order to drive.

SLIDE 14

Driverless cars have come a long way in 15 years

https://www.youtube.com/watch?v=TsaES--OTzM

SLIDE 15

But it’s not all about technology

Group discussion: how safe would you feel riding in a driverless car? More safe, less safe, or equally safe compared with a regular car?

  • A. More safe
  • B. Equally safe
  • C. Less safe
SLIDE 16

What about a steering wheel?

Group discussion: would a requirement to have a licensed driver behind the steering wheel make you feel more safe, less safe, or the same? Why?

  • A. More safe
  • B. Equally safe
  • C. Less safe
SLIDE 17

What about a steering wheel?

Group discussion: would a requirement to have a licensed driver behind the steering wheel make you feel more safe, less safe, or the same? Why?

SLIDE 18

A big problem is liability

Consider driverless car accidents. It can be tricky to determine whether the party at fault is the car manufacturer, the software manufacturer, or the car’s owner.

http://www.npr.org/sections/alltechconsidered/2016/09/20/494765472/regulating-self-driving-cars-for-safety-even-before-theyre-built

SLIDE 19

A big problem is liability Case study: Who was to blame?

In July 2016, a 40-year-old man was killed while using Tesla’s Autopilot function on the highway when a truck pulled across the road to make a turn. The Autopilot (and presumably the driver) failed to see the truck, and the car slammed into it.

Some additional factors:

  • The driver was going 74 MPH in a 65 MPH zone. The driver manually set this speed.
  • The Autopilot is in “beta”: Tesla reminds drivers that it is only to supplement a fully-alert driver.
  • The car shut down the motor at the instant of the crash.
  • The weather was very sunny.
  • Europe has a law requiring a bar on the bottom of trucks that would likely have prevented the accident.

http://www.theregister.co.uk/2016/07/28/tesla_autopilot_death_driver_was_speeding/

SLIDE 20

A big problem is liability Case study: Who was to blame?

In July 2016, a 40-year-old man was killed while using Tesla’s Autopilot function on the highway when a truck pulled across the road to make a turn. The Autopilot (and presumably the driver) failed to see the truck, and the car slammed into it.

  • A. The driver
  • B. Tesla
  • C. Other

http://www.theregister.co.uk/2016/07/28/tesla_autopilot_death_driver_was_speeding/

SLIDE 21

Another issue: Adversarial attacks

"Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines.” – Ian Goodfellow and colleagues at OpenAI

https://blog.openai.com/adversarial-example-research/

SLIDE 22

Another issue: Adversarial attacks

"Adversarial examples have the potential to be dangerous. For example, attackers could target autonomous vehicles by using stickers or paint to create an adversarial stop sign that the vehicle would interpret as a ‘yield’ or other sign”

https://blog.openai.com/adversarial-example-research/

See: Papernot and colleagues: https://arxiv.org/pdf/1602.02697.pdf (Figure: original image vs. perturbed image)
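The idea above can be sketched numerically. This is not the attack from the Papernot paper; it is a toy fast-gradient-sign step against a hand-built logistic-regression "model", where all weights and inputs are made-up illustrative values. Each input feature is nudged in the direction that increases the model's loss, and a small perturbation flips the prediction:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict(w, b, x):
    """Model's probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast-gradient-sign perturbation for logistic regression.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (p - y) * w, so we step each feature by eps in the sign of
    that gradient to increase the loss.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Made-up model and input, originally classified as class 1
w, b = [2.0, -1.0, 0.5], 0.0
x = [1.0, 0.5, 1.0]
x_adv = fgsm(w, b, x, y=1, eps=0.7)

print(predict(w, b, x))      # > 0.5: class 1
print(predict(w, b, x_adv))  # < 0.5: prediction flipped
```

Real attacks apply the same signed-gradient step to the pixels of an image fed to a deep network, which is what produces the nearly invisible perturbations described in the quote.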

SLIDE 23

That does not, however, mean that the other jobs will not change

  • Let’s look at my job
  • In 2011, MOOCs (Massive Open Online Courses) came on the scene and were predicted to take over higher education within a decade

SLIDE 24

Group exercise: Have you taken a MOOC? Why or why not?

SLIDE 25

What happened to MOOCs?

MOOCs did not take over higher education, but some changes have persisted:

  • Online classes
  • Blended classes
  • More videos and such for other classes

http://www.chronicle.com/article/MOOCs-Are-Dead-Long-Live/237569?cid=at

SLIDE 26

Group exercise: How do you think university education will change in the next 20 years?

SLIDE 27

Does this mean there won’t be enough jobs?

All this has happened before

https://www.theguardian.com/business/2015/aug/17/technology-created-more-jobs-than-destroyed-140-years-data-census

Examples: launderers, agricultural workers

SLIDE 28

So what happened to all those people? What are they doing?

Examples: accountants, hairdressers

SLIDE 29

Clicker question

Of the three jobs that you were interested in, how many existed in 1871?

  • A. None
  • B. 1
  • C. 2
  • D. 3
SLIDE 30

Closing note on jobs…

  • Automation of jobs is not going to be smooth
  • But I don’t think we’ll all be out of jobs, either
SLIDE 31

To have jobs, we have to survive. Can we do that? What can robots do now?

Let’s first look at some of today’s robots from Boston Dynamics: https://www.youtube.com/watch?v=tf7IEVTDjng

Group exercise: Do these robots worry you?

  • A. I’m not worried
  • B. This worries me a bit
  • C. This worries me a lot
SLIDE 32

That’s where we are. Where might we go?

Next, let’s look at sci-fi portrayals of robots gone bad:

https://www.youtube.com/watch?v=ARJ8cAGm6JE

SLIDE 33

HAL from 2001: A Space Odyssey

Group discussion: HAL from 2001 seems to be a self-aware computer. How likely do you think it is that we’ll build self-aware computers? If you do think that we will, when will this happen?

  • A. In the next 10 years
  • B. Between 10 years and 25 years
  • C. More than 25 years
  • D. Never
SLIDE 34

HAL from 2001: A Space Odyssey

Group discussion: HAL from 2001 seems to be a self-aware computer. How likely do you think it is that we’ll build self-aware computers? If you do think that we will, when will this happen?

SLIDE 35

What do the experts think about a related question?

https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/

SLIDE 36

What do the experts think about a related question?

https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/

An important distinction: The experts were predicting intelligence on par with humans. They were not predicting self-awareness of computers.

SLIDE 37

With great power comes great responsibility

https://www.youtube.com/watch?v=4DQsG3TKQ0I

Regardless of whether computers can become self-aware, they certainly can do damage. What are some ethical rules that you think should be programmed into computers?

SLIDE 38

This is something that needs to be considered sooner rather than later

Consider a recent example: “After sniper fire struck 12 police officers at a rally in downtown Dallas, killing five, police cornered a single suspect in a parking garage. After a prolonged exchange of gunfire and a five-hour-long standoff, police made what experts say was an unprecedented decision: to send in a police robot, jury-rigged with a bomb.”

Is this okay? Why or why not?

  • A. Yes, it’s okay
  • B. No, it’s not okay

http://www.npr.org/sections/thetwo-way/2016/07/08/485262777/for-the-first-time-police-used-a-bomb-robot-to-kill

SLIDE 39

Computer scientists are working on those ethical considerations right now

There are several new initiatives including:

  • The One Hundred Year Study on Artificial Intelligence (led by Eric Horvitz at Stanford; two UBC faculty members are participating: Kevin Leyton-Brown and Alan Mackworth)

http://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html?hpw&rref=technology&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well

SLIDE 40

Computer scientists are working on those ethical considerations right now

There are several new initiatives including:

  • Efforts to develop a Standard of Ethics by several large tech companies (Amazon, Facebook, Google, IBM, Microsoft)

http://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html?hpw&rref=technology&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well

(Photo: Jeff Bezos of Amazon, Virginia Rometty of IBM, Satya Nadella of Microsoft, Sundar Pichai of Google, and Mark Zuckerberg of Facebook)

SLIDE 41

Computer scientists are working on those ethical considerations right now

“’There is a role for government and we respect that,’ said David Kenny, general manager for IBM’s Watson artificial intelligence division. The challenge, he said, is ‘a lot of times policies lag the technologies.’”

http://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html?hpw&rref=technology&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well

SLIDE 42

And last, but not least, do you think robots and computers will destroy the earth? Why or why not?

  • A. Yes
  • B. No
SLIDE 43

And last, but not least, do you think robots and computers will destroy the earth? Why or why not?

SLIDE 44

Food for thought

“Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness… The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need…”

http://www.wired.com/2014/10/future-of-artificial-intelligence/

SLIDE 45

Food for thought

“In the past, we would have said only a superintelligent AI could drive a car, or beat a human at Jeopardy! or chess. But once AI did each of those things, we considered that achievement obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.” – Kevin Kelly

http://www.wired.com/2014/10/future-of-artificial-intelligence/

SLIDE 46

Learning goals revisited

  • CT Impact: Students will be able to evaluate a job and say whether or not a computer is likely to be able to do that job in the next 20 years.
  • CT Impact: Students will be able to argue whether they believe that AI is a threat, using arguments that show an understanding of CT building blocks.