Functionalism

SLIDE 1

Functionalism What makes something a mental state (of a given kind) is not the substance that it is made of, but the overall role that it plays in a larger system of which it is a part. Computational Theory of Mind The human mind is literally a computer — a system for processing information.

SLIDE 2

Turing Machine
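
The slide only pictures a Turing machine, so here is a minimal sketch of one in Python (my illustration, not part of the deck): a finite-state controller with a read/write head on an unbounded tape. This machine appends one more 1 to a block of 1s.

    # A minimal Turing machine: rules map (state, symbol) to
    # (symbol to write, head move, next state).
    def run(tape, rules, state="scan", blank="_"):
        cells = dict(enumerate(tape))   # sparse, unbounded tape
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Append a 1 to a unary-encoded number: scan right past the 1s,
    # write a 1 on the first blank cell, then halt.
    rules = {
        ("scan", "1"): ("1", "R", "scan"),
        ("scan", "_"): ("1", "R", "halt"),
    }
    print(run("111", rules))  # 1111

Turing completeness (next slide) is the claim that a system can simulate any rule table of this kind.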

SLIDE 3

Turing Completeness

…a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any Turing machine.

—Wikipedia, Turing Completeness

SLIDE 4

Applied AI: The use of computers to do things that would previously have required human intelligence.

Weak AI: The use of computational models to study human thought by simulating it.

Strong AI: The claim that if we were to create a perfect simulation of a human mind, it would itself be a mind.
SLIDE 5

SLIDE 6

SLIDE 7

SLIDE 8

SLIDE 9

The Imitation Game
 a.k.a. The Turing Test

SLIDE 10

The Imitation Game
 a.k.a. The Turing Test

SLIDE 11

Searle’s Chinese Room Thought Experiment

SLIDE 12

How Good Were Turing’s Predictions? Turing thought (in 1950) that we would have computers with a storage capacity of 10⁹ bits by the year 2000. That’s one gigabit, which is about 125 megabytes. Turing overestimated how long it would take for computers to increase in storage capacity. There were 125+MB storage drives already in the 1960s. And lots of people had personal computers with this much storage by the late 1990s. Some have speculated that Google’s servers have a storage capacity of 15 exabytes.* That’s equivalent to:

 15,000 petabytes
 15,000,000 terabytes
 15,000,000,000 gigabytes
 120,000,000,000 gigabits

So that’s 120 billion times more storage than Turing estimated would be needed for a computer to win at the imitation game 70% of the time.

*https://what-if.xkcd.com/63/
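
The unit conversions on this slide are easy to verify mechanically; a quick check in Python (assuming decimal units, so 1 EB = 10^18 bytes):

    # 15 exabytes in smaller decimal units, then compared with
    # Turing's predicted 10^9 bits (one gigabit).
    google_bytes = 15 * 10**18
    print(google_bytes // 10**15)   # 15000 petabytes
    print(google_bytes // 10**12)   # 15000000 terabytes
    print(google_bytes // 10**9)    # 15000000000 gigabytes

    google_bits = google_bytes * 8
    turing_bits = 10**9             # Turing's year-2000 prediction
    print(google_bits // turing_bits)  # 120000000000, i.e. 120 billion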

SLIDE 13

How Good Were Turing’s Predictions? A question: what is the storage capacity of the human brain? The short answer: we don’t know, because we don’t fully understand how information is stored in the human brain. But a recent educated guess is that the human brain has a storage capacity of between 1 and 2.5 petabytes. Let’s suppose, for the sake of argument, that the larger of those two numbers is right. In that case, human brains have this much storage:

 2.5 PB
 2,500 TB
 2,500,000 GB
 20,000,000 gigabits

In that case, we have 20 million times as much storage as Turing’s hypothetical computer from the year 2000. But each human has only 1/6000 as much storage as Google’s servers are speculated to have.

Bartol Jr. et al. (2015): ‘Nanoconnectomic Upper Bound on the Variability of Synaptic Plasticity’, eLife 4:e10778. https://elifesciences.org/content/4/e10778
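
And the same check for the brain figures (again assuming decimal units):

    # 2.5 PB (the high-end brain estimate) vs. Turing's 10^9 bits
    # and vs. the speculated 15 EB Google figure.
    brain_bits = 25 * 10**14 * 8      # 2.5 petabytes in bits
    print(brain_bits // 10**9)        # 20000000 gigabits

    turing_bits = 10**9
    print(brain_bits // turing_bits)  # 20000000: 20 million x Turing's estimate

    google_bits = 15 * 10**18 * 8
    print(google_bits // brain_bits)  # 6000: each human has 1/6000 of Google's storage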

SLIDE 14

How Good Were Turing’s Predictions? Suppose all this wild speculation is correct. If Google has more storage than a human, why can’t Google win at the imitation game?

SLIDE 15

SLIDE 16

SLIDE 17

SLIDE 18

SLIDE 19

[image: 1 1 1 1 1 1 1 1]

SLIDE 20

cute

SLIDE 21

not cute

SLIDE 22

SLIDE 23

SLIDE 24

The Language-of-Thought Hypothesis

  • The mind is literally an information-processing device — a piece of software running on the brain.
  • Beliefs, desires, etc. are literally tokens of sentences that our mind uses to represent, store, and compute information.
  • Depending on “where” in the system these sentences are tokened, they count as beliefs, desires, etc.
  • (Fodor often talks about the “desire box” and the “belief box”, etc. Kukla and Walmsley talk about “bins” instead.)
  • These “boxes” are defined functionally, not spatially, as the sketch below illustrates.
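
A loose illustration of the “functionally, not spatially” point (my sketch, not Fodor’s own formalism; the names Mind, belief_box, and desire_box are hypothetical labels for roles):

    # A sentence counts as a belief or a desire depending only on the
    # role it plays in the system, not on where it is physically stored.
    class Mind:
        def __init__(self):
            self.belief_box = set()  # sentences the system treats as true
            self.desire_box = set()  # sentences the system acts to make true

        def act(self):
            # Crude practical reasoning: desiring q while believing
            # "q only if I study" issues the action "I study".
            for q in self.desire_box:
                if f"{q} only if I study" in self.belief_box:
                    return "I study"

    m = Mind()
    m.desire_box.add("I get a good grade")
    m.belief_box.add("I get a good grade only if I study")
    print(m.act())  # I study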
SLIDE 25

How is this Language Encoded in the Brain?

belief that Jay loves Bey

cat → 099 097 116 (decimal ASCII) → 01100011 01100001 01110100 (binary)
⋮
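
The slide’s encoding of “cat” can be reproduced directly (ASCII here is just a stand-in for whatever neural code actually does the job):

    # "cat" as decimal ASCII code points and as 8-bit binary strings.
    word = "cat"
    print([ord(c) for c in word])                 # [99, 97, 116]
    print([format(ord(c), "08b") for c in word])  # ['01100011', '01100001', '01110100']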

SLIDE 26

Turing Machine

SLIDE 27

The Intentional Stance

  • A system has whichever beliefs and desires (etc.) it would make the most sense to interpret it as having.
  • Beliefs and desires are real because interpreters pick up on real patterns of thought and behavior when ascribing them.

SLIDE 28

The Intentional Stance

  • A belief needn’t be identical to any particular neural state. Beliefs aren’t (always) “sentences written in the brain”.
  • They are holistic properties of systems.
  • It doesn’t make sense to ask how many beliefs someone has.
  • There’s nothing really wrong with saying that groups, thermometers, Google, etc., have beliefs.
  • It’s just a question of how useful it would be to do so, and how genuine the pattern being picked up on is.

SLIDE 29

Applied AI: The use of computers to do things that would previously have required human intelligence.

Weak AI: The use of computational models to study human thought by simulating it.

Strong AI: The claim that if we were to create a perfect simulation of a human mind, it would itself be a mind.
SLIDE 30

SLIDE 31

Question Why think that thoughts are like sentences?

SLIDE 32

Analogy 1: Productivity and Recursivity

If you speak a natural language, you can use and understand infinitely many sentences:

  • John loves his mother.
  • John loves his mother’s mother.
  • John loves his mother’s mother’s mother. […]

Similarly, you can think an infinite number of thoughts.

  • the thought that John loves his mother.
  • the thought that John loves his mother’s mother.
  • the thought that John loves his mother’s mother’s mother. […]
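
Productivity is exactly what recursion buys: a finite rule generates unboundedly many distinct sentences. A toy sketch:

    # One recursive rule: NP -> "his mother" | NP + "'s mother".
    def sentence(depth):
        np = "his mother" + "'s mother" * depth
        return f"John loves {np}."

    for d in range(3):
        print(sentence(d))
    # John loves his mother.
    # John loves his mother's mother.
    # John loves his mother's mother's mother.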
SLIDE 33

Analogy 2: Systematicity

If you understand this sentence:

  • Jay loves Bey.

Then you also understand this sentence:

  • Bey loves Jay.

Similarly, if you can have this thought:

  • The belief that Jay loves Bey

Then you can also have this thought:

  • The belief that Bey loves Jay
SLIDE 34

Parts and Structure

[S [NP Jay] [VP [V loves] [NP Bey]]]
[S [NP Bey] [VP [V loves] [NP Jay]]]
[S [NP Bey] [VP [V loves] [NP Blue Ivy]]]
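
Represented concretely, each tree is one S template with different noun phrases slotted in, which is also why the systematicity of slide 33 comes for free; a sketch:

    # Constituent structure as nested tuples: one template, varying parts.
    def s(subject, obj):
        return ("S", ("NP", subject), ("VP", ("V", "loves"), ("NP", obj)))

    print(s("Jay", "Bey"))
    print(s("Bey", "Jay"))       # same parts, different arrangement
    print(s("Bey", "Blue Ivy"))  # new part, same structure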

SLIDE 35

Analogy 3: Vocabulary and Conceptual Repertoire

Socrates didn’t have the following words in his vocabulary:

  • dog
  • therefore

So, he couldn’t understand sentences that contained those words. Socrates didn’t possess the following concepts:

  • carburetor
  • cell phone

So, he couldn’t have thoughts about things of these kinds.

SLIDE 36

Analogy 4: Logical Relations

In a language, logical relationships depend on internal sentence structure. Consider the following argument (sketched in code below):

  • If Fodor is right, we have computers in our heads.
  • Fodor is right.
  • Therefore: we have computers in our heads.
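
The inference goes through in virtue of form alone: anything of the shape “if p then q” plus “p” yields “q”. A structure-sensitive sketch:

    # Modus ponens over structured representations: the rule inspects
    # only the shape ("if", p, q), never the subject matter.
    def modus_ponens(conditional, premise):
        tag, p, q = conditional
        if tag == "if" and premise == p:
            return q

    p = "Fodor is right"
    q = "we have computers in our heads"
    print(modus_ponens(("if", p, q), p))  # we have computers in our heads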
SLIDE 37

Analogy 4: Logical Relations

Practical syllogism is sensitive to the structure of our thoughts in the same way:

  • belief: that I will get a good grade on the test only if I will study
  • desire: that I will get a good grade on the test
  • executive control: I will study

SLIDE 38

The Explanation: Compositionality

The meaning of a sentence is systematically determined by the meanings of its basic parts (words/morphemes), together with the syntactic structure in which they’re arranged. The propositional content of a thought is determined by the contents of its parts (concepts) together with the way in which the thought is structured.
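
A toy compositional semantics makes the claim concrete (my sketch; a real fragment would be far richer): the meaning of the whole is computed from the meanings of the parts plus how they combine.

    # Leaf meanings live in a lexicon; the meaning of a branching node
    # is obtained by applying one daughter's meaning to the other's.
    lexicon = {
        "Jay": "JAY",
        "Bey": "BEY",
        "loves": lambda obj: lambda subj: f"LOVES({subj}, {obj})",
    }

    def meaning(tree):
        if isinstance(tree, str):
            return lexicon[tree]
        f, a = meaning(tree[0]), meaning(tree[1])
        return f(a) if callable(f) else a(f)

    # "Jay loves Bey" parsed as (Jay, (loves, Bey))
    print(meaning(("Jay", ("loves", "Bey"))))  # LOVES(JAY, BEY)
    print(meaning(("Bey", ("loves", "Jay"))))  # LOVES(BEY, JAY)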