Representations in Deep Neural Nets
Paul Humphreys July 10 2018
"Deep learning methods: those that are formed by the composition of multiple non-linear transformations, with the goal of yielding more abstract -- and ultimately more useful -- representations" (Bengio et al 2014, p. 1)

"Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. [...] The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance." (LeCun et al 2015, p. 439)
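The "composition of multiple non-linear transformations" in the Bengio et al. quote can be sketched concretely. The layer sizes, random weights, and the choice of ReLU below are illustrative assumptions, not details taken from the quoted papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Elementwise non-linearity; without it the stacked layers
    would collapse into a single linear map."""
    return np.maximum(x, 0.0)

# Three layers: each output feeds the next, so features at layer k
# are compositions of features at layer k-1 (edges -> motifs -> parts).
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((16, 4))]

def deep_net(x):
    for W in weights:
        x = relu(x @ W)
    return x

features = deep_net(rng.standard_normal(8))  # a 4-dimensional "abstract" representation
```

Each pass through `relu(x @ W)` is one non-linear transformation; the net is their composition.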
Definition: (Philosophy) A representation is compositional if what the constituent elements represent remains invariant when the elements are embedded in more complex representations, and what the complex representation represents is a function of the invariant representations of its constituents (and the structure of the complex representation). If a representation R is compositional, then providing an intentional interpretation for the basic (primitive) representations will provide an interpretation for the compound representation R.
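The definition can be made concrete with a toy compositional language. The lexicon and connectives below are invented purely for illustration:

```python
# Toy compositional semantics: each primitive symbol has a fixed
# interpretation, and the interpretation of a compound expression is a
# function of its constituents' interpretations plus its structure.
lexicon = {"rainy": False, "cold": True}  # primitive interpretations

def interpret(expr):
    if isinstance(expr, str):        # primitive: its meaning is invariant
        return lexicon[expr]
    op, *args = expr                 # compound: structure + constituents
    values = [interpret(a) for a in args]
    if op == "not":
        return not values[0]
    if op == "and":
        return all(values)
    raise ValueError(f"unknown connective: {op}")

# "cold" contributes the same interpretation on its own and when embedded
# in a larger expression, so interpreting the primitives suffices to fix
# the interpretation of every compound built from them.
compound = ("and", "cold", ("not", "rainy"))
```

Here `interpret` witnesses the definition: the meaning of `compound` is a function only of the invariant meanings of `"cold"` and `"rainy"` and of the expression's structure.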
One use of the term 'abstraction' in DNNs is that minor variations between internal representations of the same type of object, such as small variations in edge positions, are suppressed. It can also involve amplifying the features that are important. An example of this use: "For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations" (LeCun et al 2015, p. 436).

This is similar to one OED definition of 'abstract' (thanks to Chip Levy for the source): "Considered or understood without reference to particular instances or concrete examples: representing the intrinsic, general properties of something in isolation from the peculiar properties of any specific instance or example."
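One standard mechanism behind this sense of abstraction is max-pooling, which keeps only the strongest response in each local window. A minimal sketch in numpy, with invented 1-D "edge detector" signals:

```python
import numpy as np

def max_pool(signal, width=4):
    """Keep only the largest response in each window of `width` inputs."""
    trimmed = signal[: len(signal) // width * width]
    return trimmed.reshape(-1, width).max(axis=1)

# Two detector responses to the same edge, shifted by one position.
edge_a = np.array([0, 0, 1, 0, 0, 0, 0, 0], dtype=float)
edge_b = np.array([0, 0, 0, 1, 0, 0, 0, 0], dtype=float)

# After pooling, the small positional difference is suppressed: both
# inputs map to the same, more abstract representation [1., 0.].
pooled_a, pooled_b = max_pool(edge_a), max_pool(edge_b)
```

The pooled representation "abstracts" in exactly the quoted sense: it discards the irrelevant variation (the one-pixel shift) while preserving the discriminative feature (an edge is present in the first window).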
Some representations present their states in a way that is open to explicit scrutiny, analysis, interpretation, and understanding by humans, and transitions between those states are represented by rules that have similar properties. Examples: linguistic representations in the humanities, most mathematical representations in the sciences.
Contrasting example: the representations (if indeed there are any) in many deep neural nets.
The right-hand image is an inverse Radon transform of the left-hand sinogram. Is representational content always preserved under mathematical and computational transformations?
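A sinogram records line-integral projections of an image at many angles; the inverse Radon transform (in practice, filtered back-projection, e.g. `skimage.transform.iradon`) recovers the image from them. A toy numpy-only sketch with just two angles, where the projections reduce to row and column sums:

```python
import numpy as np

def radon_two_angles(image):
    """Toy parallel-beam Radon transform sampled at 0 and 90 degrees.

    At 0 degrees the detector records sums down the columns; at 90
    degrees, sums across the rows. A real sinogram samples many angles,
    and reconstruction inverts all of those projections at once.
    """
    proj_0 = image.sum(axis=0)   # project down the columns
    proj_90 = image.sum(axis=1)  # project across the rows
    return np.stack([proj_0, proj_90])

# A 4x4 "image" with a single bright pixel at row 1, column 2.
img = np.zeros((4, 4))
img[1, 2] = 1.0
sino = radon_two_angles(img)
# The bright pixel appears at detector position 2 in the 0-degree
# projection and position 1 in the 90-degree projection: the sinogram
# re-represents the pixel's location in projection coordinates.
```

The point relevant to the slide's question: the sinogram and the image are mathematically intertranslatable, yet it is a further, philosophical question whether the sinogram *represents* the patient's anatomy in the way the reconstructed image does.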
From Fatescapes by Pavel Smejkal
What transformations are permissible in arriving at an effective representational process? Possible answer: any transformation that preserves the referential content of the initial representation is permissible.
Conjecture: the content of the fundamental (primitive) representations in DNNs is determined by the training process. When the training process is supervised, there will be a contribution to the content from human labeling; a net trained by ab initio reinforcement learning contains very little interpretative content.
An instrument has the knowledge that F if and only if the instrument contains a representation of the entirety of F, the representation holds of the target, and a reliable process forms the representation, where F is a fact and a reliable representation-producing process is [...]