From Turing to deep learning: explaining AI through neurons and symbols. Luis Lamb, 8 May 2017, Dagstuhl, DE.




SLIDE 1

Chapter title

Luis Lamb – 8 May 2017, Dagstuhl, DE. From Turing to deep learning: explaining AI through neurons and symbols

SLIDE 2

Summary

This is essentially the same lecture I presented at City University in London last year. However, the share of German speakers was higher in London than here. So, I will speak in English.

SLIDE 3

Summary

This is essentially the same lecture I presented at City last year, however...

SLIDE 4

Summary

... the percentage of German speakers in London was larger than here. So, I will speak in English.

Mark Rylance: “... does it help?”

SLIDE 5

Summary

This is essentially the same lecture I presented at City University in London last year; however, the percentage of German speakers in London was higher than here. So, I will speak in English. Google already knows that City is now part of the University of London!

What is "essential" there... well, not this word, it seems... Cognitive temporal reasoning rules... AI rules...

SLIDE 6

Summary

  • The cognitive revolution
  • A bit of history
  • Cognition, AI and Computer Science
  • Learning to Reason
  • Social Problem-Solving
  • Afterword

SLIDE 7

Porto Alegre, Rio Grande do Sul, Brazil

  • 11.2 million people
  • Roughly the size of Britain
  • Life exp. 76.9 (2010)
  • HDI 0.746 (76th)
  • 4th GDP in Brazil ~100B US$
  • Capital of the state
  • 1.47 million people
  • 11th largest city in Brazil
  • High tech industry (3rd/4th in BR)
  • Three large universities


SLIDE 8

Porto Alegre, Rio Grande do Sul

SLIDE 9

UFRGS – Central Campus

SLIDE 10

Universidade Federal do Rio Grande do Sul (UFRGS)

  • Founded in 1934 (first schools from 1895: Pharmacy, Engineering, Medicine, Law)
  • 86 undergraduate programs/80 PhD programs
  • Top five university in Brazil, several rankings
  • 870+ Research Groups – registered in the Brazilian Research Council CNPq.
  • Approximately 2,800 faculty members, 2,500 PhDs.
  • UFRGS Students: ~27 K (undergraduate) and ~9 K (postgraduate)
  • >663 CNPq Fellowships (Advanced research fellowships)
  • Ranked best university in Brazil, by the Ministry of Education, 2013-2015.
SLIDE 11

Institute of Informatics (INF-UFRGS)

  • 73 full-time faculty / 2 part-time; 55 supervisors in the graduate program
  • 5 new competitive vacancies in 2017
  • Young faculty: >20 hired in the last decade
  • Faculty PhD backgrounds: Brazil (26 - 4 universities), France (14/5), Germany (8/5), UK (6/4), Scotland* (1), USA (4/3), Canada (2/2), Belgium (2/2), Sweden (1), Switzerland (2), Portugal (2/2)

  • PostDocs: 15 US, 8 FR, 6 UK, 2 CAN, 3+ DE, IT, ND, BE, DN.

Computer Science & CSEng (BSc)

  • Top rankings, 900+ students

(Post)graduate Programme in CS: Top 5 program in CS in Brazil. Currently 300 students (MSc and PhD); has graduated over 250 PhDs and 1,400 MScs.

SLIDE 12

Universities: institutions of the (modern) world

  • First Inca Emperor: Manco Capac, around 1200. Details of many emperors were lost during the Spanish occupation.

  • Manco Capac (Manku Qhapaq, 1100-?)
  • ...
  • Huayna Capac (Wayna Qhapaq, 1493 – 1527)
  • Huáscar (Waskhar, 1527 – 1532)
  • Atahualpa (Ataw Wallpa, 1532 – 1533)
  • Tupac Hualpa (Topa Huallpa, 1533 – 1535), puppet emperor, Spanish rule.

University of Oxford: 1096... University of Cambridge: 1209... British Empire: ~1497 – 1945. Univ. of St Petersburg: 1724... Harvard: 1636...

SLIDE 13

SLIDE 14

SLIDE 15

Source: Royal Society report, April 2017: "Machine learning: the power and promise of computers that learn by example".

These are all related to NSC

SLIDE 16

Source: Royal Society report, April 2017: "Machine learning: the power and promise of computers that learn by example". ISBN: 978-1-78252-259-1

This is directly related to the topics of this seminar

SLIDE 17

The present

  • ... but it’s been a long journey...
SLIDE 18

The Cognitive Revolution

  • Yuval Noah Harari: Sapiens: A Brief History of Humankind. Vintage, London, 2014.

SLIDE 19

The Cognitive Revolution – a timeline (Harari)

  • 13.5 billion years ago: matter/energy appears; atoms, molecules.
  • 4.5 billion y.a.: Earth is formed
  • 3.8 billion: Organisms
  • 6 million: last common ancestor of man/chimpanzee
  • 2.5 million: genus Homo – Africa
  • 2 million: humans go to Eurasia / evolution of different human species
  • 500k: Neanderthals evolve in Middle East/Europe
  • 300k: fire
  • 200k: Homo sapiens evolve in Africa
  • 70k: Cognitive revolution: fictive language; Homo sapiens spread out of Africa
  • 45k: Homo sapiens in Australia: extinction of local megafauna
  • 30k: Neanderthals extinct.
  • 16k: Homo sapiens in America: extinction of local megafauna
  • 13k: sapiens rule the world
  • 10k: Agriculture; domestication; permanent settlements
  • 5k: first kingdoms, script, money; polytheism.
  • 2.5k: coinage (money); Persians; Buddhism.
  • 2k: Christianity; Roman Empire; Han empire in China.
  • 500: SCIENTIFIC REVOLUTION
  • 200: Industrial Revolution
SLIDE 20

§ Architect of the Scientific Revolution. § QC, Elizabeth I; "Lord Chancellor", James I. § Scientific method; empiricism. § Science as an innovation activity to improve life. § Helped to unveil human ignorance. § Legal system influenced Le Code Napoléon (Code civil des français); innovations: freedom and merit.

Sir Francis Bacon (1561-1626)

“Knowledge is power”, 1597.

SLIDE 21

Innovation is Power

§ "First, I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish." JFK, 25 May 1961. § "That's one small step for [a] man, one giant leap for mankind." Neil Armstrong, 20 Jul 1969.

"Discoveries typically aren't made by people trying to solve a problem, or invent something. Major discoveries are not made in the lab. They are made in the minds of scientists. Scientific research is what you do when you don't know what you are doing." – Daniel Zajfman, President, Weizmann Institute of Science

SLIDE 22

Cognition is Power

§ Turing, Simon, Newell: AI, cognition, ambition. § Watson/Deep Blue: gigantic amounts of computing (reasoning on) data; 1997: Garry Kasparov walks away from Deep Blue: "I could feel – I could smell – a new kind of intelligence across the table... Although I think I did see some signs of intelligence, it's a weird kind, an inefficient, inflexible kind that made me feel I have a few years left." Garry Kasparov, 1997. § Watson (2011): wins Jeopardy!, beats the best human players. § Question answering (QA) + several AI techniques: NLP, KR, ML, IR and other acronyms. § Computes 500 GB/s: 1 million books/s.

SLIDE 23

Cognition is Power

SLIDE 24

Cognition is Power

But, is it really true, in the “real world”?

SLIDE 25

Reality check: The IT Business World Today, 2016. Real-world trends (according to Gartner, McKinsey, Wired, MIT TR):

  • 1. Transformation of health care / internet DNA / rise of cognitive therapy
  • 2. Computing Everywhere / integrated digital-physical experiences
  • 3. The Internet of Things / the internet of all things
  • 4. 3D Printing / 4D printing
  • 5. Advanced, Pervasive and Invisible Analytics / big data, advanced analytics
  • 6. Context-Rich Systems / ambient intelligence
  • 7. Smart Machines / car-to-car communication
  • 8. Risk-Based Security and Self-Protection / personal darknets in the spotlight
  • 9. The next three billion digital citizens / mobilizing the next four billion
  • 10. Apple Pay / e-currencies
  • 11. Cloud Client Computing / realizing anything as a service
SLIDE 26

New York Times: Aug 17th, 2015

"Instead of turning the planet into a "Terminator"-like battlefield, machines may be able to pierce the fog of war better than humans can, offering at least the possibility of a more humane and secure world. We deserve a chance to find out."

Quote from Jerry Kaplan, who teaches about the ethics and impact of artificial intelligence at Stanford and is the author of "Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence."

SLIDE 27

How do trends and technologies fare?

"Successful organizations face considerable difficulty in maintaining their strength and might. Of the 25 largest industrial corporations in 1900, only two have remained in that select company. The rest have failed, been merged out of existence, or simply fallen in size. Figures like these help to remind us that corporations are expendable and that success – at best – is an impermanent achievement which can always slip out of hand."

Thomas J. Watson, Jr., Chairman, IBM 1963.

SLIDE 28

Cognitive revolutions?

SLIDE 29

Cognitive Computation

SLIDE 30

Enters Turing

The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves. Alan Turing, 15 May 1951, "Can Digital Computers Think?", BBC.

SLIDE 31

Enters Turing

Although the class of computable numbers is so great, and in many ways similar to the class of real numbers, it is nevertheless enumerable. In §8 I examine certain arguments which would seem to prove the contrary. By the correct application of one of these arguments, conclusions are reached which are superficially similar to those of Gödel [1]. These results have valuable applications. In particular, it is shown (§11) that the Hilbertian Entscheidungsproblem can have no solution. In a recent paper Alonzo Church [2] has introduced an idea of "effective calculability", which is equivalent to my "computability", but is very differently defined. Church also reaches similar conclusions about the Entscheidungsproblem [3]. The proof of equivalence between "computability" and "effective calculability" is outlined in an appendix to the present paper.

[1] Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I. Monatshefte für Mathematik und Physik, 38 (1931): 173-198. [2] An unsolvable problem of elementary number theory. Amer. J. Math., 58 (1936): 345-363. [3] A note on the Entscheidungsproblem. J. of Symbolic Logic 1 (1936): 40-41.

SLIDE 32

Can machines think?

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words,... but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used, it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd....

"We can only see a short distance ahead, but we can see plenty there that needs to be done." A.M. Turing in "Computing Machinery and Intelligence", Mind, 1950.

SLIDE 33

Can machines think?

How much has been done?

SLIDE 34

Can machines learn and reason?

Data from DBLP Vis

SLIDE 35

Is learning relevant?

Data from DBLP Vis

SLIDE 36

Neural learning?

Data from DBLP Vis

SLIDE 37

Is reasoning relevant?

Data from DBLP Vis

SLIDE 38

The New York Times, June 3, 2015:

"In the 1960s, when John McCarthy, the scientist who coined the term "artificial intelligence," ... he claimed that building a working artificial intelligence system would take a decade. When that did not happen, the field went through periods of decline in the 1970s and 1980s, which have since been described as "A.I. winters." Now rapid progress in a hot artificial intelligence field known as "deep learning" has touched off a computing arms race among powerful companies like Facebook, Google, IBM, Microsoft and Baidu, and scientists at each company have trumpeted improved performance in vision and speech recognition." "In the past year, technologists and scientists like Elon Musk, founder of Tesla; Stephen Hawking, the celebrated physicist; and Bill Gates, co-founder of Microsoft, have warned that the potential emergence of self-aware computing systems might prove to be an existential threat to humanity."

SLIDE 39

What happened from 2007-2017?

At AAAI-07, arguably the flagship, top AI conference, there were virtually no papers which made any kind of use of Artificial Neural Networks.

(By the way: there was a paper on Neural-Symbolic Learning: Lamb, Borges & d'Avila Garcez.)

So, how did deep learning, a.k.a. deep artificial neural networks, develop?

SLIDE 40

What happened from 2007-2017?

We stress that it's impossible to give a fair review of the literature at this time.

Long short-term memory (LSTM): Sepp Hochreiter & Jürgen Schmidhuber: Long Short-Term Memory. Neural Computation, 9(8): 1735–1780, 1997. And many others...

Imagenet classification with deep convolutional neural networks. A. Krizhevsky, I. Sutskever, G.E. Hinton, NIPS 2012.

A fast learning algorithm for deep belief nets. G.E. Hinton, S. Osindero, Y.W. Teh. Neural Computation 18(7): 1527-1554, 2006.

LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep Learning". Nature 521: 436–444.

Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8): 1798–1828.

Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks 61: 85–117.
SLIDE 41

Can machines think?

We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. A.M. Turing: "Computing Machinery and Intelligence", Mind 59: 433–460, 1950.

SLIDE 42

Evolution of Computer Science: from 1950

  • First computers were hard to programme: machine language is highly non-intuitive and error-prone, except perhaps for Turing or von Neumann.
  • There was a huge demand for fast "calculating" machines, especially in the defense industry.
  • Turing, Zuse, von Neumann, Wilkes, Flowers, Mauchly: directly (or indirectly) involved in defense projects.
  • Much of Turing's work was classified until the 1970s or even 2012 ("Report on the application of probability to cryptography" and "Paper on statistics of repetitions").
  • The above requirements/facts influenced computing research.

Slides by Luis C. Lamb

SLIDE 43

Allen Newell and Herbert Simon

  • Numerous contributions to AI, problem solving, psychology of cognition, decision making, and list processing before LISP, since the 1950s.
  • Simon is the only person to have won both the Turing Award and the Nobel Prize in Economic Sciences.
  • AI programs (Newell, Simon, Shaw): the Logic Theorist (1956: proved 38 theorems of Whitehead & Russell's Principia Mathematica).
  • General Problem Solver (1957): theorems, geometric problems and chess playing. Separated knowledge from strategy.

http://diva.library.cmu.edu/Newell/biography.html

SLIDE 44

Recent AI Turing Award Winners

  • 1994: Edward Feigenbaum (Stanford) and Raj Reddy (CMU): large-scale, commercial AI systems, expert systems, robotics.
  • 2010: Leslie Valiant (Harvard): PAC learning; computational complexity; parallel/distributed computing.
  • 2011: Judea Pearl (UCLA): probabilistic/Bayesian reasoning and learning in AI.

Photos from the Association for Computing Machinery: http://amturing.acm.org/

SLIDE 45

A Small Step: Neural-symbolic Computation

Combines logical reasoning and neural learning: Computer Science Logic + Neural Computation. Neural-symbolic computation: learning from experience and reasoning about what has been learned from an uncertain environment in a computationally efficient way. (Joint work with Artur Garcez)

SLIDE 46

Neural-symbolic computation

Motivation: there is a need for systems that:

  • Learn from changes in the environment.
  • Reason about commonsense knowledge.
  • Integrate reasoning (computation) and learning: we combine the logical nature of reasoning and the statistical nature of learning, see Valiant.
  • Reasoning is hard:

Gertrude Stein, "The Autobiography of Alice B. Toklas": ... that only three times in my life have I met a genius ... the three geniuses of whom I wish to speak are Gertrude Stein, Pablo Picasso and Alfred Whitehead.

This is joint work with Artur Garcez and former students.

SLIDE 47

Neural-symbolic computation

"The aim here is to identify a way of looking at and manipulating commonsense knowledge that is consistent with and can support what we consider to be the two most fundamental aspects of intelligent cognitive behaviour: the ability to learn from experience, and the ability to reason from what has been learned. We are therefore seeking a semantics of knowledge that can computationally support the basic phenomena of intelligent behaviour."

– Leslie G. Valiant, Journal of the ACM, 2003.

Understanding cognition/the brain: biggest scientific question of the 21st century?

SLIDE 48

Neural-symbolic computation

  • Our approach is foundational, i.e. based on logical formalization and machine learning.
  • We use logic to represent the reasoning process.
  • We integrate learning and reasoning (see Khardon/Roth).
  • "We need some language for describing the alternative algorithms that a network of neurons may be implementing." (Valiant, CACM 2011)

Applications:

  • training/assessment in simulators (TNO) – Leo de Penning
  • verification and adaptation of software models (ICSE, ASE, CACM 2015 – cited by Alrajeh et al.)
  • visual intelligence, robotics, bioinformatics, semantic web.

SLIDE 49

A History of Neural-Symbolic Computation

by A. d'Avila Garcez – AAAI Spring Symposium 2015

1988: P. Smolensky, On the proper treatment of connectionism, BBS 11(1); J. McCarthy (commentary), Epistemological challenges for connectionism.
1990: G. Hinton, Preface to the special issue on connectionist symbol processing, Artificial Intelligence 46, 1-4.
1993: L. Shastri and V. Ajjanagadde: From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony, BBS 16. (SHRUTI)
1994: G. Towell, J. Shavlik: Knowledge-Based Artificial Neural Networks. Artif. Intell. 70(1-2): 119-165. (KBANN)
1994: S. Hoelldobler & Y. Kalinke: Toward a New Massively Parallel Computational Model for Logic Programming; Workshop on Combining Symbolic and Connectionist Processing, ECAI.
1997: M. Craven, J. Shavlik, Understanding Time-Series Networks: A Case Study in Rule Extraction. Int. J. Neural Syst. 8(4): 373-384.
2001: A. Browne, R. Sun, Connectionist inference models. Neural Networks 14(10): 1331-1355.
2002: A. d'Avila Garcez, K. Broda and D. Gabbay, Neural-Symbolic Learning Systems: Foundations and Applications, Springer. (CILP)

SLIDE 50

A history of Neural-Symbolic Computation

2006: A. d'Avila Garcez, L. Lamb, A Connectionist Computational Model for Epistemic and Temporal Reasoning. Neural Computation 18(7): 1711-1738.
2007: A. d'Avila Garcez, L.C. Lamb, D.M. Gabbay, Connectionist Modal Logic. Theoretical Computer Science, 371: 34-53.
2007: S. Bader, P. Hitzler, S. Hölldobler, A. Witzel, A Fully Connectionist Model Generator for Covered First-Order Logic Programs. IJCAI 2007: 666-671.
2007: Y. Bengio, Y. LeCun, Scaling learning algorithms towards AI. Large-scale kernel machines 34(5). (Representation Learning)
2009: A. d'Avila Garcez, L. Lamb, D. Gabbay, Neural-Symbolic Cognitive Reasoning. Cognitive Technologies, Springer.
2010: D. Endres, P. Foldiak, U. Priss, An application of formal concept analysis to semantic neural decoding. Annals of Mathematics and Artificial Intelligence 57(3-4): 233-248.
2014: M. Franca, G. Zaverucha, A. d'Avila Garcez, Fast relational learning using bottom clause propositionalization with artificial neural networks. Machine Learning 94(1): 81-104. (CILP++)

SLIDE 51

A History of Neural-Symbolic Computation

The CILP System: Artur Garcez & Gerson Zaverucha, 1999.

THEOREM 1: For any logic program P there exists a neural network N such that N computes P. Garcez, Zaverucha, Applied Intelligence, 1999.
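A minimal, illustrative sketch of the idea behind this theorem (not the original CILP construction), assuming a propositional, acyclic program: each rule acts as a hidden "AND" unit over its body literals, each atom as an output "OR" unit over its rules, and iterating the resulting network computes the program's T_P operator to a fixpoint.

```python
# Illustrative sketch: each rule of a propositional logic program becomes a
# hidden "AND" unit over its body literals, and each atom an output "OR" unit
# over its rules. Iterating the network computes the program's T_P operator.

def make_network(rules):
    """rules: list of (head, positive_body, negative_body) triples."""
    def step(state):
        new_state = set()
        for head, pos, neg in rules:
            # hidden unit fires iff all positive body atoms are on
            # and no negative body atom is on (threshold behaviour)
            if all(p in state for p in pos) and not any(n in state for n in neg):
                new_state.add(head)  # output unit ORs the firing rules
        return new_state
    return step

# Example program: a <- b;  b <- ;  c <- a, not d.
program = [("a", ["b"], []), ("b", [], []), ("c", ["a"], ["d"])]
tp = make_network(program)

state = set()
while True:  # iterate to the least fixpoint
    nxt = tp(state)
    if nxt == state:
        break
    state = nxt

print(sorted(state))  # → ['a', 'b', 'c'], the program's least model
```

The point of the theorem is that this rule-by-rule translation can be realised with standard sigmoidal neurons and thresholds, so the network both computes the program and remains trainable by backpropagation.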

SLIDE 52

Learning to reason

"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones." D. Rumsfeld, 12 Feb. 2002.

We are inspired by work from Luitzen Egbertus Jan Brouwer, Dutch logician, February 27, 1881 – December 2, 1966 (proponent of Intuitionistic Logics).

In order to state that something exists, one has to show it by constructing a proof of such an existence.
SLIDE 53

Connectionist Modal Logics

Modal logic goes beyond propositional reasoning: Moshe Vardi, 1997. A proposition is necessary (box) in a possible world (state of affairs) if it is true in all worlds which are possible in relation to that world. A proposition is possible (diamond) in a possible world if it is true in at least one world which is possible in relation to that same world. Modalities are also used for reasoning about uncertainty (following J. Halpern). Relational learning/reasoning is notoriously hard: AAAI workshop, March 2015.
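The box/diamond semantics above can be sketched directly over a Kripke structure (a set of worlds, an accessibility relation, and a valuation); the model and all names below are illustrative, not from the slides:

```python
# Sketch of the modal semantics described above: box (necessity) holds at a
# world iff the proposition holds in every accessible world; diamond
# (possibility) holds iff it holds in at least one accessible world.

def box(p, world, access, holds):
    """holds(p, w): does proposition p hold at world w?"""
    return all(holds(p, w) for w in access[world])

def diamond(p, world, access, holds):
    return any(holds(p, w) for w in access[world])

# Toy Kripke model: w1 sees w2 and w3; "q" holds at w2 and w3 only.
access = {"w1": ["w2", "w3"], "w2": [], "w3": []}
valuation = {"w2": {"q"}, "w3": {"q"}}
holds = lambda p, w: p in valuation.get(w, set())

print(box("q", "w1", access, holds))      # True: q holds in all worlds w1 sees
print(diamond("q", "w1", access, holds))  # True: q holds in at least one
print(box("r", "w1", access, holds))      # False: r fails at w2 and w3
```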

SLIDE 54

Learning to reason in connectionist models

  • Insight: assume that neurons are possible worlds.

Propositional modal logic = decidable fragment of FOL with two variables. Full solution of the Muddy Children puzzle and other testbeds.

Garcez, Lamb, Gabbay. Connectionist Modal Logic. Theoretical Computer Science, 371: 34-53, 2007. Garcez, Lamb. Connectionist Model for Epistemic and Temporal Reasoning. Neural Computation, 18: 1711-1738, July 2006.
SLIDE 55

Connectionist Modal Logics

SLIDE 56

Connectionist Modal and Temporal Logics

Neural network ensembles correspond to possible worlds/states; modularity for learning; accessibility relations, disjunctive information. THEOREM 2: For any modal/temporal logic program P there exists an ensemble of neural networks N such that N computes P. Garcez, Lamb, 2006.
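A minimal sketch of the ensemble idea (illustrative only, not the original construction): one network per possible world evaluates that world's rules, and modal literals in rule bodies are read off the networks of accessible worlds. The rule encoding and example program are hypothetical.

```python
# Illustrative sketch of Theorem 2's idea: an ensemble with one network
# (here, one rule-evaluation function) per possible world; modal literals
# in rule bodies are evaluated over the accessibility relation.

def step(states, rules, access):
    """One synchronous update of the whole ensemble.
    states: {world: set of atoms currently true}
    rules:  {world: list of (head, body)}, body literals being
            ("at", a), ("box", a) or ("dia", a)."""
    new = {}
    for w, local_rules in rules.items():
        out = set()
        for head, body in local_rules:
            def lit(kind, a):
                if kind == "at":                 # local atom
                    return a in states[w]
                if kind == "box":                # true in all accessible worlds
                    return all(a in states[v] for v in access[w])
                return any(a in states[v] for v in access[w])  # "dia"
            if all(lit(k, a) for k, a in body):
                out.add(head)
        new[w] = out
    return new

# Two worlds; w1 accesses w2. "q" is a fact in w2; w1 derives p from box q.
access = {"w1": ["w2"], "w2": []}
rules = {"w1": [("p", [("box", "q")])], "w2": [("q", [])]}
states = {"w1": set(), "w2": set()}
for _ in range(3):  # iterate to a fixpoint
    states = step(states, rules, access)
print(states["w1"], states["w2"])
```

The accessibility relation plays the role of the connections between the networks of the ensemble, which is what makes modularity for learning possible: each world's network can be trained on its local rules alone.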

SLIDE 57

Connectionist Modal Logics

Training and Assessment in Simulators: learning new information from observation of experts and trainees at task execution, and reasoning about this information online to provide feedback to the user. Leo de Penning, TNO.

SLIDE 58

Going beyond CS, again

  • Sydney Brenner (Nobel Prize winner, 2002): "Biological research is in crisis, and in Alan Turing's work there is much to guide us". Nature, 461, Feb. 2012.
  • "Three of Turing's papers are relevant to biology."
  • "The most interesting connection with biology, in my view, is Turing's most important paper: 'On computable numbers with an application to the Entscheidungsproblem'."
  • "Biologists ask only three questions of a living organism: how does it work? How is it built? And how did it get that way? They are problems embodied in the classical fields of physiology, embryology and evolution. And at the core of everything are the tapes containing the descriptions to build these special Turing machines."

SLIDE 59

Social Problem-Solving: collective cognition

Goal: Modelling Social Cognition in Problem-Solving.

Key questions:

  • What are the main parts of group problem-solving?
  • How do social features (e.g. network, individual behavior) affect individual and group problem-solving performance?
  • Can these models be used to automate problem-solving (i.e. inspire novel cognitive algorithms)?

Recent research results:

  • Memetic Networks: a model for how social groups exchange, consume and transform information;
  • Mapping of a subset of network properties to problem features where such properties help;
  • Comparison of social search models with traditional local search techniques;
  • Experiments with human computation in social settings: social SAT-solvers; social Sudoku experiments.

SLIDE 60

Recent results

Farenzena, D.; Lamb, Luis; Araujo, Ricardo. Collaboration Emergence in Social Networks with Informational Natural Selection. 3rd IEEE International Conference on Social Computing, MIT Media Lab, 2011.

Diego V. Noble, Marcelo R. Prates, Daniel S. Bossle and Luis C. Lamb. Collaboration in Social Problem-Solving: When Diversity Trumps Network Efficiency. Proceedings of the 29th AAAI Conference on Artificial Intelligence - AAAI-15, Austin, Texas.

Diego Noble, Felipe Grando, Ricardo Araujo, Luis C. Lamb. The Impact of Centrality on Individual and Collective Performance in Social Problem-Solving Systems. Genetic and Evolutionary Computation Conference (GECCO-2015), ACM Press, NY, 2015.

SLIDE 61

Afterword

  • We are constructing a principled approach to combine learning and reasoning.
  • Reasoning is hard... as one can see around the world.
  • Relational, full predicate reasoning is undecidable; modal logics offer a decidable, powerful alternative.
  • Scientists still don't know how to reason and learn relations; see deep learning + knowledge representation (AAAI Spring Symposium, Stanford, 2015).

SLIDE 62

Afterword: AI and public perception

  • Deep learning + AI have recently impacted people's perception of Computer Science. A few groundbreaking results contributed to this.
  • There have been many accomplishments, but the following are clearly noteworthy: (0) 1997: Deep Blue beats chess world champion Kasparov. (1) 2011: Watson wins Jeopardy! (2) 2012: ImageNet classification, by Hinton et al. - a groundbreaking result from deep learning in image recognition. (3) 2016: Google DeepMind's AlphaGo beats Lee Sedol at the ancient Chinese game of Go. (4) 2017: Poker - playing Texas Hold'Em at human-level ability: CMU's Libratus + U of Alberta's DeepStack.

SLIDE 63

Afterword: Explaining AI

(1) 2012: ImageNet classification, by Hinton et al. - a groundbreaking result from deep learning in image recognition.

Imagenet classification with deep convolutional neural networks. A. Krizhevsky, I. Sutskever, G.E. Hinton, NIPS 2012.

SLIDE 64

Afterword

"If you want the computer to have general intelligence, the outer structure has to be commonsense knowledge and reasoning." John McCarthy, in Shasha & Lazere, 1995.

"Our field is still in its embryonic stage. It's great that we haven't been around for 2000 years. We are at a stage where very, very important results occur in front of our eyes."

M.O. Rabin, in Dennis Shasha and Cathy Lazere: Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists, 1995.

SLIDE 65

References

  • Artur S. d'Avila Garcez, Marco Gori, Pascal Hitzler, Luís C. Lamb: Neural-Symbolic Learning and Reasoning (Dagstuhl Seminar 14381), 2015.
  • A.S. d'Avila Garcez, L. de Raedt, Luis C. Lamb, R. Miikkulainen, P. Hitzler, T. Icard, T. Besold, P. Foldiak, D. Silver and K.U. Kuehnberger. Neural-Symbolic Learning and Reasoning: Contributions and Challenges. Proceedings of the AAAI Spring Symposium on Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches, Stanford Univ., CA, March 2015, pp. 18-21, AAAI Press, 2015.
  • A.S. d'Avila Garcez, Luis C. Lamb and Dov M. Gabbay. A neural cognitive model of argumentation with application to legal inference and decision making. Journal of Applied Logic, 2014.

SLIDE 66

References

  • L.C. Lamb, R. Borges, A. Garcez. A Connectionist Cognitive Model for Temporal Synchronisation and Learning. Proc. AAAI-2007.
  • L. de Penning, A.S. d'Avila Garcez, Luis C. Lamb and J.J. Ch. Meyer. A Neural-Symbolic Cognitive Agent for Online Learning and Reasoning. Proc. IJCAI-11.
  • R.V. Borges, Artur d'Avila Garcez, Luis C. Lamb. Learning and Representing Temporal Knowledge in Recurrent Networks. IEEE T. Neural Networks, Dec. 2011.
  • R.V. Borges, Artur S. d'Avila Garcez, Luis C. Lamb and Bashar Nuseibeh. Learning to Adapt Requirements Specifications of Evolving Systems. In ICSE 2011.
  • A.S. d'Avila Garcez, Luis C. Lamb and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Springer, 2009, 198pp.
  • A.S. d'Avila Garcez, Luis C. Lamb and Dov M. Gabbay. Connectionist Modal Logic. Theoretical Computer Science, 2007.

SLIDE 67

References

  • Roni Khardon, Dan Roth: Learning to Reason. J. ACM 44(5): 697-725 (1997).
  • Leslie G. Valiant: Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence. FSTTCS 2008: 415-422.
  • Leslie G. Valiant: Three Problems in Computer Science. J. ACM 50(1): 96-99 (2003).
SLIDE 68

Future references

  • Dagstuhl Seminar 17192 – Human-like Neural-Symbolic Computing, May 7-12, 2017.
  • Please provide input and criticism.