

  1. 1st BRICS Countries Congress (BRICS-CCI) and 11th Brazilian Congress (CBIC) on Computational Intelligence. Combined Active and Semi-Supervised Learning using Particle Walking Temporal Dynamics. Fabricio Breve (fabricio@rc.unesp.br), Department of Statistics, Applied Mathematics and Computation (DEMAC), Institute of Geosciences and Exact Sciences (IGCE), São Paulo State University (UNESP), Rio Claro, SP, Brazil.

  2. Outline • Active Learning and Semi-Supervised Learning • The Proposed Method • Computer Simulations • Conclusions

  3. Active Learning • The learner is able to interactively query a human specialist (or some other information source) to obtain the labels of selected data points. • Key idea: greater accuracy with fewer labeled data points. [4] B. Settles, "Active learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 6, no. 1, pp. 1-114, 2012. [5] F. Olsson, "A literature survey of active machine learning in the context of natural language processing," Swedish Institute of Computer Science, Kista, Sweden, Tech. Rep. T2009:06, April 2009.

  4. Semi-Supervised Learning • Learns from both labeled and unlabeled data items. • Focuses on problems where there is plenty of easily acquired unlabeled data, but the labeling process is expensive, time consuming, and often requires the work of human specialists. [1] X. Zhu, "Semi-supervised learning literature survey," Computer Sciences, University of Wisconsin-Madison, Tech. Rep. 1530, 2005. [2] O. Chapelle, B. Schölkopf, and A. Zien, Eds., Semi-Supervised Learning, ser. Adaptive Computation and Machine Learning. Cambridge, MA: The MIT Press, 2006. [3] S. Abney, Semisupervised Learning for Computational Linguistics. CRC Press, 2008.

  5. Semi-Supervised Learning and Active Learning comparison • Semi-Supervised Learning: exploits what the learner thinks it knows about the unlabeled data; the data labeled with most confidence are used to retrain the algorithm (self-learning methods); relies on committee agreements (co-training methods). • Active Learning: attempts to explore unknown aspects of the data; the data labeled with least confidence have their labels queried (uncertainty sampling methods); queries according to committee disagreements (query-by-committee methods). [4] B. Settles, "Active learning," Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 6, no. 1, pp. 1-114, 2012. [5] F. Olsson, "A literature survey of active machine learning in the context of natural language processing," Swedish Institute of Computer Science, Kista, Sweden, Tech. Rep. T2009:06, April 2009.

  6. Proposed Method • Semi-supervised learning and active learning combined into a new nature-inspired method. • Particle competition and cooperation in networks combined into a single scheme. • Cooperation: particles from the same class (team) walk in the network cooperatively, propagating their labels. Goal: dominate as many nodes as possible. • Competition: particles from different classes (teams) compete against each other. Goal: avoid invasion of their territory by particles of other classes.

  7. Initial Configuration • An undirected network is generated from the data by connecting each node to its k nearest neighbors. • A particle is generated for each labeled node of the network. • Each particle's initial position is set to its corresponding node. • Particles with the same label play for the same team.
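A minimal sketch (not the authors' code) of the k-nearest-neighbor graph construction described above, in Python/NumPy; the symmetrization rule, linking two nodes if either is among the other's nearest neighbors, is an assumption of this sketch.

```python
import numpy as np

def knn_graph(X, k):
    """Build an undirected k-nearest-neighbor adjacency matrix from data X (n x d)."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(dist, np.inf)                                # no self-links
    W = np.zeros((n, n), dtype=int)
    for i in range(n):
        W[i, np.argsort(dist[i])[:k]] = 1                         # link i to its k closest nodes
    return np.maximum(W, W.T)                                     # symmetrize: undirected network
```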

  8. Initial Configuration • Nodes have a domination vector. Ex: [0.00 1.00 0.00 0.00] (4 classes, node labeled as class B). • Labeled nodes have ownership set to their respective teams (classes). • Unlabeled nodes have their levels set equally for each team. Ex: [0.25 0.25 0.25 0.25] (4 classes, unlabeled node). $$v_j^{\omega_\ell}(0) = \begin{cases} 1 & \text{if } y_j = \ell \\ 0 & \text{if } y_j \neq \ell \text{ and } y_j \in L \\ \frac{1}{c} & \text{if } y_j = \emptyset \end{cases}$$
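A small sketch of this domination-vector initialization, assuming classes are encoded as integers 0..c-1 and unlabeled nodes as -1 (the encoding is a choice of this sketch, not from the slides):

```python
import numpy as np

def init_domination(y, c):
    """Initialize the node domination vectors.
    y: length-n array with class indices 0..c-1 for labeled nodes, -1 for unlabeled ones.
    Labeled nodes get 1 for their own class and 0 for the others;
    unlabeled nodes start with 1/c for every class."""
    y = np.asarray(y)
    v = np.full((len(y), c), 1.0 / c)     # unlabeled default: equal levels
    labeled = y >= 0
    v[labeled] = 0.0
    v[labeled, y[labeled]] = 1.0
    return v
```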

  9. Node Dynamics • When a particle selects a neighbor to visit: it decreases the domination levels of the other teams and increases the domination level of its own team. • Exception: labeled nodes have their domination levels fixed. For an unlabeled node $v_j$ visited by particle $\rho_k$ of class $\rho_k^{f}$ and strength $\rho_k^{\omega}(t)$: $$v_j^{\omega_\ell}(t+1) = \begin{cases} \max\left\{0,\; v_j^{\omega_\ell}(t) - \dfrac{\Delta_v\, \rho_k^{\omega}(t)}{c-1}\right\} & \text{if } \ell \neq \rho_k^{f} \\ v_j^{\omega_\ell}(t) + \displaystyle\sum_{r \neq \ell} \left( v_j^{\omega_r}(t) - v_j^{\omega_r}(t+1) \right) & \text{if } \ell = \rho_k^{f} \end{cases}$$ with $\Delta_v = 0.1$.
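A hedged sketch of this node update; the clamping at zero mirrors the max in the formula, and the value Δv = 0.1 is the one shown on the slide:

```python
import numpy as np

DELTA_V = 0.1  # domination-change rate from the slide

def update_node(v_j, is_labeled, particle_class, particle_strength):
    """Update the domination vector v_j of a visited node and return the new vector.
    Labeled nodes are never changed; at unlabeled nodes the visiting particle lowers
    the other classes' levels and raises its own class by the total amount removed."""
    if is_labeled:
        return v_j
    v_new = v_j.copy()
    c = len(v_j)
    removed = 0.0
    for l in range(c):
        if l == particle_class:
            continue
        dec = min(v_new[l], DELTA_V * particle_strength / (c - 1))  # max(0, ...) clamp
        v_new[l] -= dec
        removed += dec
    v_new[particle_class] += removed      # keep the vector summing to 1
    return v_new
```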

  10. Particle Dynamics • A particle gets: stronger when it selects a node being dominated by its own team; weaker when it selects a node being dominated by another team. $$\rho_k^{\omega}(t) = v_j^{\omega_\ell}(t), \quad \ell = \rho_k^{f}$$ where $v_j$ is the node the particle has selected.
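The corresponding one-line update, as a sketch:

```python
def update_particle_strength(v_j_new, particle_class):
    """After the visit, the particle's strength becomes its own class's domination
    level at the visited node: higher in friendly territory, lower in enemy territory."""
    return v_j_new[particle_class]
```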

  11. Distance Table • Each particle has its own distance table. • It keeps the particle aware of how far it is from the closest labeled node of its team (class). • Prevents the particle from losing all its strength when walking into enemy neighborhoods. • Keeps the particle around to protect its own neighborhood. • Updated dynamically with local information; no prior calculation is needed. When the particle at node $v_i$ selects node $v_j$: $$\rho_k^{d_j}(t+1) = \begin{cases} \rho_k^{d_i}(t) + 1 & \text{if } \rho_k^{d_i}(t) + 1 < \rho_k^{d_j}(t) \\ \rho_k^{d_j}(t) & \text{otherwise} \end{cases}$$
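A sketch of this dynamic distance-table update; the initialization convention (0 at the particle's home node, a large value elsewhere) is an assumption of this sketch:

```python
import numpy as np

def new_distance_table(n, home_node):
    """Hypothetical initialization: 0 at the particle's home (labeled) node,
    a large value everywhere else, refined as the particle walks."""
    d = np.full(n, n - 1)     # n - 1 is the largest possible hop distance
    d[home_node] = 0
    return d

def update_distance(d, current_node, target_node):
    """When moving from current_node to a neighboring target_node, the target is
    at most one hop farther from the home node than the current node."""
    if d[current_node] + 1 < d[target_node]:
        d[target_node] = d[current_node] + 1
    return d
```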

  12. Particles Walk • Random-greedy walk: each particle randomly chooses a neighbor to visit at each iteration. • The probability of being chosen is higher for neighbors which are: already dominated by the particle's team, and closer to the particle's initial node. $$p(v_j \mid \rho_k) = \frac{W_{qj}}{2\sum_{\mu=1}^{n} W_{q\mu}} + \frac{W_{qj}\, v_j^{\omega_\ell} \left(1 + \rho_k^{d_j}\right)^{-2}}{2\sum_{\mu=1}^{n} W_{q\mu}\, v_\mu^{\omega_\ell} \left(1 + \rho_k^{d_\mu}\right)^{-2}}$$ where $W_{qj} = 1$ if there is an edge between the current node $v_q$ and $v_j$, and $\ell = \rho_k^{f}$.
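A sketch of this random-greedy choice; the equal 1/2 weighting of the random and greedy terms follows the two halves of the formula, and the epsilon guard against an all-zero greedy term is an addition of this sketch:

```python
import numpy as np

def move_probabilities(W, v, d, current_node, particle_class):
    """Probability of each node being the next target for a particle at current_node,
    mixing a purely random term (edges only) with a greedy term that favors neighbors
    already dominated by the particle's class and closer to its home node."""
    w = W[current_node].astype(float)
    random_term = w / w.sum()
    greedy = w * v[:, particle_class] * (1.0 + d) ** -2
    greedy_term = greedy / (greedy.sum() + 1e-12)
    return 0.5 * random_term + 0.5 * greedy_term
```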

  13. Moving Probabilities [Figure: worked example of the moving probabilities for a particle's candidate neighbor nodes (roughly 40%, 34%, and 26%), computed from the neighbors' domination levels and distances.]

  14. Particles Walk • Shocks: a particle actually visits the selected node only if the domination level of its team there is higher than the others'; otherwise, a shock happens and the particle stays at the current node until the next iteration.
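A sketch of the shock rule:

```python
import numpy as np

def particle_visits(v, target_node, particle_class):
    """Shock rule: the move is accepted only if the particle's own class has a higher
    domination level at the target node than every other class; otherwise the particle
    stays where it is until the next iteration."""
    levels = v[target_node]
    own = levels[particle_class]
    others = np.delete(levels, particle_class)
    return own > others.max()
```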

  15. Label Query • When the node domination levels reach a fair level of stability, the system chooses an unlabeled node and queries its label. • A new particle is created for this newly labeled node. • The iterations resume until stability is reached again, and then a new node is chosen. • The process is repeated until the defined number of labeled nodes is reached.
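A high-level sketch of this query loop; every callable here (run_until_stable, choose_query_node, ask_oracle, create_particle) is a hypothetical placeholder standing in for the steps described on this slide:

```python
def active_learning_loop(run_until_stable, choose_query_node, ask_oracle,
                         create_particle, labeled, target_labeled):
    """run_until_stable iterates the particle walking until the domination levels
    stabilize, choose_query_node applies the ASL-PCC A or B rule, ask_oracle returns
    the true label, and create_particle adds a particle for the new labeled node."""
    while len(labeled) < target_labeled:
        run_until_stable()
        node = choose_query_node()
        label = ask_oracle(node)
        labeled.add(node)
        create_particle(node, label)
```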

  16. Query Rule • Two versions of the algorithm: ASL-PCC A and ASL-PCC B. • They use different rules to select which node will be queried.

  17. ASL-PCC A • Uses temporal node domination information to select the unlabeled node which had the most dispute over time, i.e., the node whose currently assigned label the algorithm is least confident about. $$q(t) = \arg\max_{j,\, y_j = \emptyset} u_j(t), \qquad u_j(t) = \frac{\mu_j^{\ell^{**}}(t)}{\mu_j^{\ell^{*}}(t)}$$ with $\ell^{*} = \arg\max_{\ell} \mu_j^{\ell}(t)$ and $\ell^{**} = \arg\max_{\ell,\, \ell \neq \ell^{*}} \mu_j^{\ell}(t)$, where $\mu_j^{\ell}(t)$ is the temporal (accumulated) domination level of class $\ell$ at node $v_j$.
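A sketch of this query rule, assuming mu holds the accumulated (temporal) domination levels per node and class:

```python
import numpy as np

def query_asl_pcc_a(mu, unlabeled):
    """ASL-PCC A sketch: for each unlabeled node, take the ratio of the second-highest
    to the highest temporal domination level and query the node where that ratio is
    largest, i.e. the most disputed / least confident node.
    mu: (n x c) matrix of temporal domination levels; unlabeled: list of node indices."""
    best_node, best_score = None, -np.inf
    for j in unlabeled:
        top_two = np.sort(mu[j])[::-1][:2]
        score = top_two[1] / top_two[0]       # second best over best
        if score > best_score:
            best_node, best_score = j, score
    return best_node
```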

  18. ASL-PCC B • Chooses the unlabeled node which is currently farthest away from any labeled node, according to the particles' dynamic distance tables. $$s_j(t) = \min_{k} \rho_k^{d_j}(t), \qquad q(t) = \arg\max_{j,\, y_j = \emptyset} s_j(t)$$
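A sketch of this rule, assuming dist_tables stacks every particle's distance table as rows:

```python
import numpy as np

def query_asl_pcc_b(dist_tables, unlabeled):
    """ASL-PCC B sketch: for each unlabeled node take its distance to the closest
    labeled node (the minimum over all particles' distance tables) and query the
    node that is farthest from any labeled node.
    dist_tables: (num_particles x n) array; unlabeled: list of node indices."""
    unlabeled = np.asarray(unlabeled)
    s = dist_tables[:, unlabeled].min(axis=0)    # closest labeled node per candidate
    return int(unlabeled[np.argmax(s)])          # farthest such candidate
```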

  19. Computer Simulations • Original PCC method: 1% to 10% of the nodes are randomly chosen as labeled. • ASL-PCC A and ASL-PCC B: only 1 labeled node from each class is randomly chosen; a new query is made each time the system stabilizes, until 1% to 10% of the nodes are labeled.

  20. Correct classification rate comparison when the methods are applied to the Iris data set with different amounts of labeled nodes.

  21. Correct classification rate comparison when the methods are applied to the Wine data set with different amounts of labeled nodes.

  22. Correct classification rate comparison when the methods are applied to the Digit1 data set with different amounts of labeled nodes.

  23. Correct classification rate comparison when the methods are applied to the USPS data set with different amounts of labeled nodes.

  24. Correct classification rate comparison when the methods are applied to the COIL 2 data set with different amounts of labeled nodes.

  25. Correct classification rate comparison when the methods are applied to the BCI data set with different amounts of labeled nodes.

  26. Correct classification rate comparison when the methods are applied to the g241c data set with different amounts of labeled nodes.

  27. Correct classification rate comparison when the methods are applied to the COIL data set with different amounts of labeled nodes.

  28. Conclusions • Semi-supervised learning and active learning features combined into a single approach. • Inspired by the collective behavior of social animals, which protect their territories against intruding groups. • No retraining: new particles are created on the fly as unlabeled nodes become labeled nodes, and the algorithm naturally adapts itself to new situations. • Only nodes affected by the new particles are updated, so the equilibrium state is quickly reached again, saving execution time.
