1. Projects in C

These projects consist of the development of a neural network simulator in the C language. The simulator then has to be used within a larger application addressing a specific problem (e.g. classification, pattern recognition, detection, optimization, control, etc.). The program must include a graphic interface that allows the user to interact with the simulation, change hyper-parameters, give commands, display the evolution of the system, and load/save data sets and network parameters. Illustrative C sketches of the core update rules behind some of these projects are given after the project list.

Hopfield networks

1. Digit associator. Simulate a Hopfield network that stores 10 images of the digits from 0 to 9, represented by binary images of 32x32 pixels. Make a graphic interface that allows the user to define the training set by drawing the digits with the mouse, save/load the training set, select the inputs to the network, display the output and the energy during evolution, add noise to the images, and input new test images with the mouse.

Multilayer shallow networks

2. Function approximator. Simulate a feedforward network that approximates a function from a set of sample points TS = {(x_i, y_i), i = 1, …, M} defined by the user with the mouse. The network consists of N_i input neurons, N_h hidden neurons, and 1 output neuron. Before being given as input to the network, each input coordinate x_i is first converted into N_i values in [0,1]. The y_i coordinate is then used as the target value for training. To see what the network is learning, at each training step compute the output corresponding to all input coordinates on the x axis (with a given resolution) and visualize the corresponding points on the Cartesian plane. See: https://www.youtube.com/watch?v=y46O28b8AYE

3. Two-input visualizer. Simulate a feedforward network with two input neurons, N hidden neurons (set by the user), and 1 output neuron. Training samples are given by a set of coordinates defined by the user with the mouse on a given area of the screen. To see what the network is learning, for each epoch visualize the output of each neuron using a color map, where each coordinate of the input space (with a given resolution) is painted on the screen with a color proportional to the output value. See: https://playground.tensorflow.org/

4. Digit classifier. Simulate a 3-layer neural network trained on the MNIST data set that reads 28x28 input images of digits (from 0 to 9) drawn with the mouse and outputs 10 classes through a softmax layer. See the following demo: http://macheads101.com/demos/handwriting/?c=neuralnet

Kohonen networks

5. Two-input map. Simulate a Kohonen network with two input neurons and 100 output neurons organized as a 10x10 two-dimensional map. Design the graphic interface so that the user can provide input data set coordinates with different spatial distributions, visualizing the weights and their neighborhood during training.

6. Image2map. Simulate a Kohonen network with 16x16 input neurons and 12 output neurons organized as a one-dimensional ring. Train the network using binary images of 12 bars with different orientations. Visualize the weights of each input neuron as a 16x16 color-coded matrix.

7. Optimizer. Simulate a Kohonen network (2 inputs and 40 outputs) to solve the Traveling Salesman optimization problem, where the training set consists of 20 points representing the locations of cities to be visited in a sequence that minimizes the total length of the path. The output map has to be organized as a 1D ring, and the values of the 40 weights (each connected to its closest neighbors) will provide the solution.
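For reference, the following is a minimal, illustrative C sketch (an assumption, not part of the assignment text) of the core Hopfield operations behind project 1: Hebbian storage of bipolar patterns, an asynchronous update sweep, and the network energy. The names N, store_pattern, update_sweep and energy are hypothetical; the simulator structure, graphic interface and data handling are left to the project.

#define N 1024                        /* 32x32 pixels; neuron states are -1 or +1 */

static double W[N][N];                /* symmetric weight matrix, zero diagonal */

/* Hebbian storage: accumulate the outer product of one bipolar pattern. */
void store_pattern(const int p[N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (i != j)
                W[i][j] += (double)p[i] * p[j] / N;
}

/* One asynchronous update sweep; returns how many neurons changed state. */
int update_sweep(int s[N])
{
    int flips = 0;
    for (int i = 0; i < N; i++) {
        double h = 0.0;
        for (int j = 0; j < N; j++)
            h += W[i][j] * s[j];      /* local field, using the latest states */
        int v = (h >= 0.0) ? 1 : -1;
        if (v != s[i]) { s[i] = v; flips++; }
    }
    return flips;
}

/* Network energy E = -1/2 * sum_ij W_ij * s_i * s_j, to display during evolution. */
double energy(const int s[N])
{
    double e = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            e -= 0.5 * W[i][j] * s[i] * s[j];
    return e;
}

In such a design, the graphic interface would call store_pattern for each drawn digit, then repeatedly call update_sweep on a noisy test image until flips reaches zero, plotting the energy at each step.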
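For projects 2-4, the following is a minimal sketch, under assumed sizes NI and NH, of the forward pass of a shallow feedforward network with one sigmoid output; backpropagation training and the interface are left to the project, and all names here are illustrative.

#include <math.h>

#define NI 10                         /* hypothetical number of input neurons  */
#define NH 20                         /* hypothetical number of hidden neurons */

static double W1[NH][NI], b1[NH];     /* input-to-hidden weights and biases */
static double W2[NH],     b2;         /* hidden-to-output weights and bias  */

static double sigmoid(double a) { return 1.0 / (1.0 + exp(-a)); }

/* Compute the network output for one input vector x; hidden activations are
 * stored in hidden[] so they can be reused by the backpropagation step. */
double forward(const double x[NI], double hidden[NH])
{
    for (int h = 0; h < NH; h++) {
        double a = b1[h];
        for (int i = 0; i < NI; i++)
            a += W1[h][i] * x[i];
        hidden[h] = sigmoid(a);
    }
    double out = b2;
    for (int h = 0; h < NH; h++)
        out += W2[h] * hidden[h];
    return sigmoid(out);              /* output in [0,1], matching the target encoding */
}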
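For projects 5-7, the following is a minimal sketch, assuming a one-dimensional ring of output units as in projects 6 and 7, of one Kohonen (self-organizing map) training step: find the best-matching unit and move it and its ring neighbors toward the input. The names N_IN, N_OUT, best_matching_unit and som_step are hypothetical.

#include <stdlib.h>                   /* abs() */
#include <math.h>                     /* exp() */

#define N_IN  2                       /* e.g. 2 input coordinates (projects 5 and 7) */
#define N_OUT 40                      /* e.g. 40 units on a 1D ring (project 7)      */

static double w[N_OUT][N_IN];         /* weight vector of each output unit */

/* Index of the unit whose weight vector is closest to the input x. */
int best_matching_unit(const double x[N_IN])
{
    int best = 0;
    double best_d = 1e300;
    for (int o = 0; o < N_OUT; o++) {
        double d = 0.0;
        for (int i = 0; i < N_IN; i++)
            d += (x[i] - w[o][i]) * (x[i] - w[o][i]);
        if (d < best_d) { best_d = d; best = o; }
    }
    return best;
}

/* One training step: move the winner and its ring neighbors toward x.
 * eta is the learning rate, sigma the neighborhood width (both decay over time). */
void som_step(const double x[N_IN], double eta, double sigma)
{
    int win = best_matching_unit(x);
    for (int o = 0; o < N_OUT; o++) {
        int d = abs(o - win);
        if (N_OUT - d < d) d = N_OUT - d;                    /* ring (wrap-around) distance */
        double h = exp(-(double)(d * d) / (2.0 * sigma * sigma));
        for (int i = 0; i < N_IN; i++)
            w[o][i] += eta * h * (x[i] - w[o][i]);
    }
}

For project 7, reading the 40 weight vectors in ring order after training would give the city tour; for project 6, each unit's 16x16 weight matrix can be displayed as a color-coded image.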

2. Reinforcement Learning

8. Grid world. Simulate a Q-learning algorithm to train an agent to move in a grid world consisting of N rooms (states), where there is a goal state with reward +10 and a few obstacles (states with a negative reward of -10). The agent can perform 4 actions (North, East, South, West) and receives a negative reward (-1) for each move and a negative reward (-2) every time it hits a wall. (A minimal C sketch of the tabular Q-learning update common to projects 8-15 is given at the end of this document.)

9. Crawling robot. Use Q-learning to train an agent to control a crawling robot that has to learn how to move a 2-link finger in order to move forward. A positive reward is generated when the robot moves forward, and a negative reward is generated when it moves backward or does not move.

10. Inverted pendulum. Use Q-learning to train an agent to control an inverted pendulum. A negative reward is generated when the pole exceeds a given angle (e.g. +/-12 degrees) or the cart exceeds a given position on the x-axis.

11. Ball and beam. Use Q-learning to train an agent to control a ball-and-beam device. A positive reward is generated when the ball stays at the center of the beam for a minimum number of steps. A negative reward is generated when the ball hits one of the beam limits.

12. Video tracking. Use Q-learning to train an agent to control a pan-tilt camera to keep a moving object at the center of its visual field. A positive reward is generated when the object centroid stays at the center of the visual field for a number of steps. A negative reward proportional to the distance from the center is generated when the object centroid moves away from the center.

13. Juggling robot. Use Q-learning to train an agent to control the horizontal position of a paddle to catch a ball falling from above and bouncing on it (assume zero energy loss). A positive reward is generated when the paddle hits the ball. A negative reward is generated when the paddle misses the ball.

14. Autonomous driving. Use Q-learning to train an agent to control a car (steering and speed) moving on a road. The sensors consist of 5 (or more) 1D lidars pointing in different directions. A negative reward is generated when the car hits an obstacle (i.e., one of the lidars gives zero distance). A positive reward is generated when the car stays at the center of the road for a given number of steps.

15. Self parking. Use Q-learning to train an agent to park a car in a parking slot. A negative reward is generated when the car hits an obstacle (i.e. the cars beside the parking slot or the internal wall). A positive reward is generated when the car is correctly positioned in the parking slot.

Projects using deep learning frameworks

These projects consist of the development of a deep neural network using one of the available frameworks (e.g., TensorFlow, Keras, Caffe). The network then has to be used within a larger application addressing a specific problem (e.g. classification, pattern recognition, detection, optimization, control, etc.). The program must include a graphic interface to allow the user to interact with the simulation, change hyper-parameters, give commands, display the evolution of the system, and load/save data sets and network parameters.

Image classification and detection

16. Face landmarks. Train a deep convolutional network for detection to identify a set of landmarks on a human face. See: https://www.youtube.com/watch?v=LbsbMo3X_hU

17. Eye tracking. Train a deep convolutional network for detection to track the eyes and recognize open and closed eyelids. See: https://www.youtube.com/watch?v=ObUCmhmjt-c
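The reinforcement learning projects (8-15) all rest on the same tabular Q-learning update. The following is a minimal, hypothetical C sketch of it, using a grid-world encoding in the spirit of project 8; N_STATES, N_ACTIONS, choose_action and q_update are illustrative assumptions, and each project defines its own states, actions and reward function.

#include <stdlib.h>                       /* rand() for epsilon-greedy exploration */

#define N_STATES  25                      /* e.g. a 5x5 grid world */
#define N_ACTIONS 4                       /* North, East, South, West */

static double Q[N_STATES][N_ACTIONS];     /* action-value table, initialized to 0 */

/* Epsilon-greedy action selection over the Q table. */
int choose_action(int s, double eps)
{
    if ((double)rand() / RAND_MAX < eps)
        return rand() % N_ACTIONS;        /* explore */
    int best = 0;
    for (int a = 1; a < N_ACTIONS; a++)
        if (Q[s][a] > Q[s][best])
            best = a;                     /* exploit */
    return best;
}

/* One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). */
void q_update(int s, int a, double r, int s_next, double alpha, double gamma)
{
    double max_next = Q[s_next][0];
    for (int a2 = 1; a2 < N_ACTIONS; a2++)
        if (Q[s_next][a2] > max_next)
            max_next = Q[s_next][a2];
    Q[s][a] += alpha * (r + gamma * max_next - Q[s][a]);
}

A training loop would repeatedly read the current state, call choose_action, apply the action in the simulated environment, observe the reward and next state, and call q_update; the same loop applies to the pendulum, ball-and-beam, tracking, driving and parking projects once their states, actions and rewards are discretized.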
