SLIDE 8 (3/18/2020)
Covid-NN in Tensorflow
- Step 2: define the (small) training set

# training inputs: days from Feb 22 to Mar 12
train_X = numpy.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
# reshape the array to a (19, 1) matrix
train_X = train_X.reshape(19, 1)
# training outputs: infected people
train_Y = numpy.asarray([76, 148, 222, 311, 385, 588, 821, 1049, 1577, 1835, 2263, 2706, 3296, 3916, 5061, 7375, 8878, 10149, 12462])
# reshape the array to a (19, 1) matrix
train_Y = train_Y.reshape(19, 1)
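The two reshapes can be sanity-checked with plain NumPy (a sketch outside the slide code; the arrays are copied from above, and the shape check is an addition of mine):

```python
import numpy

# training inputs: days from Feb 22 to Mar 12, as a (19, 1) column
train_X = numpy.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                         11, 12, 13, 14, 15, 16, 17, 18, 19]).reshape(19, 1)
# training outputs: infected people, same shape
train_Y = numpy.asarray([76, 148, 222, 311, 385, 588, 821, 1049, 1577,
                         1835, 2263, 2706, 3296, 3916, 5061, 7375, 8878,
                         10149, 12462]).reshape(19, 1)

# both arrays are now 2-D column vectors, one row per day,
# which is the shape tf.matmul expects for a batch of inputs
print(train_X.shape, train_Y.shape)
```

The reshape matters because a 1-D array of length 19 cannot be fed to a matrix multiplication that expects one example per row.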
Covid-NN in Tensorflow
- Step 3: define the network parameters

# Network Parameters
# number of neurons, 1st hidden layer
n_hidden_1 = 64
# number of neurons, 2nd hidden layer
n_hidden_2 = 64
# days
size_input = 1
# number of cases
size_output = 1
Covid-NN in Tensorflow
- Step 4: define the inputs (placeholders)

# placeholders
X = tf.placeholder(tf.float32, [None, size_input])
Y = tf.placeholder(tf.float32, [None, size_output])
Covid-NN in Tensorflow
- Step 5: define weights and biases

# weights and biases (each stored in a Python dictionary)
weights = {
    'h1': tf.Variable(tf.random_normal([size_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, size_output]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([size_output]))
}
Covid-NN in Tensorflow
- Step 6: build the network

# 1st hidden fully connected layer
layer_1 = tf.add(tf.matmul(X, weights['h1']), biases['b1'])
# 2nd hidden fully connected layer
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
# output layer
out_layer = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
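To make the shape flow concrete, the same three-layer forward pass can be mimicked in plain NumPy (random draws stand in for the tf.random_normal-initialized variables; the names mirror the slide, but this sketch is mine, not part of the slide code):

```python
import numpy

rng = numpy.random.default_rng(0)
size_input, n_hidden_1, n_hidden_2, size_output = 1, 64, 64, 1

# random stand-ins for the tf.Variable weight matrices and bias vectors
weights = {
    'h1': rng.standard_normal((size_input, n_hidden_1)),
    'h2': rng.standard_normal((n_hidden_1, n_hidden_2)),
    'out': rng.standard_normal((n_hidden_2, size_output)),
}
biases = {
    'b1': rng.standard_normal(n_hidden_1),
    'b2': rng.standard_normal(n_hidden_2),
    'out': rng.standard_normal(size_output),
}

X = numpy.arange(1, 20, dtype=float).reshape(19, 1)   # the 19 training days
layer_1 = X @ weights['h1'] + biases['b1']            # shape (19, 64)
layer_2 = layer_1 @ weights['h2'] + biases['b2']      # shape (19, 64)
out_layer = layer_2 @ weights['out'] + biases['out']  # shape (19, 1)
print(out_layer.shape)
```

Each tf.matmul/tf.add pair is exactly one matrix product plus a broadcast bias; the batch dimension (19 days) is carried through unchanged.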
Covid-NN in Tensorflow
- Step 7: initializations

# loss and optimizer
# note: softmax cross-entropy is a classification loss; with a single
# output logit its softmax is always 1 and the gradient vanishes, so the
# appropriate loss for this regression task is mean squared error
loss_op = tf.reduce_mean(tf.square(out_layer - Y))
# learning rate (must be defined before building the optimizer)
learning_rate = 0.01
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# variable initializer
init = tf.global_variables_initializer()
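In TF1, init and train_op are then run repeatedly inside a tf.Session. To show what those repeated optimizer steps do, here is a framework-free NumPy sketch: plain gradient descent on a mean-squared-error loss for a single linear layer. The log-transform of the case counts, the day scaling, the learning rate, and the epoch count are all my assumptions for the sketch, and plain gradient descent stands in for Adam:

```python
import numpy

# scaled days (divided by 19 so gradient descent is well-conditioned)
X = numpy.arange(1, 20, dtype=float).reshape(19, 1) / 19.0
# log of the infected counts: exponential growth becomes roughly linear
Y = numpy.log(numpy.asarray([76, 148, 222, 311, 385, 588, 821, 1049,
                             1577, 1835, 2263, 2706, 3296, 3916, 5061,
                             7375, 8878, 10149, 12462],
                            dtype=float)).reshape(19, 1)

w = numpy.zeros((1, 1))          # weight, like one entry of weights['out']
b = numpy.zeros(1)               # bias, like biases['out']
learning_rate = 0.5
for epoch in range(2000):        # each pass mimics one run of train_op
    pred = X @ w + b             # forward pass
    err = pred - Y               # residuals, shape (19, 1)
    loss = numpy.mean(err ** 2)  # MSE, the role played by loss_op
    grad_w = 2.0 * X.T @ err / len(X)   # d loss / d w
    grad_b = 2.0 * err.mean(axis=0)     # d loss / d b
    w -= learning_rate * grad_w  # gradient descent update
    b -= learning_rate * grad_b

print(float(loss), float(w[0, 0]))
```

The loss shrinks toward the residual variance of a linear fit on log-cases, and the learned slope is positive, reflecting the exponential growth in the training window; Adam does the same kind of update with per-parameter adaptive step sizes.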