DeepLearning.ai Assignment (2-3) -- Hyperparameter Tuning


title: 'DeepLearning.ai Assignment (2-3) -- Hyperparameter Tuning'
id: 2018091810
tags:
  - homework
categories:
  - AI
  - Deep Learning
date: 2018-09-18 10:35:32

  1. Don't copy this as your homework!
  2. I'm only writing up my approach here, for my own study.
  3. Don't copy this as your homework!

First published on my personal blog: , feel free to drop by.

This week's assignment is just a short introduction to TensorFlow, so there isn't much to say about it; if you want more depth, go look for a more detailed tutorial.
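All of the snippets below assume the TensorFlow 1.x API (tf.Session, tf.placeholder, and so on) and the notebook's opening import cell, which looks roughly like the sketch below. tf_utils is the helper module shipped with the assignment, so treat that import as an assumption if you run the code anywhere else.

```python
# Assumed setup for the snippets below (TensorFlow 1.x; tf_utils is the course's helper module)
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot

# The TF1 workflow: first build a graph, then execute it inside a Session
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)          # nothing is computed yet, c is just a node in the graph

with tf.Session() as sess:
    print(sess.run(c))         # 20 -- the session actually runs the graph
```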

```python
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- runs the session for Y = WX + b
    """

    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.matmul(W, X) + b
    ### END CODE HERE ###

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ###

    # close the session
    sess.close()

    return result
```
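As a quick sanity check (under the setup above), calling the function once prints a fixed 4x1 vector, since both the graph and the data are seeded:

```python
# With np.random.seed(1) inside the function, the output is deterministic
print("result = " + str(linear_function()))
```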
```python
# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """

    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name="x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above.
    # You should use a feed_dict to pass z's value to x.
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict={x: z})
    ### END CODE HERE ###

    return result
```
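Feeding concrete values through feed_dict is the whole point of this exercise; two quick calls make the result easy to eyeball:

```python
print("sigmoid(0)  = " + str(sigmoid(0)))    # exactly 0.5
print("sigmoid(12) = " + str(sigmoid(12)))   # very close to 1
```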
```python
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """

    ### START CODE HERE ###

    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name="z")
    y = tf.placeholder(tf.float32, name="y")

    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()

    # Run the session (approx. 1 line).
    cost = sess.run(cost, feed_dict={z: logits, y: labels})

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return cost
```
```python
# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i, then entry (i,j)
    will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """

    ### START CODE HERE ###

    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(C)

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(labels, C, axis=0)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return one_hot
```
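With axis=0 the classes run down the rows and each column is one example, so a small label vector is enough to see the shape:

```python
labels = np.array([1, 2, 3, 0, 2, 1])
one_hot = one_hot_matrix(labels, C=4)
print(one_hot)   # shape (4, 6): a single 1 per column, in the row given by the label
```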
```python
# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """

    ### START CODE HERE ###

    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    ones = tf.ones(shape)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session to compute 'ones' (approx. 1 line)
    ones = sess.run(ones)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return ones
```
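And the simplest possible check:

```python
print("ones = " + str(ones([3])))   # expect [1. 1. 1.]
```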

Building a neural network

```python
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible on the number of examples for the placeholders.
      In fact, the number of examples during test/train is different.
    """

    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, [n_x, None])
    Y = tf.placeholder(tf.float32, [n_y, None])
    ### END CODE HERE ###

    return X, Y
```
```python
# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
                        W3 : [6, 12]
                        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """

    tf.set_random_seed(1)                   # so that your "random" numbers match ours

    ### START CODE HERE ### (approx. 6 lines of code)
    W1 = tf.get_variable("W1", [25, 12288], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12, 25], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b2 = tf.get_variable("b2", [12, 1], initializer=tf.zeros_initializer())
    W3 = tf.get_variable("W3", [6, 12], initializer=tf.contrib.layers.xavier_initializer(seed=1))
    b3 = tf.get_variable("b3", [6, 1], initializer=tf.zeros_initializer())
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters
```
```python
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)   # Numpy Equivalents:
    Z1 = tf.matmul(W1, X) + b1                  # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                         # A1 = relu(Z1)
    Z2 = tf.matmul(W2, A1) + b2                 # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                         # A2 = relu(Z2)
    Z3 = tf.matmul(W3, A2) + b3                 # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3
```
```python
# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """

    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    ### END CODE HERE ###

    return cost
```
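Before training, it is worth checking that the pieces compose into one graph with the expected shapes. A minimal sketch, using the shapes from the assignment (12288 inputs, 6 classes):

```python
# Build the whole forward graph once, without running a training session
ops.reset_default_graph()
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print(Z3)     # a Tensor of shape (6, ?) -- the batch dimension is still unknown
print(cost)   # a scalar Tensor; nothing is evaluated until a Session runs it
```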
```python
def model(X_train, Y_train, X_test, Y_test, learning_rate=0.0001,
          num_epochs=1500, minibatch_size=32, print_cost=True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()                         # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)                             # to keep consistent results
    seed = 3                                          # to keep consistent results
    (n_x, m) = X_train.shape                          # (n_x: input size, m: number of examples in the train set)
    n_y = Y_train.shape[0]                            # n_y: output size
    costs = []                                        # To keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                           # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
                ### START CODE HERE ### (1 line)
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every epoch
            if print_cost == True and epoch % 100 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per fives)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # lets save the parameters in a variable
        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters
```
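For reference, the notebook prepares the SIGNS data and calls model() roughly as follows. load_dataset and convert_to_one_hot come from the course's tf_utils helper module (assumed in the import sketch at the top), so treat this as a sketch rather than a drop-in script:

```python
# Load the SIGNS dataset (helper from tf_utils, assumed)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Flatten each 64x64x3 image into a column vector and scale pixels to [0, 1]
X_train = X_train_orig.reshape(X_train_orig.shape[0], -1).T / 255.
X_test = X_test_orig.reshape(X_test_orig.shape[0], -1).T / 255.

# One-hot encode the labels for the 6 classes
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

parameters = model(X_train, Y_train, X_test, Y_test)
```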
