
I have a very common use case: freeze the lower layers of Inception and train only the top two layers, after which I lower the learning rate and fine-tune the whole Inception model. How do I resume training from an Inception-v3 checkpoint with a different set of trainable variables?
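For reference, the get_variables_to_train() helper used in the first code block below is not shown; roughly, it is meant to pick out only the variables of the new top layers by scope name. A minimal sketch of that idea, assuming the top layers live under the InceptionV3/Logits and InceptionV3/AuxLogits scopes (the scope names are only illustrative), would be:

import tensorflow as tf

def get_variables_to_train(trainable_scopes=('InceptionV3/Logits',
                                             'InceptionV3/AuxLogits')):
    # Collect only the trainable variables under the given scopes, so the
    # optimizer updates just the new top layers and the rest stay frozen.
    variables_to_train = []
    for scope in trainable_scopes:
        variables_to_train.extend(
            tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope))
    return variables_to_train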

Here is the first part that I run:

import tensorflow as tf
import tensorflow.contrib.slim as slim

# `inception` comes from the TF-Slim models library (e.g. `from nets import inception`);
# get_dataset, load_batch, get_init_fn, get_variables_to_train and gpu_options are
# helpers defined elsewhere in my notebook.
train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model; use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()

    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.001, decay=0.9,
                                          momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer,
                                             variables_to_train=get_variables_to_train())

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=4500,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)
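The get_init_fn() helper is also defined elsewhere; presumably it restores the pretrained Inception-v3 weights while excluding the new 5-class logits layers, which are trained from scratch. A rough sketch of that idea (the checkpoint path is only a placeholder):

import tensorflow.contrib.slim as slim

def get_init_fn(checkpoint_path='/path/to/inception_v3.ckpt'):
    # Restore every variable from the pretrained checkpoint except the new
    # logits layers, which do not exist in that checkpoint.
    exclusions = ['InceptionV3/Logits', 'InceptionV3/AuxLogits']
    variables_to_restore = slim.get_variables_to_restore(exclude=exclusions)
    return slim.assign_from_checkpoint_fn(checkpoint_path, variables_to_restore)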

The first part runs fine. Then I run the second part of my code:

train_dir = '/home/ubuntu/pynb/TF play/log-inceptionv3flowers'
with tf.Graph().as_default():
    tf.logging.set_verbosity(tf.logging.INFO)

    dataset = get_dataset()
    images, _, labels = load_batch(dataset, batch_size=32)

    # Create the model; use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(images, num_classes=5, is_training=True)

    # Specify the loss function:
    one_hot_labels = slim.one_hot_encoding(labels, 5)
    tf.losses.softmax_cross_entropy(one_hot_labels, logits)
    total_loss = tf.losses.get_total_loss()

    # Create some summaries to visualize the training process:
    tf.summary.scalar('losses/Total Loss', total_loss)

    # Specify the optimizer and create the train op:
    optimizer = tf.train.RMSPropOptimizer(0.0001, decay=0.9,
                                          momentum=0.9, epsilon=1.0)
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    # Run the training:
    final_loss = slim.learning.train(
        train_op,
        logdir=train_dir,
        init_fn=get_init_fn(),
        number_of_steps=10000,
        save_summaries_secs=30,
        save_interval_secs=30,
        session_config=tf.ConfigProto(gpu_options=gpu_options))

print('Finished training. Last batch loss %f' % final_loss)

Note that in this second part I do not pass anything to the variables_to_train argument of create_train_op. Then this error shows up:

NotFoundError (see above for traceback): Key InceptionV3/Conv2d_4a_3x3/BatchNorm/beta/RMSProp not found in checkpoint 
    [[Node: save_1/RestoreV2_49 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_49/tensor_names, save_1/RestoreV2_49/shape_and_slices)]] 
    [[Node: save_1/Assign_774/_1550 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_2911_save_1/Assign_774", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]] 

I suspect it is looking for the RMSProp variables of the InceptionV3/Conv2d_4a_3x3 layer, which do not exist because I did not train that layer in the run that produced the previous checkpoint. I am not sure how to achieve what I want, because there is no example in the documentation of how to do this.
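To check whether those RMSProp slot variables really are missing, one way is to list the keys stored in the checkpoint written by the first run, for example:

import tensorflow as tf

# train_dir is the log directory used by the first run.
ckpt_path = tf.train.latest_checkpoint(train_dir)
reader = tf.train.NewCheckpointReader(ckpt_path)
# Print every variable name saved in the checkpoint; the */RMSProp keys for the
# frozen layers should be absent, since those slots were never created.
for name in sorted(reader.get_variable_to_shape_map()):
    print(name)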
