
When training on my dataset, I get the following error (Caffe: "File is empty" failure during training):

I0614 19:07:11.271327 30865 layer_factory.hpp:77] Creating layer data 
I0614 19:07:11.271596 30865 net.cpp:84] Creating Layer data 
I0614 19:07:11.271848 30865 net.cpp:380] data -> data 
I0614 19:07:11.271896 30865 net.cpp:380] data -> label 
I0614 19:07:11.271941 30865 data_transformer.cpp:25] Loading mean file from: train_mean 
I0614 19:07:11.275465 30865 image_data_layer.cpp:38] Opening file 
F0614 19:07:11.275923 30865 image_data_layer.cpp:49] Check failed: !lines_.empty() File is empty 
*** Check failure stack trace: *** 
    @  0x7fba518d25cd google::LogMessage::Fail() 
    @  0x7fba518d4433 google::LogMessage::SendToLog() 
    @  0x7fba518d215b google::LogMessage::Flush() 
    @  0x7fba518d4e1e google::LogMessageFatal::~LogMessageFatal() 
    @  0x7fba51ce9509 caffe::ImageDataLayer<>::DataLayerSetUp() 
    @  0x7fba51d1f62e caffe::BasePrefetchingDataLayer<>::LayerSetUp() 
    @  0x7fba51de7897 caffe::Net<>::Init() 
    @  0x7fba51de9fde caffe::Net<>::Net() 
    @  0x7fba51df24e5 caffe::Solver<>::InitTrainNet() 
    @  0x7fba51df3925 caffe::Solver<>::Init() 
    @  0x7fba51df3c4f caffe::Solver<>::Solver() 
    @  0x7fba51dc8bb1 caffe::Creator_SGDSolver<>() 
    @   0x40a4b8 train() 
    @   0x406fa0 main 
    @  0x7fba50843830 __libc_start_main 
    @   0x4077c9 _start 
    @    (nil) (unknown) 
Aborted (core dumped) 

I am using a standard installation of Caffe from its GitHub repository.

I created a subdirectory named playground under the Caffe root directory.

I have attached the complete folder for reproducibility: GitHub Link

The commands I have executed successfully:

../build/tools/convert_imageset -resize_height 256 -resize_width 256 train_raw_img/ train_files.txt train_lmdb 
../build/tools/convert_imageset -resize_height 256 -resize_width 256 test_raw_img/ test_files.txt test_lmdb 
../build/tools/compute_image_mean train_lmdb train_mean 
../build/tools/compute_image_mean train_lmdb test_mean 

However, when I proceed to train the network, I receive the error above:

../build/tools/caffe train --solver=my_solver_val.prototxt 

The complete error log:

I0614 19:32:54.634418 31048 caffe.cpp:211] Use CPU. 
I0614 19:32:54.635144 31048 solver.cpp:44] Initializing solver from parameters: 
test_iter: 1000 
test_interval: 1000 
base_lr: 0.01 
display: 20 
max_iter: 50000 
lr_policy: "step" 
gamma: 0.1 
momentum: 0.9 
weight_decay: 0.0005 
stepsize: 10000 
snapshot: 10000 
snapshot_prefix: "models/mymodel/caffenet_train" 
solver_mode: CPU 
net: "my_train_val.prototxt" 
train_state { 
    level: 0 
    stage: "" 
} 
I0614 19:32:54.639066 31048 solver.cpp:87] Creating training net from net file: my_train_val.prototxt 
I0614 19:32:54.640214 31048 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer data 
I0614 19:32:54.640645 31048 net.cpp:294] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy 
I0614 19:32:54.641345 31048 net.cpp:51] Initializing net from parameters: 
name: "CaffeNet" 
state { 
    phase: TRAIN 
    level: 0 
    stage: "" 
} 
layer { 
    name: "data" 
    type: "ImageData" 
    top: "data" 
    top: "label" 
    include { 
    phase: TRAIN 
    } 
    transform_param { 
    mirror: true 
    crop_size: 256 
    mean_file: "train_mean" 
    } 
    data_param { 
    source: "train_files.txt" 
    batch_size: 2 
    backend: LMDB 
    } 
} 
layer { 
    name: "conv1" 
    type: "Convolution" 
    bottom: "data" 
    top: "conv1" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    convolution_param { 
    num_output: 96 
    kernel_size: 11 
    stride: 4 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 0 
    } 
    } 
} 
layer { 
    name: "relu1" 
    type: "ReLU" 
    bottom: "conv1" 
    top: "conv1" 
} 
layer { 
    name: "pool1" 
    type: "Pooling" 
    bottom: "conv1" 
    top: "pool1" 
    pooling_param { 
    pool: MAX 
    kernel_size: 3 
    stride: 2 
    } 
} 
layer { 
    name: "norm1" 
    type: "LRN" 
    bottom: "pool1" 
    top: "norm1" 
    lrn_param { 
    local_size: 5 
    alpha: 0.0001 
    beta: 0.75 
    } 
} 
layer { 
    name: "conv2" 
    type: "Convolution" 
    bottom: "norm1" 
    top: "conv2" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    convolution_param { 
    num_output: 256 
    pad: 2 
    kernel_size: 5 
    group: 2 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 1 
    } 
    } 
} 
layer { 
    name: "relu2" 
    type: "ReLU" 
    bottom: "conv2" 
    top: "conv2" 
} 
layer { 
    name: "pool2" 
    type: "Pooling" 
    bottom: "conv2" 
    top: "pool2" 
    pooling_param { 
    pool: MAX 
    kernel_size: 3 
    stride: 2 
    } 
} 
layer { 
    name: "norm2" 
    type: "LRN" 
    bottom: "pool2" 
    top: "norm2" 
    lrn_param { 
    local_size: 5 
    alpha: 0.0001 
    beta: 0.75 
    } 
} 
layer { 
    name: "conv3" 
    type: "Convolution" 
    bottom: "norm2" 
    top: "conv3" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    convolution_param { 
    num_output: 384 
    pad: 1 
    kernel_size: 3 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 0 
    } 
    } 
} 
layer { 
    name: "relu3" 
    type: "ReLU" 
    bottom: "conv3" 
    top: "conv3" 
} 
layer { 
    name: "conv4" 
    type: "Convolution" 
    bottom: "conv3" 
    top: "conv4" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    convolution_param { 
    num_output: 384 
    pad: 1 
    kernel_size: 3 
    group: 2 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 1 
    } 
    } 
} 
layer { 
    name: "relu4" 
    type: "ReLU" 
    bottom: "conv4" 
    top: "conv4" 
} 
layer { 
    name: "conv5" 
    type: "Convolution" 
    bottom: "conv4" 
    top: "conv5" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    convolution_param { 
    num_output: 256 
    pad: 1 
    kernel_size: 3 
    group: 2 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 1 
    } 
    } 
} 
layer { 
    name: "relu5" 
    type: "ReLU" 
    bottom: "conv5" 
    top: "conv5" 
} 
layer { 
    name: "pool5" 
    type: "Pooling" 
    bottom: "conv5" 
    top: "pool5" 
    pooling_param { 
    pool: MAX 
    kernel_size: 3 
    stride: 2 
    } 
} 
layer { 
    name: "fc6" 
    type: "InnerProduct" 
    bottom: "pool5" 
    top: "fc6" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    inner_product_param { 
    num_output: 4096 
    weight_filler { 
     type: "gaussian" 
     std: 0.005 
    } 
    bias_filler { 
     type: "constant" 
     value: 1 
    } 
    } 
} 
layer { 
    name: "relu6" 
    type: "ReLU" 
    bottom: "fc6" 
    top: "fc6" 
} 
layer { 
    name: "drop6" 
    type: "Dropout" 
    bottom: "fc6" 
    top: "fc6" 
    dropout_param { 
    dropout_ratio: 0.5 
    } 
} 
layer { 
    name: "fc7" 
    type: "InnerProduct" 
    bottom: "fc6" 
    top: "fc7" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    inner_product_param { 
    num_output: 4096 
    weight_filler { 
     type: "gaussian" 
     std: 0.005 
    } 
    bias_filler { 
     type: "constant" 
     value: 1 
    } 
    } 
} 
layer { 
    name: "relu7" 
    type: "ReLU" 
    bottom: "fc7" 
    top: "fc7" 
} 
layer { 
    name: "drop7" 
    type: "Dropout" 
    bottom: "fc7" 
    top: "fc7" 
    dropout_param { 
    dropout_ratio: 0.5 
    } 
} 
layer { 
    name: "fc8" 
    type: "InnerProduct" 
    bottom: "fc7" 
    top: "fc8" 
    param { 
    lr_mult: 1 
    decay_mult: 1 
    } 
    param { 
    lr_mult: 2 
    decay_mult: 0 
    } 
    inner_product_param { 
    num_output: 2 
    weight_filler { 
     type: "gaussian" 
     std: 0.01 
    } 
    bias_filler { 
     type: "constant" 
     value: 0 
    } 
    } 
} 
layer { 
    name: "loss" 
    type: "SoftmaxWithLoss" 
    bottom: "fc8" 
    bottom: "label" 
    top: "loss" 
} 
I0614 19:32:54.644022 31048 layer_factory.hpp:77] Creating layer data 
I0614 19:32:54.644239 31048 net.cpp:84] Creating Layer data 
I0614 19:32:54.644256 31048 net.cpp:380] data -> data 
I0614 19:32:54.644280 31048 net.cpp:380] data -> label 
I0614 19:32:54.644448 31048 data_transformer.cpp:25] Loading mean file from: train_mean 
I0614 19:32:54.646653 31048 image_data_layer.cpp:38] Opening file 
F0614 19:32:54.646975 31048 image_data_layer.cpp:49] Check failed: !lines_.empty() File is empty 
*** Check failure stack trace: *** 
    @  0x7f83c21c95cd google::LogMessage::Fail() 
    @  0x7f83c21cb433 google::LogMessage::SendToLog() 
    @  0x7f83c21c915b google::LogMessage::Flush() 
    @  0x7f83c21cbe1e google::LogMessageFatal::~LogMessageFatal() 
    @  0x7f83c25e0509 caffe::ImageDataLayer<>::DataLayerSetUp() 
    @  0x7f83c261662e caffe::BasePrefetchingDataLayer<>::LayerSetUp() 
    @  0x7f83c26de897 caffe::Net<>::Init() 
    @  0x7f83c26e0fde caffe::Net<>::Net() 
    @  0x7f83c26e94e5 caffe::Solver<>::InitTrainNet() 
    @  0x7f83c26ea925 caffe::Solver<>::Init() 
    @  0x7f83c26eac4f caffe::Solver<>::Solver() 
    @  0x7f83c26bfbb1 caffe::Creator_SGDSolver<>() 
    @   0x40a4b8 train() 
    @   0x406fa0 main 
    @  0x7f83c113a830 __libc_start_main 
    @   0x4077c9 _start 
    @    (nil) (unknown) 
Aborted (core dumped) 

The error you listed is only the crash portion of the problem. Scroll up in the log file for a more detailed message. There should be one that gives you the fully qualified name of the empty file. – Prune


But the error seems quite clear... the file is empty! – Eliethesaiyan


It looks like your `mean file` is empty. Check its size. –

Answer


You are using an `"ImageData"` input layer. This layer takes a text file (in your case `source: "train_files.txt"`) and expects every line of that file to contain the path to an image file followed by the classification label for that image.
It appears that this file (`train_files.txt`) is empty in your case.
1. Verify that `train_files.txt` actually lists image file names.
2. Verify that the listed image files exist on your machine and that you have read permission for them.
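For reference, here is a minimal sketch of what the source file for an `"ImageData"` layer should contain; the image names and labels below are hypothetical, and each line is an image path followed by an integer class label:

```shell
# Hypothetical entries -- substitute your own image names and labels.
printf '%s\n' \
  'cat_001.jpg 0' \
  'dog_001.jpg 1' > train_files.txt

# Quick sanity check: a count of 0 here is exactly what triggers the
# "Check failed: !lines_.empty() File is empty" failure above.
wc -l < train_files.txt
```

If the count is zero, regenerate the listing file before training.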

BTW, if you already went to all the trouble of creating `train_lmdb`, why not use a `"Data"` input layer to read the lmdb directly?
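A `"Data"` layer reading the LMDB directly might look like the following sketch; the `transform_param` values are copied from the question's `ImageData` layer, and paths are assumed relative to the working directory:

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 256
    mean_file: "train_mean"
  }
  data_param {
    source: "train_lmdb"   # the LMDB created by convert_imageset
    batch_size: 2
    backend: LMDB
  }
}
```

Note that in the question's prototxt, `backend: LMDB` appears inside the `ImageData` layer's `data_param` even though `source` points at a text file; with a `"Data"` layer the `source` must point at the LMDB directory itself.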
