
I am trying to run an example from Torch7, only to run into this error: attempt to index local 'f' (a nil value) (Lua/Torch7).

 
    sandesh@...:~/Downloads/tutorials-master/2_supervised$ luajit doall.lua 
    ==> processing options 
    ==> executing all 
    ==> downloading dataset 
    ==> using regular, full training data 
    ==> loading dataset 
    ==> preprocessing data 
    ==> preprocessing data: colorspace RGB -> YUV 
    ==> preprocessing data: normalize each feature (channel) globally 
    ==> preprocessing data: normalize all three channels locally 
    ==> verify statistics 
    training data, y-channel, mean: 0.00067706172257129 
    training data, y-channel, standard deviation: 0.39473240322794 
    test data, y-channel, mean: -0.0010822884348063 
    test data, y-channel, standard deviation: 0.38091408093043 
    training data, u-channel, mean: -0.0048219975630079 
    training data, u-channel, standard deviation: 0.29768662619471 
    test data, u-channel, mean: -0.0030795217110624 
    test data, u-channel, standard deviation: 0.22289780235542 
    training data, v-channel, mean: 0.0036312269637064 
    training data, v-channel, standard deviation: 0.25405592463897 
    test data, v-channel, mean: 0.0033847450016769 
    test data, v-channel, standard deviation: 0.20362829592977 
    ==> visualizing data 
    ==> define parameters 
    ==> construct model 
    ==> here is the model: 
    nn.Sequential { 
      [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> output] 
      (1): nn.SpatialConvolutionMM(3 -> 64, 5x5) 
      (2): nn.Tanh 
      (3): nn.Sequential { 
        [input -> (1) -> (2) -> (3) -> (4) -> output] 
        (1): nn.Square 
        (2): nn.SpatialAveragePooling(2,2,2,2) 
        (3): nn.MulConstant 
        (4): nn.Sqrt 
      } 
      (4): nn.SpatialSubtractiveNormalization 
      (5): nn.SpatialConvolutionMM(64 -> 64, 5x5) 
      (6): nn.Tanh 
      (7): nn.Sequential { 
        [input -> (1) -> (2) -> (3) -> (4) -> output] 
        (1): nn.Square 
        (2): nn.SpatialAveragePooling(2,2,2,2) 
        (3): nn.MulConstant 
        (4): nn.Sqrt 
      } 
      (8): nn.SpatialSubtractiveNormalization 
      (9): nn.Reshape(1600) 
      (10): nn.Linear(1600 -> 128) 
      (11): nn.Tanh 
      (12): nn.Linear(128 -> 10) 
    } 
    ==> define loss 
    ==> here is the loss function: 
    nn.ClassNLLCriterion 
    ==> defining some tools 
    luajit: /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: attempt to index local 'f' (a nil value) 
    stack traceback: 
     /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: in function 'execute' 
     /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:71: in function 'uname' 
     /home/sandesh/torch/install/share/lua/5.1/optim/Logger.lua:38: in function '__init' 
     /home/sandesh/torch/install/share/lua/5.1/torch/init.lua:91: in function 
     [C]: in function 'Logger' 
     4_train.lua:60: in main chunk 
     [C]: in function 'dofile' 
     doall.lua:70: in main chunk 
     [C]: at 0x00406670 


I have not changed any code in any of the Lua files...

Here is the 4_train.lua file:



    ---------------------------------------------------------------------- 
    -- This script demonstrates how to define a training procedure, 
    -- irrespective of the model/loss functions chosen. 
    -- 
    -- It shows how to: 
    -- + construct mini-batches on the fly 
    -- + define a closure to estimate (a noisy) loss 
    --  function, as well as its derivatives wrt the parameters of the 
    --  model to be trained 
    -- + optimize the function, according to several optimization 
    --  methods: SGD, L-BFGS. 
    -- 
    -- Clement Farabet 
    ---------------------------------------------------------------------- 

    require 'torch' -- torch 
    require 'xlua' -- xlua provides useful tools, like progress bars 
    require 'optim' -- an optimization package, for online and batch methods 

    ---------------------------------------------------------------------- 
    -- parse command line arguments 
    if not opt then 
     print '==> processing options' 
     cmd = torch.CmdLine() 
     cmd:text() 
     cmd:text('SVHN Training/Optimization') 
     cmd:text() 
     cmd:text('Options:') 
     cmd:option('-save', 'results', 'subdirectory to save/log experiments in') 
     cmd:option('-visualize', false, 'visualize input data and weights during training') 
     cmd:option('-plot', false, 'live plot') 
     cmd:option('-optimization', 'SGD', 'optimization method: SGD | ASGD | CG | LBFGS') 
     cmd:option('-learningRate', 1e-3, 'learning rate at t=0') 
     cmd:option('-batchSize', 1, 'mini-batch size (1 = pure stochastic)') 
     cmd:option('-weightDecay', 0, 'weight decay (SGD only)') 
     cmd:option('-momentum', 0, 'momentum (SGD only)') 
     cmd:option('-t0', 1, 'start averaging at t0 (ASGD only), in nb of epochs') 
     cmd:option('-maxIter', 2, 'maximum nb of iterations for CG and LBFGS') 
     cmd:text() 
     opt = cmd:parse(arg or {}) 
    end 

    ---------------------------------------------------------------------- 
    -- CUDA? 
    if opt.type == 'cuda' then 
     model:cuda() 
     criterion:cuda() 
    end 

    ---------------------------------------------------------------------- 
    print '==> defining some tools' 

    -- classes 
    classes = {'1','2','3','4','5','6','7','8','9','0'} 

    -- This matrix records the current confusion across classes 
    confusion = optim.ConfusionMatrix(classes) 

    -- Log results to files 
    trainLogger = optim.Logger(paths.concat(opt.save, 'train.log')) 
    testLogger = optim.Logger(paths.concat(opt.save, 'test.log')) 

    -- Retrieve parameters and gradients: 
    -- this extracts and flattens all the trainable parameters of the model 
    -- into a 1-dim vector 
    if model then 
     parameters,gradParameters = model:getParameters() 
    end 

    ---------------------------------------------------------------------- 
    print '==> configuring optimizer' 

    if opt.optimization == 'CG' then 
     optimState = { 
      maxIter = opt.maxIter 
     } 
     optimMethod = optim.cg 

    elseif opt.optimization == 'LBFGS' then 
     optimState = { 
      learningRate = opt.learningRate, 
      maxIter = opt.maxIter, 
      nCorrection = 10 
     } 
     optimMethod = optim.lbfgs 

    elseif opt.optimization == 'SGD' then 
     optimState = { 
      learningRate = opt.learningRate, 
      weightDecay = opt.weightDecay, 
      momentum = opt.momentum, 
      learningRateDecay = 1e-7 
     } 
     optimMethod = optim.sgd 

    elseif opt.optimization == 'ASGD' then 
     optimState = { 
      eta0 = opt.learningRate, 
      t0 = trsize * opt.t0 
     } 
     optimMethod = optim.asgd 

    else 
     error('unknown optimization method') 
    end 

    ---------------------------------------------------------------------- 
    print '==> defining training procedure' 

    function train() 

     -- epoch tracker 
     epoch = epoch or 1 

     -- local vars 
     local time = sys.clock() 

     -- set model to training mode (for modules that differ in training and testing, like Dropout) 
     model:training() 

     -- shuffle at each epoch 
     shuffle = torch.randperm(trsize) 

     -- do one epoch 
     print('==> doing epoch on training data:') 
     print("==> online epoch # " .. epoch .. ' [batchSize = ' .. opt.batchSize .. ']') 
     for t = 1,trainData:size(),opt.batchSize do 
      -- disp progress 
      xlua.progress(t, trainData:size()) 

      -- create mini batch 
      local inputs = {} 
      local targets = {} 
      for i = t,math.min(t+opt.batchSize-1,trainData:size()) do 
        -- load new sample 
        local input = trainData.data[shuffle[i]] 
        local target = trainData.labels[shuffle[i]] 
        if opt.type == 'double' then input = input:double() 
        elseif opt.type == 'cuda' then input = input:cuda() end 
        table.insert(inputs, input) 
        table.insert(targets, target) 
      end 

      -- create closure to evaluate f(X) and df/dX 
      local feval = function(x) 
          -- get new parameters 
          if x ~= parameters then 
           parameters:copy(x) 
          end 

          -- reset gradients 
          gradParameters:zero() 

          -- f is the average of all criterions 
          local f = 0 

          -- evaluate function for complete mini batch 
          for i = 1,#inputs do 
           -- estimate f 
           local output = model:forward(inputs[i]) 
           local err = criterion:forward(output, targets[i]) 
           f = f + err 

           -- estimate df/dW 
           local df_do = criterion:backward(output, targets[i]) 
           model:backward(inputs[i], df_do) 

           -- update confusion 
           confusion:add(output, targets[i]) 
          end 

          -- normalize gradients and f(X) 
          gradParameters:div(#inputs) 
          f = f/#inputs 

          -- return f and df/dX 
          return f,gradParameters 
         end 

      -- optimize on current mini-batch 
      if optimMethod == optim.asgd then 
        _,_,average = optimMethod(feval, parameters, optimState) 
      else 
        optimMethod(feval, parameters, optimState) 
      end 
     end 

     -- time taken 
     time = sys.clock() - time 
     time = time/trainData:size() 
     print("\n==> time to learn 1 sample = " .. (time*1000) .. 'ms') 

     -- print confusion matrix 
     print(confusion) 

     -- update logger/plot 
     trainLogger:add{['% mean class accuracy (train set)'] = confusion.totalValid * 100} 
     if opt.plot then 
      trainLogger:style{['% mean class accuracy (train set)'] = '-'} 
      trainLogger:plot() 
     end 

     -- save/log current net 
     local filename = paths.concat(opt.save, 'model.net') 
     os.execute('mkdir -p ' .. sys.dirname(filename)) 
     print('==> saving model to '..filename) 
     torch.save(filename, model) 

     -- next epoch 
     confusion:zero() 
     epoch = epoch + 1 
    end 



And here is doall.lua:



    ---------------------------------------------------------------------- 
    -- This tutorial shows how to train different models on the street 
    -- view house number dataset (SVHN), 
    -- using multiple optimization techniques (SGD, ASGD, CG), and 
    -- multiple types of models. 
    -- 
    -- This script demonstrates a classical example of training 
    -- well-known models (convnet, MLP, logistic regression) 
    -- on a 10-class classification problem. 
    -- 
    -- It illustrates several points: 
    -- 1/ description of the model 
    -- 2/ choice of a loss function (criterion) to minimize 
    -- 3/ creation of a dataset as a simple Lua table 
    -- 4/ description of training and test procedures 
    -- 
    -- Clement Farabet 
    ---------------------------------------------------------------------- 
    require 'torch' 

    ---------------------------------------------------------------------- 
    print '==> processing options' 

    cmd = torch.CmdLine() 
    cmd:text() 
    cmd:text('SVHN Loss Function') 
    cmd:text() 
    cmd:text('Options:') 
    -- global: 
    cmd:option('-seed', 1, 'fixed input seed for repeatable experiments') 
    cmd:option('-threads', 2, 'number of threads') 
    -- data: 
    cmd:option('-size', 'full', 'how many samples do we load: small | full | extra') 
    -- model: 
    cmd:option('-model', 'convnet', 'type of model to construct: linear | mlp | convnet') 
    -- loss: 
    cmd:option('-loss', 'nll', 'type of loss function to minimize: nll | mse | margin') 
    -- training: 
    cmd:option('-save', 'results', 'subdirectory to save/log experiments in') 
    cmd:option('-plot', false, 'live plot') 
    cmd:option('-optimization', 'SGD', 'optimization method: SGD | ASGD | CG | LBFGS') 
    cmd:option('-learningRate', 1e-3, 'learning rate at t=0') 
    cmd:option('-batchSize', 1, 'mini-batch size (1 = pure stochastic)') 
    cmd:option('-weightDecay', 0, 'weight decay (SGD only)') 
    cmd:option('-momentum', 0, 'momentum (SGD only)') 
    cmd:option('-t0', 1, 'start averaging at t0 (ASGD only), in nb of epochs') 
    cmd:option('-maxIter', 2, 'maximum nb of iterations for CG and LBFGS') 
    cmd:option('-type', 'double', 'type: double | float | cuda') 
    cmd:text() 
    opt = cmd:parse(arg or {}) 

    -- nb of threads and fixed seed (for repeatable experiments) 
    if opt.type == 'float' then 
     print('==> switching to floats') 
     torch.setdefaulttensortype('torch.FloatTensor') 
    elseif opt.type == 'cuda' then 
     print('==> switching to CUDA') 
     require 'cunn' 
     torch.setdefaulttensortype('torch.FloatTensor') 
    end 
    torch.setnumthreads(opt.threads) 
    torch.manualSeed(opt.seed) 

    ---------------------------------------------------------------------- 
    print '==> executing all' 

    dofile '1_data.lua' 
    dofile '2_model.lua' 
    dofile '3_loss.lua' 
    dofile '4_train.lua' 
    dofile '5_test.lua' 

    ---------------------------------------------------------------------- 
    print '==> training!' 

    while true do 
     train() 
     test() 
    end 



The GitHub link is https://github.com/torch/tutorials/blob/master/2_supervised/4_train.lua

Also, I am not using CUDA, because I do not have a GPU.


This is not a debugging service! You can't just paste code and an error message and expect others to do your work... – Piglet

Answer


I won't tell you what is wrong, because you have shown no effort to solve the problem yourself. But I will tell you how to proceed.

    luajit: /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: attempt to index local 'f' (a nil value) 
    stack traceback: 
     /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:38: in function 'execute' 
     /home/sandesh/torch/install/share/lua/5.1/sys/init.lua:71: in function 'uname' 
     /home/sandesh/torch/install/share/lua/5.1/optim/Logger.lua:38: in function '__init' 
     /home/sandesh/torch/install/share/lua/5.1/torch/init.lua:91: in function 
     [C]: in function 'Logger' 

This tells you that some local 'f' at line 38 of init.lua is nil, and that is what causes the problem. So open that file and find out where the value of 'f' is supposed to come from and why it is nil, then fix that. Also check whether there is a newer version of Torch that handles 'f' correctly; if not, change the code yourself if possible. Otherwise, try to prevent the situation by validating the inputs you pass to Torch.
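
For what it's worth, here is a minimal sketch of the likely failure mode, assuming sys.execute wraps io.popen (which the 'execute' frame in the traceback suggests; the function below is illustrative, not the actual sys code). io.popen returns nil when the shell command cannot be spawned, and calling a method on that nil handle raises exactly this error:

    -- Illustrative sketch only; assumes the real sys.execute calls io.popen
    -- and reads from the returned handle without checking it.
    local function execute(cmd)
       local f = io.popen(cmd)      -- may return nil if the command cannot be spawned
       if not f then
          error('io.popen failed for command: ' .. cmd)
       end
       local s = f:read('*all')     -- with no nil check, this is where "attempt to
       f:close()                    -- index local 'f' (a nil value)" would surface
       return s
    end

    print(execute('uname -a'))     -- sys.uname() shells out to a command like this

If a guarded version like this still fails, the clearer error message at least tells you the problem is spawning the process itself (environment, shell, or resource limits) rather than the Torch code.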


Looks like a bug; I get this occasionally as a result of requiring 'paths'. – Oliver
