According to this answer (https://groups.google.com/forum/#!msg/torch7/rmIcBYCiFG8/IC68Xzd3DgAJ), it seems that Torch automatically frees tensors that are no longer used. I have already run into this: tensor pointers were released automatically on the Lua side, causing a segfault in the C program. However, from my experiments, *not* calling free manually in the C code leads to a memory leak, and the program's memory keeps growing. Does the Lua garbage collector actually collect Torch tensors pushed from C?
Here is my test code:
#include <iostream>
#include <chrono>
#include <thread>

#include <lua.hpp>
extern "C" {
#include <TH.h>
#include <luaT.h>
}

using namespace std;

int main(int argc, char** argv)
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    lua_getglobal(L, "require");
    lua_pushstring(L, "torch");
    lua_pcall(L, 1, 0, 0); // require "torch"

    int h = 224, w = 224, nb_channel = 3;
    int len = h * w * nb_channel;
    long stride_1 = h * w;
    long stride_2 = w;
    long stride_3 = 1;

    for (size_t i = 0; i < 1000000; ++i)
    {
        float* tensorData = (float*)malloc(sizeof(float) * len);             // Should be freed when the storage is freed
        THFloatStorage* storage = THFloatStorage_newWithData(tensorData, len); // Memory released when the tensor `input` is freed by the Lua garbage collector
        THFloatTensor* input = THFloatTensor_newWithStorage3d(storage,
            0,                    // Offset
            nb_channel, stride_1, // Channels
            h, stride_2,          // Height
            w, stride_3);         // Width

        // Do some initialisation of the tensor...

        luaT_pushudata(L, (void*)input, "torch.FloatTensor"); // Send the tensor to Lua

        // Do some stuff with the input... (call Lua/Torch scripts...)

        lua_pop(L, 1);

        // If these two lines are not present, we get a memory leak
        THFloatTensor_free(input);   // Will sometimes segfault, as the tensor seems to have already been deleted by the Lua garbage collector
        THFloatStorage_free(storage);

        if (i % 10000 == 0)
            std::cout << i << std::endl;
        std::this_thread::sleep_for(std::chrono::microseconds(1));
    }
    return 0;
}
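For comparison, here is a sketch of the loop body with the explicit frees removed, relying entirely on Lua's collector. This is only an assumption on my part: it assumes luaT_pushudata hands ownership of the reference to Lua (so torch.FloatTensor's __gc eventually calls THFloatTensor_free, which releases the storage and the malloc'd data), and it uses lua_gc to force a collection cycle so the reclaim is not deferred. I have not verified that this avoids both the leak and the segfault.

        // Sketch (untested): let Lua own the tensor, no explicit frees on the C side
        float* tensorData = (float*)malloc(sizeof(float) * len);
        THFloatStorage* storage = THFloatStorage_newWithData(tensorData, len);
        THFloatTensor* input = THFloatTensor_newWithStorage3d(storage,
            0,
            nb_channel, stride_1,
            h, stride_2,
            w, stride_3);

        luaT_pushudata(L, (void*)input, "torch.FloatTensor"); // Lua now (presumably) owns the reference

        // ... use the tensor from Lua ...

        lua_pop(L, 1);

        // No THFloatTensor_free / THFloatStorage_free here; instead force a
        // full GC cycle so the popped userdata is collected promptly.
        lua_gc(L, LUA_GCCOLLECT, 0);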
How can memory be managed correctly between C and Lua, avoiding segfaults without introducing memory leaks?
This is not C code. – Olaf
I know, but the question concerns the C libraries (luaT.h, lua.h). The C++ parts of the example are irrelevant to the problem. – Conchylicultor