2013-10-22

Continuing with my computer-vision work, I have reached the point where I compute the descriptor of a patch in each of N cameras. The descriptor computation is done with the OpenCV call

descriptor.compute(image, vecKeypoints, matDescriptors); 

where vecKeypoints is a std::vector<cv::KeyPoint> and matDescriptors is a cv::Mat that, according to the OpenCV documentation, gets filled with the computed descriptors. Since I have N cameras and compute several descriptors per camera, I end up storing K descriptors for each of the N cameras. So I created a vector of descriptors (i.e., of matrices):

std::vector<cv::Mat> descriptors; 

On every iteration I compute a new matDescriptors and push it into the vector descriptors. The problem I see is that the data address stored in matDescriptors is the same for every element of the vector descriptors.

As far as I know, when I do vector.push_back(arg) a copy of arg is made and stored in the vector, so why do I get the same address? Shouldn't &(descriptors[0].data) be different from &(descriptors[1].data)?

Here is the code:

std::vector<Pixel> patchPos; 
std::vector<Pixel> disparityPatches; 

//cv::Ptr<cv::DescriptorExtractor> descriptor = cv::DescriptorExtractor::create("ORB"); 
cv::ORB descriptor(0, 1.2f, 8, 0); 
std::vector<cv::Mat> camsDescriptors; 
std::vector<cv::Mat> refsDescriptors; 

uint iPatchV = 0; 
uint iPatchH = 0; 

// FOR EACH BLOCK OF PATCHES (there are 'blockSize' patches in one block) 
for (uint iBlock = 0; iBlock < nBlocks; iBlock++) 
{ 
    // FOR EACH PATCH IN THE BLOCK 
    for(uint iPatch = iBlock*blockSize; iPatch < (iBlock*blockSize)+blockSize; iPatch++) 
    { 
     // GET THE POSITION OF THE upper-left CORNER(row, col) AND 
     // STORE THE COORDINATES OF THE PIXELS INSIDE THE PATCH 
     for (uint pRow = (iPatch*patchStep)/camRef->getWidth(), pdRow = 0; pRow < iPatchV+patchSize; pRow++, pdRow++) 
     { 
      for (uint pCol = (iPatch*patchStep)%camRef->getWidth(), pdCol = 0; pCol < iPatchH+patchSize; pCol++, pdCol++) 
      { 
       patchPos.push_back(Pixel(pCol, pRow)); 
      } 
     } 

     // KEYPOINT TO GET THE DESCRIPTOR OF THE CURRENT PATCH IN THE REFERENCE CAMERA 
     std::vector<cv::KeyPoint> refPatchKeyPoint; 
     //   patchCenter*patchSize+patchCenter IS the index of the center pixel after 'linearizing' the patch 
     refPatchKeyPoint.push_back(cv::KeyPoint(patchPos[patchCenter*patchSize+patchCenter].getX(), 
               patchPos[patchCenter*patchSize+patchCenter].getY(), patchSize)); 

     // COMPUTE THE DESCRIPTOR OF THE PREVIOUS KEYPOINT 
     cv::Mat d; 
     descriptor.compute(Image(camRef->getHeight(), camRef->getWidth(), CV_8U, (uchar*)camRef->getData()), 
          refPatchKeyPoint, d); 
     refsDescriptors.push_back(d); // This is OK, address X has data of 'd' 

     //FOR EVERY OTHER CAMERA 
     for (uint iCam = 0; iCam < nTotalCams-1; iCam++) 
     { 
      //FOR EVERY DISPARITY LEVEL 
      for (uint iDispLvl = 0; iDispLvl < disparityLevels; iDispLvl++) 
      { 
       ... 
       ... 

       //COMPUTE THE DISPARITY FOR EACH OF THE PIXEL COORDINATES IN THE PATCH 
       for (uint iPatchPos = 0; iPatchPos < patchPos.size(); iPatchPos++) 
       { 
        disparityPatches.push_back(Pixel(patchPos[iPatchPos].getX()+dispNodeX, patchPos[iPatchPos].getY()+dispNodeY)); 
       } 
      } 

      // KEYPOINTS TO GET THE DESCRIPTORS OF THE 50 DISPARITY-SHIFTED PATCHES IN THE CURRENT CAMERA 
      ... 
      ... 
      descriptor.compute(Image(camList[iCam]->getHeight(), camList[iCam]->getWidth(), CV_8U, (uchar*)camList[iCam]->getData()), 
           camPatchKeyPoints, d); 
      // First time this executes is OK, address is different from the previous 'd' 
      // Second time, the address is the same as the previously pushed 'd' 
      camsDescriptors.push_back(d); 

      disparityPatches.clear(); 
      camPatchKeyPoints.clear(); 
     } 
    } 
} 

Answers

2

The general idea of a Mat is that it acts as a kind of smart pointer to the pixels, so A = B makes A and B share the pixel data. The same holds for push_back().

If you need a "deep copy", use Mat::clone().

0

In each loop, make sure to call cv::Mat::release() on the matrix before appending to the vector.