2016-05-17 30 views

Answers

1

It seems you can create three cv::Mat objects, one for each component of the image: RGB, depth, and infrared. So it should look like

cv::Mat frameColor = cv::Mat::zeros(resolutionColor.height, resolutionColor.width, CV_8UC3); 
cv::Mat frameDepth = cv::Mat::zeros(resolutionDepth.height, resolutionDepth.width, CV_32FC1); 
cv::Mat frameIR = cv::Mat::zeros(resolutionIR.height, resolutionIR.width, CV_8UC1); 

Check this forum thread -> https://software.intel.com/en-us/forums/realsense/topic/538066 "Convert a PXCImage into an OpenCV Mat"

0

Once you have the sample image:

PXCImage::ImageData c_image; 
cv::Mat p_image; 
PXCCapture::Sample *sample = sm->QuerySample(); 

if (sample->color != NULL) 
{ 
    sample->color->AcquireAccess(PXCImage::ACCESS_READ_WRITE, PXCImage::PIXEL_FORMAT_RGB32, &c_image); 
    PXCImage::ImageInfo inputInfo = sample->color->QueryInfo(); 
    cv::Mat img = cv::Mat(inputInfo.height, inputInfo.width, CV_8UC4, c_image.planes[0], c_image.pitches[0]); 
    sample->color->ReleaseAccess(&c_image); 
} 
+0

How do I put this in a loop? – Pototo

+0

Do you mean, how do you read the whole rssdk file? –

+0

How do I continuously convert RealSense images to OpenCV images, so that I can see actual video rather than a still image? – Pototo
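To run the answer's snippet continuously, the acquisition can be wrapped in a loop. This is a hedged sketch, not from the original answer: `sm` is assumed to be an already-initialized `PXCSenseManager`, and display/key handling uses OpenCV's highgui.

```cpp
// Continuous acquisition sketch: grab each frame, wrap it in a cv::Mat,
// display it, then release the frame so the SDK can advance.
while (sm->AcquireFrame(true) >= PXC_STATUS_NO_ERROR) {
    PXCCapture::Sample *sample = sm->QuerySample();
    if (sample && sample->color) {
        PXCImage::ImageData c_image;
        sample->color->AcquireAccess(PXCImage::ACCESS_READ,
                                     PXCImage::PIXEL_FORMAT_RGB32, &c_image);
        PXCImage::ImageInfo info = sample->color->QueryInfo();
        // img borrows the SDK buffer, so show it before releasing access
        cv::Mat img(info.height, info.width, CV_8UC4,
                    c_image.planes[0], c_image.pitches[0]);
        cv::imshow("color", img);
        sample->color->ReleaseAccess(&c_image);
    }
    sm->ReleaseFrame();
    if (cv::waitKey(1) == 27) break;  // Esc quits
}
```

Note that the `cv::Mat` here does not own the pixel data; if you need the frame after `ReleaseAccess`/`ReleaseFrame`, clone it first with `img.clone()`.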

1

If you are using the new librealsense SDK for Ubuntu, it is as simple as creating a new instance of cv::Mat with the height and width of the image acquired from the camera.

cv::Mat vis = cv::Mat(1080,1920,CV_8UC3);

After that, the only step left is to assign the data provided by the stream to the vis.data attribute of the matrix:

vis.data = (uchar *)reinterpret_cast<const uint8_t *>(dev->get_frame_data(rs::stream::color)); 

The (uchar*) cast converts the uint8_t data to the type cv::Mat natively uses for its data pointer.

Finally, because the librealsense color stream delivers RGB while OpenCV expects BGR ordering, swap the channels (CV_BGR2RGB and CV_RGB2BGR perform the same red/blue swap):

cv::cvtColor(vis, vis, CV_BGR2RGB); 
0
/*** 
Returns the next frame if next frame is recorded 
Returns the previous frame if next frame is not recorded 
***/ 
void Converter::ConvertPXCImageToOpenCVMat(Intel::RealSense::Image *inImg, Intel::RealSense::ImageData data, cv::Mat *outImg) { 
    auto cvDataType = 0; 
    auto cvDataWidth = 0; 

    auto imgInfo = inImg->QueryInfo(); 

    switch (data.format) { 
     /* STREAM_TYPE_COLOR */ 
    case Intel::RealSense::Image::PIXEL_FORMAT_YUY2: /* YUY2 image */ 
    case Intel::RealSense::Image::PIXEL_FORMAT_NV12: /* NV12 image */ 
     throw std::runtime_error("YUY2/NV12 not implemented"); // needs <stdexcept> 
    case Intel::RealSense::Image::PIXEL_FORMAT_RGB32: /* BGRA layout on a little-endian machine */ 
     cvDataType = CV_8UC4; 
     cvDataWidth = 4; 
     break; 
    case Intel::RealSense::Image::PIXEL_FORMAT_RGB24: /* BGR layout on a little-endian machine */ 
     cvDataType = CV_8UC3; 
     cvDataWidth = 3; 
     break; 
    case Intel::RealSense::Image::PIXEL_FORMAT_Y8: /* 8-Bit Gray Image, or IR 8-bit */ 
     cvDataType = CV_8U; 
     cvDataWidth = 1; 
     break; 
     /* STREAM_TYPE_DEPTH */ 
    case Intel::RealSense::Image::PIXEL_FORMAT_DEPTH: /* 16-bit unsigned integer with precision mm. */ 
    case Intel::RealSense::Image::PIXEL_FORMAT_DEPTH_RAW: /* 16-bit unsigned integer with device specific precision (call device->QueryDepthUnit()) */ 
     cvDataType = CV_16U; 
     cvDataWidth = 2; 
     break; 
    case Intel::RealSense::Image::PIXEL_FORMAT_DEPTH_F32: /* 32-bit float-point with precision mm. */ 
     cvDataType = CV_32F; 
     cvDataWidth = 4; 
     break; 
     /* STREAM_TYPE_IR */ 
    case Intel::RealSense::Image::PIXEL_FORMAT_Y16: /* 16-Bit Gray Image */ 
     cvDataType = CV_16U; 
     cvDataWidth = 2; 
     break; 
    case Intel::RealSense::Image::PIXEL_FORMAT_Y8_IR_RELATIVE: /* Relative IR Image */ 
     cvDataType = CV_8U; 
     cvDataWidth = 1; 
     break; 
    default: 
     break; 
    } 

    // Assume a single image plane 
    if (data.planes[1] != nullptr) throw std::runtime_error("multi-plane images not implemented"); 
    // Assume the row pitch is a whole number of pixels 
    if (data.pitches[0] % cvDataWidth != 0) throw std::runtime_error("sub-pixel row padding not implemented"); 

    outImg->create(imgInfo.height, data.pitches[0] / cvDataWidth, cvDataType); 

    // Copy the full buffer including any row padding, since the matrix 
    // width was derived from the pitch rather than from imgInfo.width. 
    memcpy(outImg->data, data.planes[0], imgInfo.height * data.pitches[0]); 
}
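A hypothetical call site for the converter above, sketched against the namespaced (2016 R2-style) RSSDK API; the stream resolution and the `Converter` instantiation are assumptions, not part of the original answer.

```cpp
// Acquire color frames and convert each one with ConvertPXCImageToOpenCVMat.
Intel::RealSense::SenseManager *sm = Intel::RealSense::SenseManager::CreateInstance();
sm->EnableStream(Intel::RealSense::Capture::STREAM_TYPE_COLOR, 640, 480);
sm->Init();

Converter converter;
cv::Mat frame;
while (sm->AcquireFrame(true) >= Intel::RealSense::STATUS_NO_ERROR) {
    Intel::RealSense::Capture::Sample *sample = sm->QuerySample();
    if (sample && sample->color) {
        Intel::RealSense::ImageData data;
        sample->color->AcquireAccess(Intel::RealSense::Image::ACCESS_READ, &data);
        converter.ConvertPXCImageToOpenCVMat(sample->color, data, &frame);
        sample->color->ReleaseAccess(&data);
        cv::imshow("color", frame);  // frame owns its data via create()/memcpy
    }
    sm->ReleaseFrame();
    if (cv::waitKey(1) == 27) break; // Esc quits
}
sm->Release();
```

Because the converter copies into a matrix allocated by `create()`, `frame` remains valid after `ReleaseAccess` and `ReleaseFrame`, unlike the zero-copy wrapping approach in the earlier answers.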