
OpenCV rectification in C++: images with too much black area

I am using OpenCV to calibrate and rectify a stereo system. I have a stereo camera with converging optical axes (verging cameras), and this is the sequence of functions I run:

// 3D chessboard corner coordinates in the board's own frame (Z = 0)
for(int j=0; j < ChessBoard.numSquares; j++) 
    obj.push_back(Point3f((j/ChessBoard.numCornersHor)*ChessBoard.squareDim, (j%ChessBoard.numCornersHor)*ChessBoard.squareDim, 0.0f)); 
[...] 

Then I run this loop for the number of image pairs I want to acquire:

found_L = findChessboardCorners(image_L, ChessBoard.board_sz, corners_L, CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE + CV_CALIB_CB_FILTER_QUADS + CALIB_CB_FAST_CHECK); 
found_R = findChessboardCorners(image_R, ChessBoard.board_sz, corners_R, CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE + CV_CALIB_CB_FILTER_QUADS + CALIB_CB_FAST_CHECK); 
found = found_L && found_R; 
if(found) 
{ 
    // Refine the detected corners to sub-pixel accuracy, then store the
    // image/object point correspondences for this view
    cornerSubPix(image_L, corners_L, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1)); 
    cornerSubPix(image_R, corners_R, Size(11, 11), Size(-1, -1), TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1)); 
    drawChessboardCorners(image_L, ChessBoard.board_sz, corners_L, found); 
    drawChessboardCorners(image_R, ChessBoard.board_sz, corners_R, found); 

    image_points[0].push_back(corners_L); 
    image_points[1].push_back(corners_R); 
    object_points.push_back(obj); 
    printf("Right: coordinates stored\n"); 
    printf("Left: coordinates stored\n"); 
} 

After this loop, I call these two:

cameraMatrix[0] = Mat::eye(3, 3, CV_64F); 
cameraMatrix[1] = Mat::eye(3, 3, CV_64F); 

calibrateCamera(object_points, image_points[0], imageSize, cameraMatrix[0], distCoeffs[0], rvecs_L, tvecs_L); 

calibrateCamera(object_points, image_points[1], imageSize, cameraMatrix[1], distCoeffs[1], rvecs_R, tvecs_R); 

Then:

rms = stereoCalibrate(object_points, image_points[0], image_points[1], 
        cameraMatrix[0], distCoeffs[0], 
        cameraMatrix[1], distCoeffs[1], 
        imageSize, R, T, E, F, 
        TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5), 
        CV_CALIB_FIX_ASPECT_RATIO+CV_CALIB_FIX_INTRINSIC); 

Finally:

stereoRectify(cameraMatrix[0], distCoeffs[0], 
        cameraMatrix[1], distCoeffs[1], 
        imageSize, R, T, R1, R2, P1, P2, Q, 
        CALIB_ZERO_DISPARITY, -1, imageSize, &roi1, &roi2); 

initUndistortRectifyMap(cameraMatrix[0], distCoeffs[0], R1, P1, imageSize, CV_16SC2, map11, map12); 
initUndistortRectifyMap(cameraMatrix[1], distCoeffs[1], R2, P2, imageSize, CV_16SC2, map21, map22); 
remap(imgL, imgL, map11, map12, INTER_LINEAR, BORDER_CONSTANT, Scalar()); 
remap(imgR, imgR, map21, map22, INTER_LINEAR, BORDER_CONSTANT, Scalar()); 

That is basically what I am doing, but the result is very bad because the image has a very large black area. Here is an example:

[Image: left view rectified by OpenCV]

And this is the rectified image I should be getting, i.e. the rectification performed directly by the camera:

[Image: left view rectified directly by the camera]

As you can see, the image appears to be shifted to the right and cut off; the right image is the same, but shifted to the left, and the result is almost identical.

So how can I get a better result, similar to the last image? Where is the problem? As additional data: I noticed the RMS value is not great, about 0.4, while the reprojection error is about 0.2. I know they should be a bit lower, and I have tried calibrating many times with different patterns, lighting and so on, but I always get the same results or even worse.
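For reference, the RMS numbers above are read from the return values of calibrateCamera and stereoCalibrate (both return the RMS re-projection error). A minimal sketch of capturing them, not verbatim from my code:

double rms_L = calibrateCamera(object_points, image_points[0], imageSize, 
        cameraMatrix[0], distCoeffs[0], rvecs_L, tvecs_L); 
double rms_R = calibrateCamera(object_points, image_points[1], imageSize, 
        cameraMatrix[1], distCoeffs[1], rvecs_R, tvecs_R); 
printf("per-camera RMS: left %f, right %f\n", rms_L, rms_R); 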

Answers


Try calling stereoRectify like this:

stereoRectify(cameraMatrix[0], distCoeffs[0], 
       cameraMatrix[1], distCoeffs[1], 
       imageSize, R, T, R1, R2, P1, P2, Q, 
       0, -1, imageSize, &roi1, &roi2); 

i.e. pass 0 instead of the flag CALIB_ZERO_DISPARITY.
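As an extra idea that is not part of the original answer: the alpha (free scaling) parameter of stereoRectify, which is -1 in the calls above, also controls how much black border survives rectification. A minimal sketch, assuming the same variables as above:

// alpha = 0 zooms/shifts the rectified views so that only valid (non-black)
// pixels remain; alpha = 1 keeps every source pixel and hence the black areas;
// -1 leaves the default scaling.
stereoRectify(cameraMatrix[0], distCoeffs[0], 
       cameraMatrix[1], distCoeffs[1], 
       imageSize, R, T, R1, R2, P1, P2, Q, 
       0, 0 /* alpha */, imageSize, &roi1, &roi2); 

roi1 and roi2 then describe the rectangles in which both rectified images contain valid pixels.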

Also, to improve the RMS obtained from stereoCalibrate, try adding the flag CV_CALIB_USE_INTRINSIC_GUESS (see this related answer):

rms = stereoCalibrate(object_points, image_points[0], image_points[1], 
       cameraMatrix[0], distCoeffs[0], 
       cameraMatrix[1], distCoeffs[1], 
       imageSize, R, T, E, F, 
       TermCriteria(CV_TERMCRIT_ITER+CV_TERMCRIT_EPS, 100, 1e-5), 
       CV_CALIB_USE_INTRINSIC_GUESS+ 
        CV_CALIB_FIX_ASPECT_RATIO+CV_CALIB_FIX_INTRINSIC); 

This fixed the main problem and I now see a good disparity map, but the Q matrix that I pass to reprojectImageTo3D is now very bad. I use the result for distance computation and to view the 3D reconstruction in a visualizer, but now it is not even as close to the truth as before. So the Q matrix is not being computed well. Why? –


@Elminster_cs Figuring out why the 'reprojectImageTo3D' results are bad would require additional data. Could you ask a new question on that topic and include the relevant code (the call to 'reprojectImageTo3D', etc.) and images? Then post a link to the new question here as a comment, so that it stays associated with this one. Thanks. – AldurDisciple


The problem with the reprojection was a simple conversion: I was using disptype == CV_16S and I needed 32F. So I just added: disp.convertTo(disp16, CV_32F, 1./16); and now it is working, thanks! –
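For completeness, a minimal sketch of that fix in context; the StereoSGBM step and the variable names are illustrative assumptions (OpenCV 2.4-style API, as used above), not code from the question:

// imgL/imgR are the rectified pair, Q comes from stereoRectify.
StereoSGBM sgbm(0, 64, 11);                    // illustrative matcher parameters 
Mat disp16;                                    // CV_16S disparity, scaled by 16 
sgbm(imgL, imgR, disp16); 

Mat disp32; 
disp16.convertTo(disp32, CV_32F, 1.0 / 16.0);  // undo the fixed-point scaling 

Mat xyz;                                       // CV_32FC3: one 3D point per pixel 
reprojectImageTo3D(disp32, xyz, Q, true); 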