
OpenCV: fundamental matrix accuracy

I want to compute the fundamental matrix of two images (different photos of a static scene taken with the same camera).

I did the computation with findFundamentalMat and used the result to compute other matrices (essential matrix, rotation, ...). The results are obviously wrong, so I tried to assess the accuracy of the computed fundamental matrix.

Using the epipolar constraint equation, I compute an error measure for the fundamental matrix. The error is very high (several hundred), and I don't know what is wrong with my code. I would greatly appreciate any help. In particular: is there something I am missing in the fundamental matrix computation, and is my way of computing the error wrong?
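To be concrete, the error I compute is the symmetric distance of each matched point to the corresponding epipolar line. In homogeneous pixel coordinates $x_1$ and $x_2$, the epipolar constraint and the per-match error are

$$x_2^\top F\,x_1 = 0, \qquad e_i = \frac{\lvert x_2^\top F x_1 \rvert}{\sqrt{(F x_1)_1^2 + (F x_1)_2^2}} + \frac{\lvert x_1^\top F^\top x_2 \rvert}{\sqrt{(F^\top x_2)_1^2 + (F^\top x_2)_2^2}}$$

(computeCorrespondEpilines returns lines scaled so that $a^2 + b^2 = 1$, so the code below just sums $\lvert ax + by + c \rvert$ in each image.)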

Also, the number of matches differs a lot from run to run, and there are usually many outliers. For example, in one run with more than 80 matches there were only 10 inliers.

// OpenCV 2.x; SURF lives in the nonfree module
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;
using namespace std;

Mat img_1 = imread("imgl.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_2 = imread("imgr.jpg", CV_LOAD_IMAGE_GRAYSCALE);
if (!img_1.data || !img_2.data)
{ return -1; }

//-- Step 1: Detect the keypoints using SURF Detector 

int minHessian = 1000; 
SurfFeatureDetector detector(minHessian); 
std::vector<KeyPoint> keypoints_1, keypoints_2; 

detector.detect(img_1, keypoints_1); 
detector.detect(img_2, keypoints_2); 

//-- Step 2: Calculate descriptors (feature vectors) 

SurfDescriptorExtractor extractor; 
Mat descriptors_1, descriptors_2; 
extractor.compute(img_1, keypoints_1, descriptors_1); 
extractor.compute(img_2, keypoints_2, descriptors_2); 

//-- Step 3: Matching descriptor vectors with a brute force matcher 

BFMatcher matcher(NORM_L1, true); 
std::vector<DMatch> matches; 
matcher.match(descriptors_1, descriptors_2, matches); 

vector<Point2f>imgpts1,imgpts2; 
for(unsigned int i = 0; i<matches.size(); i++) 
{ 
    // queryIdx is the "left" image 
    imgpts1.push_back(keypoints_1[matches[i].queryIdx].pt); 
    // trainIdx is the "right" image 
    imgpts2.push_back(keypoints_2[matches[i].trainIdx].pt); 
} 

//-- Step 4: Calculate Fundamental matrix 

Mat f_mask;
// RANSAC with a 0.5-pixel distance threshold and 0.99 confidence
Mat F = findFundamentalMat(imgpts1, imgpts2, FM_RANSAC, 0.5, 0.99, f_mask);

//-- Step 5: Calculate Fundamental matrix error 

//Camera intrinsics 
double data[] = { 1189.46, 0.0,     805.49,
                  0.0,     1191.78, 597.44,
                  0.0,     0.0,     1.0 };
Mat K(3, 3, CV_64F, data);
//Camera distortion parameters 
double dist[] = { -0.03432, 0.05332, -0.00347, 0.00106, 0.00000}; 
Mat D(1, 5, CV_64F, dist); 

//working with undistorted points
//(note: undistortPoints with no P argument returns normalized image coordinates)
vector<Point2f> undistorted_1, undistorted_2;
vector<Point3f> line_1, line_2;
undistortPoints(imgpts1, undistorted_1, K, D);
undistortPoints(imgpts2, undistorted_2, K, D);
computeCorrespondEpilines(undistorted_1, 1, F, line_1); // line_1: epilines in image 2
computeCorrespondEpilines(undistorted_2, 2, F, line_2); // line_2: epilines in image 1

double f_err=0.0; 
double fx,fy,cx,cy; 
fx=K.at<double>(0,0);fy=K.at<double>(1,1);cx=K.at<double>(0,2);cy=K.at<double>(1,2); 
Point2f pt1, pt2; 
int inliers=0; 
//calculation of fundamental matrix error for inliers
for (int i = 0; i < f_mask.rows; i++)
    if (f_mask.at<uchar>(i) == 1)   // the RANSAC mask is CV_8U
    {
        inliers++;
        //map the normalized, undistorted points back to pixel coordinates
        pt1.x = undistorted_1[i].x * fx + cx;
        pt1.y = undistorted_1[i].y * fy + cy;
        pt2.x = undistorted_2[i].x * fx + cx;
        pt2.y = undistorted_2[i].y * fy + cy;
        //sum the point-to-epiline residuals in both images
        f_err += fabs(pt1.x*line_2[i].x + pt1.y*line_2[i].y + line_2[i].z)
               + fabs(pt2.x*line_1[i].x + pt2.y*line_1[i].y + line_1[i].z);
    }

double AvrErr = f_err/inliers; 
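For comparison, here is a minimal self-contained version of the same check, evaluated directly on the matched pixel points that were passed to findFundamentalMat (no undistortion or denormalization involved); since computeCorrespondEpilines returns normalized lines, |ax + by + c| is already a distance in pixels:

// Sanity check: mean symmetric epipolar distance over the RANSAC inliers,
// using the same pixel coordinates that findFundamentalMat was given.
std::vector<cv::Vec3f> lines_in_2, lines_in_1;
computeCorrespondEpilines(imgpts1, 1, F, lines_in_2); // epilines in image 2
computeCorrespondEpilines(imgpts2, 2, F, lines_in_1); // epilines in image 1

double err = 0.0;
int n = 0;
for (int i = 0; i < f_mask.rows; i++)
{
    if (f_mask.at<uchar>(i) != 1) continue;
    // lines are normalized (a^2 + b^2 = 1), so |ax + by + c| is in pixels
    err += fabs(lines_in_2[i][0]*imgpts2[i].x + lines_in_2[i][1]*imgpts2[i].y + lines_in_2[i][2])
         + fabs(lines_in_1[i][0]*imgpts1[i].x + lines_in_1[i][1]*imgpts1[i].y + lines_in_1[i][2]);
    n++;
}
double meanPixelErr = (n > 0) ? err / (2 * n) : 0.0;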

Could you post the images 'imgl.jpg' and 'imgr.jpg'? –


Sorry for the delay. Here are the images: img1: ![Left](http://i42.tinypic.com/29y37s4.jpg) and img2: ![Right](http://i41.tinypic.com/nmmhjd.jpg) – Ali

Answer


I believe the problem is that you compute the fundamental matrix from the raw brute-force matches alone; you should filter these correspondences further, for example with a ratio test and a symmetry test. I suggest you look at page 233, Chapter 9 of the book "OpenCV 2 Computer Vision Application Programming Cookbook". It explains this very well!
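For illustration, here is a minimal sketch of the ratio test and symmetry test described in that chapter (the 0.8 threshold is illustrative, not the book's exact value):

// Two-nearest-neighbour matching in both directions (no crossCheck here,
// since the symmetry test below replaces it)
BFMatcher matcher(NORM_L2);
std::vector<std::vector<DMatch> > m12, m21;
matcher.knnMatch(descriptors_1, descriptors_2, m12, 2);
matcher.knnMatch(descriptors_2, descriptors_1, m21, 2);

// Ratio test: keep a match only if the best distance is clearly smaller
// than the second-best
const float ratio = 0.8f;
std::vector<DMatch> good12, good21, symMatches;
for (size_t i = 0; i < m12.size(); i++)
    if (m12[i].size() == 2 && m12[i][0].distance < ratio * m12[i][1].distance)
        good12.push_back(m12[i][0]);
for (size_t i = 0; i < m21.size(); i++)
    if (m21[i].size() == 2 && m21[i][0].distance < ratio * m21[i][1].distance)
        good21.push_back(m21[i][0]);

// Symmetry test: keep only matches that agree in both directions
for (size_t i = 0; i < good12.size(); i++)
    for (size_t j = 0; j < good21.size(); j++)
        if (good12[i].queryIdx == good21[j].trainIdx &&
            good12[i].trainIdx == good21[j].queryIdx)
        {
            symMatches.push_back(good12[i]);
            break;
        }
// symMatches can then be fed to findFundamentalMat as before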