
How to use the OpenCV cvProjectPoints2 function

I am having some trouble with the cvProjectPoints2 function. Below is the description of the function from O'Reilly's "Learning OpenCV" book:

void cvProjectPoints2(
const CvMat* object_points, 
const CvMat* rotation_vector, 
const CvMat* translation_vector, 
const CvMat* intrinsic_matrix, 
const CvMat* distortion_coeffs, 
CvMat* image_points 
); 

The first argument, object_points, is the list of points you want projected; it is simply an N-by-3 matrix containing the point locations. You can give these coordinates in the object's own local coordinate system and then provide the 3-by-1 matrices rotation_vector and translation_vector to relate the two coordinate systems. If, in your particular context, it is easier to work directly in camera coordinates, then you can just give object_points in that system and set both rotation_vector and translation_vector to contain 0s.†

The intrinsic_matrix and distortion_coeffs are just the camera intrinsic information and the distortion coefficients produced by cvCalibrateCamera2(), discussed in Chapter 11. The image_points argument is an N-by-2 matrix into which the computed results will be written.
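
As I read that description, a minimal call would look roughly like the sketch below (my own code, not from the book; obj, img, rvec, tvec, dist and K are my names, and the intrinsic values are placeholders). The points are given directly in camera coordinates, so the rotation and translation vectors are all zeros:

CvMat* obj  = cvCreateMat(2, 3, CV_32F);   // N x 3 object points (here N = 2) 
CvMat* img  = cvCreateMat(2, 2, CV_32F);   // N x 2 projected pixel coordinates 
CvMat* rvec = cvCreateMat(3, 1, CV_32F);   // rotation vector (Rodrigues form) 
CvMat* tvec = cvCreateMat(3, 1, CV_32F);   // translation vector 
CvMat* dist = cvCreateMat(1, 4, CV_32F);   // distortion coefficients 
CvMat* K    = cvCreateMat(3, 3, CV_32F);   // 3x3 intrinsic matrix 
cvSetZero(rvec); cvSetZero(tvec); cvSetZero(dist); 
// placeholder intrinsics: fx = fy = 1000, cx = 512, cy = 384 
cvmSet(K, 0, 0, 1000); cvmSet(K, 0, 1, 0);    cvmSet(K, 0, 2, 512); 
cvmSet(K, 1, 0, 0);    cvmSet(K, 1, 1, 1000); cvmSet(K, 1, 2, 384); 
cvmSet(K, 2, 0, 0);    cvmSet(K, 2, 1, 0);    cvmSet(K, 2, 2, 1); 
cvmSet(obj, 0, 0, 10);  cvmSet(obj, 0, 1, 20); cvmSet(obj, 0, 2, 100);  // first point (X, Y, Z) 
cvmSet(obj, 1, 0, -10); cvmSet(obj, 1, 1, 5);  cvmSet(obj, 1, 2, 120);  // second point 
cvProjectPoints2(obj, rvec, tvec, K, dist, img); 
// projected pixel of the first point: (cvmGet(img, 0, 0), cvmGet(img, 0, 1)) 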

First of all, there seems to be a bug with the object_points array: if it contains only one point, i.e. N = 1, the program crashes. Anyway, I have the intrinsic parameters and projection matrices of several cameras. The distortion coefficients are 0, i.e. there is no distortion. For simplicity, assume I have 2 cameras:

double intrinsic[2][3][3] = { 
//camera 0 
1884.190000, 0, 513.700000, 
0.0, 1887.490000, 395.609000, 
0.0, 0.0, 1.0, 
//camera 4 
1877.360000, 0.415492, 579.467000, 
0.0, 1882.430000, 409.612000, 
0.0, 0.0, 1.0 
}; 

double projection[2][3][4] = { 
//camera 0 
0.962107, -0.005824, 0.272486, -14.832727, 
0.004023, 0.999964, 0.007166, 0.093097, 
-0.272519, -0.005795, 0.962095, -0.005195, 
//camera 4 
1.000000, 0.000000, -0.000000, 0.000006, 
0.000000, 1.000000, -0.000000, 0.000001, 
-0.000000, -0.000000, 1.000000, -0.000003 
}; 

As far as I understand, this information is enough to project any point (X, Y, Z) into the view of any of the cameras. Here, in the x, y, z coordinates, the optical center of camera 4 is the origin of the world coordinate system.
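
Concretely, the mapping I am assuming is the standard pinhole model: split each 3x4 projection matrix into a 3x3 rotation R and a 3x1 translation t, then

(u, v, w)^T = K * (R * (X, Y, Z)^T + t),   pixel coordinates = (u/w, v/w)

where K is the 3x3 intrinsic matrix.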

Here is my code:

#include <cv.h> 
#include <highgui.h> 
#include <cvaux.h> 
#include <cxcore.h> 
#include <stdio.h> 

double intrinsic[2][3][3] = { 
//0 
1884.190000, 0, 513.700000, 
0.0, 1887.490000, 395.609000, 
0.0, 0.0, 1.0, 
//4 
1877.360000, 0.415492, 579.467000, 
0.0, 1882.430000, 409.612000, 
0.0, 0.0, 1.0 
}; 

double projection[2][3][4] = { 
//0 
0.962107, -0.005824, 0.272486, -14.832727, 
0.004023, 0.999964, 0.007166, 0.093097, 
-0.272519, -0.005795, 0.962095, -0.005195, 
//4 
1.000000, 0.000000, -0.000000, 0.000006, 
0.000000, 1.000000, -0.000000, 0.000001, 
-0.000000, -0.000000, 1.000000, -0.000003 
}; 


int main() { 
    CvMat* camera_matrix[2];   // 3x3 intrinsic matrix K per camera 
    CvMat* rotation_matrix[2]; // 3x3 rotation part of each projection matrix 
    CvMat* dist_coeffs[2]; 
    CvMat* translation[2]; 
    IplImage* image[2]; 
    image[0] = cvLoadImage("color-cam0-f000.bmp", 1); 
    image[1] = cvLoadImage("color-cam4-f000.bmp", 1); 
    CvSize image_size; 
    image_size = cvSize(image[0]->width, image[0]->height); 

    for (int m=0; m<2; m++) { 
     camera_matrix[m] = cvCreateMat(3, 3, CV_32F); 
     dist_coeffs[m] = cvCreateMat(1, 4, CV_32F); 
     rotation_matrix[m] = cvCreateMat(3, 3, CV_32F); 
     translation[m] = cvCreateMat(3, 1, CV_32F); 
    } 

    for (int m=0; m<2; m++) { 
     for (int i=0; i<3; i++) 
      for (int j=0; j<3; j++) { 
       cvmSet(camera_matrix[m],i,j, intrinsic[m][i][j]); 
       cvmSet(rotation_matrix[m],i,j, projection[m][i][j]); 
      } 
     for (int i=0; i<4; i++) 
      cvmSet(dist_coeffs[m], 0, i, 0); 
     for (int i=0; i<3; i++) 
      cvmSet(translation[m], i, 0, projection[m][i][3]); 
    } 

    CvMat* vector = cvCreateMat(3, 1, CV_32F); 
    CvMat* object_points = cvCreateMat(10, 3, CV_32F); 
    cvmSet(object_points, 0, 0, 1000); 
    cvmSet(object_points, 0, 1, 500); 
    cvmSet(object_points, 0, 2, 100); 
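    // note: only row 0 of the 10-row object_points matrix is filled in; the remaining rows stay uninitialized 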

    CvMat* image_points = cvCreateMat(10, 2, CV_32F); 
    int m = 0; 
    cvRodrigues2(rotation_matrix[m], vector); 
    cvProjectPoints2(object_points, vector, translation[m], camera_matrix[m], dist_coeffs[m], image_points); 
    printf("%f\n", cvmGet(image_points, 0, 0)); 
    printf("%f\n", cvmGet(image_points, 0, 1)); 
    return 0; 
} 

The images are 1024x768, and the visible range of z is known to be between 44 and 120. So the point should be visible in both cameras, right? But the results are absolutely wrong, even for m = 1. What am I doing wrong?


I don't have much time to look at your code, but what exactly is the factor 0.415492 in the intrinsics matrix of camera 4? I would expect it to be 0.0. – yhw42 2010-03-04 18:43:30

Answer


Yes, cvProjectPoints2 is for projecting an array of points. You can project a single point with simple matrix operations:

CvMat *pt = cvCreateMat(3, 1, CV_32FC1);      // 3D point (X, Y, Z) 
CvMat *pt_rt = cvCreateMat(3, 1, CV_32FC1);   // point after rotation and translation (camera coordinates) 
CvMat *proj_pt = cvCreateMat(3, 1, CV_32FC1); // homogeneous image point 
cvMatMulAdd(rotMat, pt, translation, pt_rt);  // pt_rt = R * pt + t 
cvMatMul(intrinsic, pt_rt, proj_pt);          // proj_pt = K * pt_rt 
// cvConvertPointsHomogenious could be used instead of the manual division below 
float scale = CV_MAT_ELEM(*proj_pt, float, 2, 0); 
float x = CV_MAT_ELEM(*proj_pt, float, 0, 0) / scale; 
float y = CV_MAT_ELEM(*proj_pt, float, 1, 0) / scale; 
CvPoint2D32f img_pt = cvPoint2D32f(x, y);
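
The snippet assumes that rotMat, translation and intrinsic already exist as CvMat* matrices and that pt has been filled with the 3D point. A possible setup (a sketch only, reusing camera 0 from the question's arrays; intrinsicMat stands in for the intrinsic matrix used above) could be:

CvMat *rotMat       = cvCreateMat(3, 3, CV_32FC1); 
CvMat *translation  = cvCreateMat(3, 1, CV_32FC1); 
CvMat *intrinsicMat = cvCreateMat(3, 3, CV_32FC1); 
for (int i = 0; i < 3; i++) { 
    for (int j = 0; j < 3; j++) { 
        cvmSet(rotMat, i, j, projection[0][i][j]);        // rotation part of [R|t] 
        cvmSet(intrinsicMat, i, j, intrinsic[0][i][j]);   // camera matrix K 
    } 
    cvmSet(translation, i, 0, projection[0][i][3]);       // translation part of [R|t] 
} 
cvmSet(pt, 0, 0, 1000);  // X 
cvmSet(pt, 1, 0, 500);   // Y 
cvmSet(pt, 2, 0, 100);   // Z 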