Manually correcting barrel distortion in OpenCV without a chessboard image
2014-10-28 363 views
Score: 1

I grab images from a camera and cannot take pictures of a chessboard to compute a correction matrix with OpenCV. So far I have corrected the images with ImageMagick's convert, using the option '-distort Barrel "0.0 0.0 -0.035 1.1"', where I found the parameters by trial and error.

Now I would like to do this in OpenCV, but everything I find online is about automatic correction using chessboard images. Is there any chance to apply a simple manual trial-and-error lens distortion correction, as I did with ImageMagick?

Have a look at the examples: just skip the computation and provide those parameters explicitly. If that part is encapsulated, you can look at the OpenCV source code and use the functions that are used internally. – Micka 2014-10-28 07:47:26

I tried to define the chessboard corners myself (e.g. 4x4 points), but I did not figure out what the structure of the corner array should be. Does anyone have an idea? – 2014-10-28 08:00:27

OK, I think the minimal array has to be set up like this: corners = np.zeros((4*4,1,2), dtype="float32"). With 3x3 it does not seem to work. Still, I would rather have something like convert -distort Barrel than having to define distorted points now. – 2014-10-28 08:22:41
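
For reference (not from the original thread), here is a minimal sketch of the corner-array layout described in the comment above: shape (N, 1, 2), dtype float32, which is what cv2.findChessboardCorners returns and cv2.calibrateCamera expects for image points. The 4x4 pixel positions are made-up placeholder values.

import numpy as np

# placeholder pixel positions for a 4x4 grid; in practice these would be
# the measured corner locations in the image
points_xy = [(100.0 * x, 100.0 * y) for y in range(4) for x in range(4)]

corners = np.zeros((4 * 4, 1, 2), dtype="float32")
for i, (x, y) in enumerate(points_xy):
    corners[i, 0] = (x, y)

print(corners.shape)  # (16, 1, 2)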

Answers

Score: 2

If you don't have a chessboard pattern but you know the distortion coefficients, here is a way to undistort the image.

Since I don't know which coefficients your barrel distortion parameters correspond to (maybe have a look at http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html and http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#initundistortrectifymap), you will have to try it out; maybe someone here can help you with that.

Another point: I am not sure whether OpenCV handles float and double automatically. If it does not, there might be a bug in this code (I don't know whether single or double precision is assumed):

cv::Mat input = cv::imread("distortedImage.jpg"); // the distorted source image (file name just an example)

cv::Mat distCoeff;
distCoeff = cv::Mat::zeros(8,1,CV_64FC1);

// indices: k1, k2, p1, p2, k3, k4, k5, k6
// TODO: add your coefficients here!
double k1 = 0;
double k2 = 0;
double p1 = 0;
double p2 = 0;
double k3 = 0;
double k4 = 0;
double k5 = 0;
double k6 = 0;

distCoeff.at<double>(0,0) = k1;
distCoeff.at<double>(1,0) = k2;
distCoeff.at<double>(2,0) = p1;
distCoeff.at<double>(3,0) = p2;
distCoeff.at<double>(4,0) = k3;
distCoeff.at<double>(5,0) = k4;
distCoeff.at<double>(6,0) = k5;
distCoeff.at<double>(7,0) = k6;

// assume unit matrix for camera, so no movement
cv::Mat cam1, cam2;
cam1 = cv::Mat::eye(3,3,CV_32FC1);
cam2 = cv::Mat::eye(3,3,CV_32FC1);
//cam2.at<float>(0,2) = 100; // for testing a translation

// here the undistortion map will be computed
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cam1, distCoeff, cv::Mat(), cam2, input.size(), CV_32FC1, map1, map2);

cv::Mat distCorrected;
cv::remap(input, distCorrected, map1, map2, cv::INTER_LINEAR);

Where do k4, k5 and k6 come from? They are not part of the documentation, are they? And is there a Python code snippet as well? – 2014-10-28 15:09:44

If I follow your link I see something like this: xcor = x * (1 + k1*r^2 + k2*r^4 + k3*r^6). That differs from the description for ImageMagick (http://www.imagemagick.org/Usage/distorts/#barrel), where the formula Rsrc = r * (A*r^3 + B*r^2 + C*r + D) applies. Is that why setting e.g. k1 to 0.001 produces a very strange image but does not correct the barrel distortion? Still, I think we are getting closer to a solution. – 2014-10-28 15:31:11

Sorry, no Python snippet from my side, but it is really just the initUndistortRectifyMap call and setting the coefficients. Wikipedia lists some coefficients similar to OpenCV's (though I have not checked the terms): http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction So maybe ImageMagick uses a different distortion model, or they assume some kind of inverse parameters?!? Not sure, sorry =) – Micka 2014-10-28 15:52:29
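
For reference (not part of the original thread), here is a small sketch that evaluates the two radial scaling factors quoted in the comments above. The ImageMagick parameters are the ones from the question; the OpenCV k1 is an arbitrary illustration value, so the two are not meant to be equivalent.

# the two models quoted above, each returning the radial scaling factor
def opencv_radial_scale(r, k1, k2=0.0, k3=0.0):
    # OpenCV docs: x_corrected = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)
    return 1 + k1 * r**2 + k2 * r**4 + k3 * r**6

def imagemagick_radial_scale(r, A, B, C, D):
    # ImageMagick: Rsrc = r * (A*r^3 + B*r^2 + C*r + D)
    return A * r**3 + B * r**2 + C * r + D

for r in (0.25, 0.5, 1.0):
    # arbitrary k1 on the OpenCV side; A B C D taken from the question
    print(r, opencv_radial_scale(r, k1=-0.035),
             imagemagick_radial_scale(r, 0.0, 0.0, -0.035, 1.1))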

Score: 6

OK, I think I've got it: the image center was missing in the matrices cam1 and cam2 (see the documentation). I added it and also changed the focal length to avoid changing the image size too much. Here is the code:

import numpy as np
import cv2

src = cv2.imread("distortedImage.jpg")
width = src.shape[1]
height = src.shape[0]

distCoeff = np.zeros((4,1), np.float64)

# TODO: add your coefficients here!
k1 = -1.0e-5  # negative to remove barrel distortion
k2 = 0.0
p1 = 0.0
p2 = 0.0

distCoeff[0,0] = k1
distCoeff[1,0] = k2
distCoeff[2,0] = p1
distCoeff[3,0] = p2

# assume unit matrix for camera
cam = np.eye(3, dtype=np.float32)

cam[0,2] = width/2.0   # define center x
cam[1,2] = height/2.0  # define center y
cam[0,0] = 10.         # define focal length x
cam[1,1] = 10.         # define focal length y

# here the undistortion will be computed
dst = cv2.undistort(src, cam, distCoeff)

cv2.imshow('dst', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()

Thank you very much for your support.

Glad to hear it. If you have tested it and it works, afaik you can accept your own answer to help others looking for a solution to the same problem. – Micka 2014-10-28 21:14:04

Score: 1

Here is a complementary function to undistort. There may be faster or better ways to do it, but it works. (The distortion parameters themselves should still be computed with a chessboard whenever possible.)

// applies the distortion model to an (undistorted) image; the complementary
// operation to cv::undistort / cv::initUndistortRectifyMap
void distort(const cv::Mat& src, cv::Mat& dst, const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    // build a map that holds, for every output pixel, its own pixel coordinates
    cv::Mat pixel_locations_src = cv::Mat(src.size(), CV_32FC2);

    for (int i = 0; i < src.size().height; i++) {
        for (int j = 0; j < src.size().width; j++) {
            pixel_locations_src.at<cv::Point2f>(i,j) = cv::Point2f(j,i);
        }
    }

    // undistortPoints expects a 1xN or Nx1 list of points, so view the grid as
    // one column of points; the result is in normalized (fractional) coordinates
    cv::Mat fractional_locations_dst;
    cv::undistortPoints(pixel_locations_src.reshape(2, (int)src.total()),
                        fractional_locations_dst, cameraMatrix, distCoeffs);
    fractional_locations_dst = fractional_locations_dst.reshape(2, src.size().height);

    cv::Mat pixel_locations_dst = cv::Mat(src.size(), CV_32FC2);

    const float fx = cameraMatrix.at<double>(0,0);
    const float fy = cameraMatrix.at<double>(1,1);
    const float cx = cameraMatrix.at<double>(0,2);
    const float cy = cameraMatrix.at<double>(1,2);

    // project the normalized coordinates back to pixel coordinates
    // (is there a faster way to do this?)
    for (int i = 0; i < fractional_locations_dst.size().height; i++) {
        for (int j = 0; j < fractional_locations_dst.size().width; j++) {
            const float x = fractional_locations_dst.at<cv::Point2f>(i,j).x*fx + cx;
            const float y = fractional_locations_dst.at<cv::Point2f>(i,j).y*fy + cy;
            pixel_locations_dst.at<cv::Point2f>(i,j) = cv::Point2f(x,y);
        }
    }

    // sample the source image at the undistorted locations
    cv::remap(src, dst, pixel_locations_dst, cv::Mat(), cv::INTER_LINEAR);
}
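
For completeness, here is a rough Python sketch of the same idea (my own translation, not part of the original answer); the function name distort_image and its arguments are just illustrative. It builds the remap grid from cv2.undistortPoints and then samples the source image with cv2.remap.

import numpy as np
import cv2

def distort_image(src, camera_matrix, dist_coeffs):
    # apply the distortion model to an (undistorted) image -- the inverse of cv2.undistort
    h, w = src.shape[:2]

    # pixel grid of the output (distorted) image
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2)

    # normalized, undistorted coordinates for every pixel
    und = cv2.undistortPoints(pts, camera_matrix, dist_coeffs)

    # project back to pixel coordinates of the source image
    fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
    cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
    map_x = (und[:, 0, 0] * fx + cx).reshape(h, w).astype(np.float32)
    map_y = (und[:, 0, 1] * fy + cy).reshape(h, w).astype(np.float32)

    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)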