First off, I'm fairly new to image matching techniques with OpenCV, so please bear with me.

I'm working on an application that matches training images against collected images (single-cell samples).

I have used the SIFT detector and the SURF detector with FLANN-based matching to match a set of training data to the collected images, but the results I'm getting are very poor. I'm using the same code as in the OpenCV documentation:

#include <cstdio>
#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp> // SiftFeatureDetector / SurfDescriptorExtractor (OpenCV 2.4 nonfree module)
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;
using namespace std;

void foramsMatching(Mat img_object, Mat img_scene){ 
    int minHessian = 400; 

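    //-- Step 1: Detect the keypoints using the SIFT detector 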
    SiftFeatureDetector detector(minHessian); 

    std::vector<KeyPoint> keypoints_object, keypoints_scene; 

    detector.detect(img_object, keypoints_object); 
    detector.detect(img_scene, keypoints_scene); 

    //-- Step 2: Calculate descriptors (feature vectors) 
    SurfDescriptorExtractor extractor; 

    Mat descriptors_object, descriptors_scene; 

    extractor.compute(img_object, keypoints_object, descriptors_object); 
    extractor.compute(img_scene, keypoints_scene, descriptors_scene); 

    //-- Step 3: Matching descriptor vectors using FLANN matcher 

    FlannBasedMatcher matcher; 
    //BFMatcher matcher; 
    std::vector<DMatch> matches; 
    matcher.match(descriptors_object, descriptors_scene, matches); 


    double max_dist = 0; double min_dist = 100; 

    //-- Quick calculation of max and min distances between keypoints 
    for (int i = 0; i < descriptors_object.rows; i++) 
    { 
     double dist = matches[i].distance; 
     if (dist < min_dist) min_dist = dist; 
     if (dist > max_dist) max_dist = dist; 
    } 

    printf("-- Max dist : %f \n", max_dist); 
    printf("-- Min dist : %f \n", min_dist); 

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist) 
    std::vector<DMatch> good_matches; 

    for (int i = 0; i < descriptors_object.rows; i++) 
    { 
     if (matches[i].distance < 3 * min_dist) 
     { 
      good_matches.push_back(matches[i]); 
     } 
    } 

    Mat img_matches; 
    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene, 
    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), 
    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS); 

    //-- Localize the object 
    std::vector<Point2f> obj; 
    std::vector<Point2f> scene; 

    for (size_t i = 0; i < good_matches.size(); i++) 
    { 
     //-- Get the keypoints from the good matches 
     obj.push_back(keypoints_object[good_matches[i].queryIdx].pt); 
     scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt); 
    } 

    Mat H = findHomography(obj, scene, CV_RANSAC); 

    //-- Get the corners from the image_1 (the object to be "detected") 
    std::vector<Point2f> obj_corners(4); 
    obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f(img_object.cols, 0); 
    obj_corners[2] = Point2f(img_object.cols, img_object.rows); obj_corners[3] = Point2f(0, img_object.rows); 
    std::vector<Point2f> scene_corners(4); 

    perspectiveTransform(obj_corners, scene_corners, H); 

    //-- Draw lines between the corners (the mapped object in the scene - image_2) 
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 

    //-- Show detected matches 
    namedWindow("Good Matches & Object detection", CV_WINDOW_NORMAL); 
    imshow("Good Matches & Object detection", img_matches); 
    //imwrite("../../Samples/Matching.jpg", img_matches); 
} 

Here are the results - Matching Two Images

They are really poor compared to some other results I've seen using these methods. There should be two matches to the two blobs (cells) at the bottom of the screen.

Any ideas what I'm doing wrong, or how to improve these results? I'm considering writing my own matcher/descriptor extractor, since my training images are not exact copies of the cells I'm querying. Is this a good idea? If so, are there any tutorials I should look at?

Regards,

Maybe there's some additional knowledge you could use to remove the noise? In the image you provided, the background and the text look quite easy to remove. – runDOSrun 2015-02-05 14:14:31

If I understand correctly, you're suggesting I try to match only the specific regions at the bottom rather than the entire image? I'll give it a try and report back :) By the way, how would you go about removing them? – Nimrodshn 2015-02-05 14:17:22

Sure. I think introducing more knowledge about the objects could eliminate the false positives. To do this you could, for example, match the points against rules about area (size/relations/color, etc.). – runDOSrun 2015-02-05 14:19:44

Answer

Converting my comment into an answer:

Before running SIFT/SURF, you should apply some kind of preprocessing that uses the available knowledge to find the regions of interest and remove the noise. Here's the general idea:

  1. Perform a segmentation with specific criteria (*).
  2. Inspect the segments and select interesting candidates.
  3. Perform the matching on the candidate segments.

(*) What you can use for this step includes area size, shape, color distribution, and so on. From the example you provided it can, for instance, be seen that your objects are round and have a certain minimum size. Use any such knowledge to eliminate further false positives. Of course, you will need to do some tuning so that your rule set is not too strict, i.e. so that the true positives are kept.
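
For illustration, here is a minimal sketch of what steps 1 and 2 could look like in OpenCV 2.x C++, assuming dark, roughly circular cells on a lighter background. The helper name findCandidateSegments and the area/circularity thresholds are placeholders (not part of the original answer) and would need tuning on the real images:

#include <vector>

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

// Sketch of steps 1-2: segment the scene and keep only blobs that are
// large and round enough to be candidate cells.
std::vector<Rect> findCandidateSegments(const Mat& img_scene,
                                        double minArea = 500.0,
                                        double minCircularity = 0.6)
{
    Mat gray, bw;
    if (img_scene.channels() == 3)
        cvtColor(img_scene, gray, CV_BGR2GRAY);
    else
        gray = img_scene;

    // Otsu threshold; THRESH_BINARY_INV assumes dark cells on a light background.
    threshold(gray, bw, 0, 255, THRESH_BINARY_INV | THRESH_OTSU);

    // Morphological opening removes small specks and thin text strokes.
    morphologyEx(bw, bw, MORPH_OPEN,
                 getStructuringElement(MORPH_ELLIPSE, Size(5, 5)));

    std::vector<std::vector<Point> > contours;
    findContours(bw, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    std::vector<Rect> candidates;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = contourArea(contours[i]);
        double perimeter = arcLength(contours[i], true);
        if (area < minArea || perimeter <= 0)
            continue;

        // Circularity is 1 for a perfect circle and lower for elongated shapes.
        double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
        if (circularity >= minCircularity)
            candidates.push_back(boundingRect(contours[i]));
    }
    return candidates;
}

Each returned rectangle could then be cropped with img_scene(candidates[i]) and passed to the matching step (step 3), for example the foramsMatching function above, so that SIFT/SURF only has to deal with one candidate cell at a time.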