
OpenCV: extracting SURF features from user-defined keypoints

I want to compute SURF descriptors at keypoints that I specify myself. I am using OpenCV's Python wrapper. Below is the code I have been trying, but I cannot find a working example of it anywhere.

import cv2
import numpy as np

surf = cv2.SURF()
keypoints, descriptors = surf.detect(np.asarray(image[:, :]), None, useProvidedKeypoints=True)

How do I specify the keypoints that this function should use?
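
For concreteness, this is the kind of user-defined keypoint list I mean, built with the cv2.KeyPoint constructor (the coordinates and the patch size of 20 are made-up example values):

import cv2

# hand-picked (x, y) positions -- example values only
pts = [(100.0, 150.0), (240.0, 320.0)]

# wrap each position as a cv2.KeyPoint; 20 is an arbitrary patch size
keypoints = [cv2.KeyPoint(x, y, 20) for (x, y) in pts]

These are the points I would like SURF to compute descriptors at.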

A similar, unanswered question: cvExtractSURF don't work when useProvidedKeypoints = true

Documentation


Did you ever get it working in the end? – OddNorg 2015-08-04 14:24:59


I did, and I even posted the answer here, but I just noticed that it was deleted for some reason. Strange. Anyway, you can use [Mahotas](http://luispedro.org/software/mahotas/) to do this, or have a look at the other answers that were posted around the same time. – casper 2015-08-04 17:25:16

Answers


If I understand the source code of the Python bindings correctly, the keypoints argument that exists in the C++ interface is never used in the Python bindings. So I would venture that it is not possible to do what you are trying to do with the current bindings. A possible solution would be to write your own bindings; here is how that could be done. I know it is not the answer you were hoping for...


I was starting to suspect the same thing... I have started looking into Python libraries that implement SURF, e.g. [Python Mahotas](http://luispedro.org/software/mahotas) – casper 2012-08-02 10:01:09


It shouldn't be too hard to write your own binding to your own custom function. – 2012-08-02 10:13:17


Author of mahotas here: mahotas can do what you want. – luispedro 2012-08-06 03:20:22


An example with the previously mentioned Mahotas:

import mahotas 
from mahotas.features import surf 
import numpy as np 


def process_image(imagename): 
    '''Process an image and return keypoint locations and descriptors''' 
    # Load the image as greyscale uint8 
    f = mahotas.imread(imagename, as_grey=True) 
    f = f.astype(np.uint8) 

    spoints = surf.dense(f, spacing=12, include_interest_point=True) 
    # spoints includes both the detection information (such as the position 
    # and the scale) as well as the descriptor (i.e., what the area around 
    # the point looks like). We only want to use the descriptor for 
    # clustering. The descriptor starts at position 5: 
    desc = spoints[:, 5:] 
    kp = spoints[:, :2] 

    return kp, desc 
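
If the goal is descriptors at points you pick yourself rather than on a dense grid, mahotas also exposes surf.descriptors, which computes the SURF descriptor at given interest points. A minimal sketch, assuming the interest-point rows follow the (y, x, scale, score, laplacian) layout that surf.interest_points() returns (check the mahotas documentation for the exact format; 'image.jpg' and all the coordinates below are placeholder values):

import mahotas
from mahotas.features import surf
import numpy as np

f = mahotas.imread('image.jpg', as_grey=True).astype(np.uint8)

# user-chosen interest points, one per row: (y, x, scale, score, laplacian)
# (assumed layout; the values are arbitrary examples)
my_points = np.array([
    [120.0, 85.0, 6.0, 1.0, 1.0],
    [200.0, 150.0, 6.0, 1.0, 1.0],
])

# descriptor_only=True strips the interest-point columns, leaving only
# the 64-dimensional SURF descriptors
desc = surf.descriptors(f, my_points, descriptor_only=True)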

Try using cv2.DescriptorMatcher_create for that.

For example, in the code below I use pylab, but you get the idea ;)

It computes keypoints with GFTT, then computes SURF descriptors for them and matches the descriptors with a FLANN-based matcher. The output of each code section is shown as a caption.


%pylab inline 
import cv2 
import numpy as np 

img = cv2.imread('./img/nail.jpg') 
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) 
imshow(gray, cmap=cm.gray) 

The output looks like this: http://i.stack.imgur.com/8eOTe.png

(In this example I cheat and use the same image for both the keypoints and the descriptors.)

img1 = gray 
img2 = gray 
detector = cv2.FeatureDetector_create("GFTT") 
descriptor = cv2.DescriptorExtractor_create("SURF") 
matcher = cv2.DescriptorMatcher_create("FlannBased") 

# detect keypoints 
kp1 = detector.detect(img1) 
kp2 = detector.detect(img2) 

print '#keypoints in image1: %d, image2: %d' % (len(kp1), len(kp2)) 

#keypoints in image1: 1000, image2: 1000

# descriptors 
k1, d1 = descriptor.compute(img1, kp1) 
k2, d2 = descriptor.compute(img2, kp2) 

print '#Descriptors size in image1: %s, image2: %s' % ((d1.shape), (d2.shape)) 

#Descriptors size in image1: (1000, 64), image2: (1000, 64)

# match the keypoints 
matches = matcher.match(d1,d2) 

# visualize the matches 
print '#matches:', len(matches) 
dist = [m.distance for m in matches] 

print 'distance: min: %.3f' % min(dist) 
print 'distance: mean: %.3f' % (sum(dist)/len(dist)) 
print 'distance: max: %.3f' % max(dist) 

#matches: 1000

distance: min: 0.000

distance: mean: 0.000

distance: max: 0.000

# threshold: half the mean, plus 0.5 so identical images (all-zero distances) still pass 
thres_dist = (sum(dist)/len(dist)) * 0.5 + 0.5 

# keep only the reasonable matches 
sel_matches = [m for m in matches if m.distance < thres_dist] 

print '#selected matches:', len(sel_matches) 

#selected matches: 1000

#Plot 
h1, w1 = img1.shape[:2] 
h2, w2 = img2.shape[:2] 
view = zeros((max(h1, h2), w1 + w2, 3), uint8) 
view[:h1, :w1, 0] = img1 
view[:h2, w1:, 0] = img2 
view[:, :, 1] = view[:, :, 0] 
view[:, :, 2] = view[:, :, 0] 

for m in sel_matches: 
    # draw the keypoints 
    # print m.queryIdx, m.trainIdx, m.distance 
    color = tuple([random.randint(0, 255) for _ in xrange(3)]) 
    pt1 = (int(k1[m.queryIdx].pt[0]), int(k1[m.queryIdx].pt[1])) 
    pt2 = (int(k2[m.trainIdx].pt[0] + w1), int(k2[m.trainIdx].pt[1])) 
    cv2.line(view,pt1,pt2,color) 

The output looks like this: http://i.stack.imgur.com/8CqrJ.png
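
Tying this back to the original question: descriptor.compute() accepts whatever keypoint list you hand it, so user-defined points can be wrapped as cv2.KeyPoint objects and used in place of the GFTT detections. A sketch under the same old cv2 API as above (the positions and the patch size of 20 are made-up example values):

# descriptors at hand-picked positions instead of detected ones
my_positions = [(100, 200), (250, 300)]  # (x, y) example values
my_kp = [cv2.KeyPoint(x, y, 20) for (x, y) in my_positions] 

# compute() may drop keypoints it cannot describe, so it returns both
my_kp, my_desc = descriptor.compute(img1, my_kp) 
print '#user keypoints: %d, descriptor shape: %s' % (len(my_kp), str(my_desc.shape)) 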


@casper did you manage to get it working with this example? – OddNorg 2015-08-04 14:24:05