Rectifying an image: interpolating missing points

I have an image that I try to rotate around the x, y and z axes (rectification). That works fine, but I lose a lot of data. This is the script I use:

# import libraries 
import numpy as np 
# import dateutil 
# import pyparsing 
import matplotlib.pyplot as plt 
import cv2 
import sys 
from scipy import * 
import Image               # PIL (old-style import) 
import matrotation as rmat # module providing getR(), which builds a rotation matrix 
import math 
from scipy.interpolate import griddata 

# set variable with location of files 
working_dir = r'C:\Users\Yorian\Desktop\TU\Stage Shore\python_files\Rectification'  # raw string so the backslashes are not treated as escapes 
sys.path.append(working_dir) 

# C is a 3x1 matrix (Xc, Yc, Zc).transpose() 
# for now: C is the zero vector 
C = np.zeros((3,1), dtype='float32') 

# 3x3 Identity matrix 
I = np.identity(3) 

# K matrix (3x3); load the center pixel automatically as the point to rotate around 
K = np.array([[1.49661077e+04, -4.57744650e-13, 0.0], 
      [0.0, -1.49661077e+04, 0.0], 
      [0.0, 0.0, 1.0]]) 

# rotation matrix 1 (3x3): rotations about the x, y and z axes (here 25, 45 and 0 degrees) 
R1 = rmat.getR(25.0, 45.0, 0.0) 

# [I|-C] (see Sierd's paper) = 
I_extended = np.hstack((I,C)) 

# P = K*R*[I|-C] 
P1 = K.dot(R1).dot(I_extended) 

# rotation matrix 2 
R2 = rmat.getR(0.0, 0.0, 0.0) 
P2 = K.dot(R2).dot(I_extended) 

# Homography Matrix = H = P_rect * pinv(P) => P2 * pinv(P1) 
H = P2.dot(np.linalg.pinv(P1)) 

# do image transform: x_uv_new = H * x_uv_original 

# load image and convert it to grayscale (L) 
img = Image.open('c5.jpg').convert('L') 

# img.show() 
img_array = np.array(img) 

height = img_array.shape[0] 
width = img_array.shape[1] 

U, V = np.meshgrid(range(img_array.shape[1]), 
        range(img_array.shape[0])) 
UV = np.vstack((U.flatten(), 
       V.flatten())).T 
UV_warped = cv2.perspectiveTransform(np.array([UV]).astype(np.float32), H) 

UV_warped = UV_warped[0] 
UV_warped = UV_warped.astype(np.int) 

x_translation = min(UV_warped[:,0]) 
y_translation = min(UV_warped[:,1]) 

new_width = np.amax(UV_warped[:,0])-np.amin(UV_warped[:,0]) 
new_height = np.amax(UV_warped[:,1])-np.amin(UV_warped[:,1]) 
# new_img_2 = cv2.warpPerspective(img_array, H, (new_height+1, new_width+1)) 

UV_warped[:,0] = UV_warped[:,0] - int(x_translation) 
UV_warped[:,1] = UV_warped[:,1] - int(y_translation) 

# create box for image 
new_img = np.zeros((new_height+1, new_width+1)) # 0 = black, 255 = white 

for uv_pix, UV_warped_pix in zip(UV, UV_warped): 
    x_orig = uv_pix[0] # x in the original image 
    y_orig = uv_pix[1] # y in the original image 
    color = img_array[y_orig, x_orig] 

    x_new = UV_warped_pix[0] # new x 
    y_new = UV_warped_pix[1] # new y 
    new_img[y_new, x_new] = np.array(color) # forward-map each original pixel to its warped position 


img = Image.fromarray(np.uint8(new_img)) 
img.save("testje.jpg") 

This works fine, but I lose a lot of information: the larger the rotation, the more information I lose. To get that information back I want to interpolate the missing points. I tried to do this using griddata(), but it returns an array that looks like this: [nan]

This is the code (the imports and setup are identical to the script above, up to the point where UV_warped is shifted so that its coordinates start at zero; only the last part differs):
# collect the grey value of every original pixel 
data = np.zeros((len(UV_warped),1)) 

for i, uv_pix in enumerate(UV): 
    data[i,0] = img_array[uv_pix[1], uv_pix[0]] 

grid = griddata(UV_warped, data, (new_width+1, new_height+1), method='linear') 
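For reference, scipy's griddata expects its third argument to be the coordinates at which to interpolate (for example a dense grid built with np.meshgrid), not an output shape; a plain tuple like (new_width+1, new_height+1) appears to be treated as a single query point outside the data, which would explain the [nan] result. A minimal sketch under that assumption (names like grid_x and interpolated are only illustrative):

grid_x, grid_y = np.meshgrid(np.arange(new_width + 1), 
                             np.arange(new_height + 1)) 

# UV_warped holds (x, y) pixel coordinates, data the corresponding grey values 
interpolated = griddata(UV_warped, data.ravel(), (grid_x, grid_y), method='linear') 

# points outside the convex hull of the warped pixels stay NaN; fill them with 0 
interpolated = np.nan_to_num(interpolated) 
Image.fromarray(np.uint8(interpolated)).save('interpolated.jpg') 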

Can anyone help me get an image out of this interpolation?

By the way: I already used the function warpPerspective, as someone suggested, but that stretches the image without "rotating" it.

I also looked at cv2.inpaint(), but could not get it to work. I found this: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_photo/py_inpainting/py_inpainting.html, but that plots it. I want to make a picture.
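As an aside on cv2.inpaint: it expects an 8-bit image plus an 8-bit single-channel mask that is non-zero exactly at the pixels to be filled, and it returns a new image rather than plotting anything. A minimal sketch, assuming the holes in new_img are the pixels that were never written (still 0), which may not hold for genuinely black image content:

img8 = np.uint8(new_img)                     # 8-bit single-channel image 
mask = (img8 == 0).astype(np.uint8) * 255    # assumption: untouched (black) pixels are the holes 
filled = cv2.inpaint(img8, mask, 3, cv2.INPAINT_TELEA)  # inpaint radius of 3 pixels 
Image.fromarray(filled).save('inpainted.jpg') 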

EDIT:

This is what I did using warpTransform; the code:

#Importing modules 
import json 
import urllib2 
import numpy as np 
import cv2 
from scipy import * 
import Image 

# data is now a dictionary containing lists of dictionaries with the x, y, z, U, V values 
# example: 
# data[cameraID][listnumber] = {'x': x, 'y': y, 'z': z, 'U': U, 'V': V} 

T = {} # T holds a perspective-transform matrix for each camera 

for cam in data: 
    if len(data[cam]) >= 4:   # need at least four point pairs for getPerspectiveTransform 
        xyz_ar = np.array([[data[cam][0]['x'], data[cam][0]['y']], 
                           [data[cam][1]['x'], data[cam][1]['y']], 
                           [data[cam][2]['x'], data[cam][2]['y']], 
                           [data[cam][3]['x'], data[cam][3]['y']]], np.float32) 

        UV_ar = np.array([[data[cam][0]['U'], data[cam][0]['V']], 
                          [data[cam][1]['U'], data[cam][1]['V']], 
                          [data[cam][2]['U'], data[cam][2]['V']], 
                          [data[cam][3]['U'], data[cam][3]['V']]], np.float32) 

        T[cam] = cv2.getPerspectiveTransform(UV_ar, xyz_ar) 
    else: 
        print('niet genoeg meetpunten voor de camera')  # "not enough measurement points for the camera" 

# load image 
img = cv2.imread('c5.jpg') 
rows, cols, channels = img.shape 

# warp for camera 5 
dst = cv2.warpPerspective(img, T[u'KDXX05C'], (rows, cols)) 
new_img = Image.fromarray(np.uint8(dst)) 
new_img.save('testje.jpg') 
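One detail worth noting here (an observation, not part of the original post): cv2.warpPerspective takes the output size as (width, height), so passing (rows, cols) swaps the two unless the image is square. A sketch of the same call with the order swapped:

# dsize is (width, height) = (cols, rows); otherwise identical to the call above 
dst = cv2.warpPerspective(img, T[u'KDXX05C'], (cols, rows)) 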

Hi, I think I am the one who suggested using 'warpPerspective' :) Can you explain _clearly_ what you mean by "rotating" the image? Since an image is always 2D, applying a 3D rotation to it results in stretching the image. – AldurDisciple


Yes. With your help I got it to work, but the image gets warped very strangely. Maybe something is wrong in my code, but this is what I did (I can't put such a long piece of code in a comment, so I edited my original post). – Yorian


Apart from the fact that you miss a lot of information in the final image, does the homography 'H' seem to transform the image correctly? – AldurDisciple

Answer


I still believe that warpPerspective does exactly what you want (Jedi mind trick). Seriously, it should achieve in one line what you are trying to do with meshgrid, vstack and griddata.

Can you try the following code? (I am not familiar with Python, so this may require some adjustments):

# load image and convert it to grayscale (L) 
img = cv2.imread('c5.jpg') 
rows, cols, channels = img.shape 
# img.show() 

# Homography Matrix = H = P_rect * pinv(P) => P2 * pinv(P1) 
H = P2.dot(np.linalg.pinv(P1)) 

cv2.warpPerspective(img, H, (rows, cols), dst, cv2.INTER_LINEAR) 
new_img = Image.fromarray(np.uint8(dst)) 
new_img.save('testje.jpg') 

where H is exactly the same matrix as the one you used in your first code example.
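A small adjustment that may be needed (just a sketch, since the code above is explicitly untested in Python): in the Python binding warpPerspective returns the warped image, and the size argument is (width, height), so the call can also be written as:

# illustrative variant: use the return value and pass the size as (width, height) 
dst = cv2.warpPerspective(img, H, (cols, rows), flags=cv2.INTER_LINEAR) 
new_img = Image.fromarray(np.uint8(dst)) 
new_img.save('testje.jpg') 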


If I do that, I get an empty image (white or black depending on the array I use for dst (apparently I need to set it first)). My code: # load image and convert it to grayscale (L) img = cv2.imread('c5.jpg') rows, cols, channels = img.shape # img.show() # Homography Matrix = H = P_rect * pinv(P) => P2 * pinv(P1) H = P2.dot(np.linalg.pinv(P1)) dst = np.ones((2000,2000))*255 # white background cv2.warpPerspective(img, H, (rows, cols), dst, cv2.WARP_INVERSE_MAP) new_img = Image.fromarray(np.uint8(dst)) new_img.save('testje.jpg') – Yorian


Would it help if I gave you a sample image (or all 6 images I want to use for the panorama) together with an example output panorama (stitched and rectified)? – Yorian


Not really, because my problem is with the coding in Python, and also with the conventions for 'H' and the two images. 'P1' is the camera matrix of 'img', right? Is it better if you add 'cv2.INTER_LINEAR' as in my edit? – AldurDisciple