GPUImage: GPUImageThreeInputFilter works for still-image input, but not for camera input

I am trying to implement a remap filter using GPUImage. It is similar to OpenCV's remap function, which takes an input image plus an xmap and a ymap. So I subclassed GPUImageThreeInputFilter and wrote my own shader code. When the filter's inputs are still images, I get the correct output image. The code is as follows:
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
UIImage *inputImage = [UIImage imageNamed:@"test.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
[stillImageSource addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
[stillImageSource processImage];
UIImage *filteredImage = [remapFilter imageFromCurrentlyProcessedOutput];
However, when I switch the input to the camera, I get a wrong output image. After some debugging, I found that the xmap and ymap are not being loaded into the second and third textures: every pixel value in those two textures is 0.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
[videoCamera addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
GPUImageView *camView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[remapFilter addTarget:camView];
[videoCamera startCameraCapture];
Header file:
#import <GPUImage.h>
#import <GPUImageThreeInputFilter.h>
@interface GPUImageRemap : GPUImageThreeInputFilter
{
}
@end
Implementation file:
#import "GPUImageRemap.h"
NSString *const kGPUImageRemapFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
varying highp vec2 textureCoordinate3;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform sampler2D inputImageTexture3;
/*
 The x and y maps originally store floating-point numbers in [0, imageWidth] and [0, imageHeight].
 They are divided by imageWidth-1 and imageHeight-1 to fall in [0, 1],
 then converted to integers by multiplying by 1000000.
 Each integer is packed into the 4 bytes of an RGBA pixel;
 each unsigned RGBA byte is normalized to [0, 1] when sampled in the fragment shader.
 The fragment shader therefore inverts these steps to recover the original x, y coordinates.
 */
void main()
{
highp vec4 xAry0_1 = texture2D(inputImageTexture2, textureCoordinate2);
highp vec4 xAry0_255=floor(xAry0_1*vec4(255.0)+vec4(0.5));
// the largest integer we will ever see does not exceed 2000000, so 3 bytes are enough to carry our integer values
highp float xint=xAry0_255.b*exp2(16.0)+xAry0_255.g*exp2(8.0)+xAry0_255.r;
highp float x=xint/1000000.0;
highp vec4 yAry0_1 = texture2D(inputImageTexture3, textureCoordinate3);
highp vec4 yAry0_255=floor(yAry0_1*vec4(255.0)+vec4(0.5));
highp float yint=yAry0_255.b*exp2(16.0)+yAry0_255.g*exp2(8.0)+yAry0_255.r;
highp float y=yint/1000000.0;
if (x<0.0 || x>1.0 || y<0.0 || y>1.0)
{
gl_FragColor = vec4(0,0,0,1);
}
else
{
highp vec2 imgTexCoord=vec2(y, x);
gl_FragColor = texture2D(inputImageTexture, imgTexCoord);
}
}
);
@implementation GPUImageRemap
- (id)init
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageRemapFragmentShaderString]))
{
return nil;
}
return self;
}
@end
Can you post your subclass code? I need help getting my custom shader working. Thanks. – klcjr89
I actually got it working this morning without having to subclass anything, thankfully! I did make a subclass of GPUImageThreeInputFilter, called GPUImageFourInputFilter, for other filters I'm working on. – klcjr89
I just added my code. In any case, good to know you got it working. – user3348157