
I'm trying to achieve a real-time chroma key filter between two videos using a combination of GPUImage and AVVideoCompositing. Doing this by going CIImage imageFromCVPixelBuffer → CGImage → GPUImage → CGImage → CIImage → CVPixelBuffer is very inefficient and causes memory problems. Is there a way to use GPUImage directly with AVVideoCompositing?
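
Concretely, per frame I'm doing a round trip along these lines (just a sketch; ciContext, filter, and the pixel buffer variables are placeholders here):

// the CPU-bound round trip I'm trying to eliminate (sketch only)
CIImage *ciInput = [CIImage imageWithCVPixelBuffer:sourcePixelBuffer];
CGImageRef cgInput = [ciContext createCGImage:ciInput fromRect:ciInput.extent];

GPUImagePicture *picture = [[GPUImagePicture alloc] initWithCGImage:cgInput];
[picture addTarget:filter];
[filter useNextFrameForImageCapture];
[picture processImage];
CGImageRef cgOutput = [filter newCGImageFromCurrentlyProcessedOutput];

CIImage *ciOutput = [CIImage imageWithCGImage:cgOutput];
[ciContext render:ciOutput toCVPixelBuffer:destinationPixelBuffer];

CGImageRelease(cgInput);
CGImageRelease(cgOutput);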

I notice that the GPUImage framework has texture objects, render targets, and framebuffers. I was hoping to take advantage of CVOpenGLESTextureCacheCreateTextureFromImage on iOS so that everything stays on the GPU.

I don't think I fully understand the inner workings of the framework, because I assumed I could build a filter chain from a GPUImageTextureInput object and then read the filter's renderTarget, which is a CVPixelBufferRef. In the code below, renderTarget is always nil, and calling imageFromCurrentFrameBuffer gives me a gray frame that is not my image.

Note that the example below is not a chroma key; it's a simple brightness filter on a single video, just to prove the concept.

@implementation MyCustomCompositor { // conforms to AVVideoCompositing (declared in the @interface)
    CVOpenGLESTextureCacheRef _textureCache;
}

- (instancetype)init 
{ 
    self = [super init]; 
    if (self) { 
     // create a texture cache on GPUImage's shared GL context
     CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, [GPUImageContext sharedImageProcessingContext].context, NULL, &_textureCache);
    } 
    return self; 
} 

- (NSDictionary<NSString *,id> *)requiredPixelBufferAttributesForRenderContext 
{ 
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @[@(kCVPixelFormatType_32BGRA)], 
      (NSString *)kCVPixelBufferOpenGLCompatibilityKey : @YES}; 
} 

- (NSDictionary<NSString *,id> *)sourcePixelBufferAttributes 
{ 
    return @{(NSString *)kCVPixelBufferPixelFormatTypeKey : @[@(kCVPixelFormatType_32BGRA)], 
      (NSString *)kCVPixelBufferOpenGLCompatibilityKey : @YES}; 
} 

- (void)startVideoCompositionRequest:(AVAsynchronousVideoCompositionRequest *)asyncVideoCompositionRequest 
{ 
    @autoreleasepool { 
     CVPixelBufferRef mePixelBuffer = [asyncVideoCompositionRequest sourceFrameByTrackID:200]; 
     CVPixelBufferLockBaseAddress(mePixelBuffer, kCVPixelBufferLock_ReadOnly); 

     // wrap the source pixel buffer in a GL texture via the texture cache
     CVOpenGLESTextureRef meTextureRef = NULL;
     size_t width = CVPixelBufferGetWidth(mePixelBuffer); 
     size_t height = CVPixelBufferGetHeight(mePixelBuffer); 
     CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, mePixelBuffer, NULL, GL_TEXTURE_2D, GL_BGRA, (int)width, (int)height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &meTextureRef); 

     GPUImageTextureInput *meTextureInput = [[GPUImageTextureInput alloc] initWithTexture:CVOpenGLESTextureGetName(meTextureRef) size:CGSizeMake(width, height)]; 

     GPUImageBrightnessFilter *filter = [[GPUImageBrightnessFilter alloc] init]; 
     filter.brightness = 0.5; 
     [meTextureInput addTarget:filter]; 

     [filter setFrameProcessingCompletionBlock:^(GPUImageOutput *imageOutput, CMTime time) { 
      // renderTarget is expected to be the filtered CVPixelBufferRef, but it is always NULL here
      [asyncVideoCompositionRequest finishWithComposedVideoFrame:((GPUImageBrightnessFilter *)imageOutput).renderTarget];
     }]; 

     [meTextureInput processTextureWithFrameTime:kCMTimeZero]; 

     CFRelease(meTextureRef); 
     CVOpenGLESTextureCacheFlush(_textureCache, 0); 

     CVPixelBufferUnlockBaseAddress(mePixelBuffer, kCVPixelBufferLock_ReadOnly); 
    } 
} 
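
For reference, the eventual two-video chroma key would follow the same shape using GPUImage's GPUImageChromaKeyBlendFilter, roughly like this sketch (the second track ID, the key color and threshold values, and the textureInputForPixelBuffer: helper are placeholders, not working code):

     CVPixelBufferRef overlayBuffer    = [asyncVideoCompositionRequest sourceFrameByTrackID:200]; // green-screen overlay
     CVPixelBufferRef backgroundBuffer = [asyncVideoCompositionRequest sourceFrameByTrackID:201]; // background video (placeholder track ID)

     // wrap both buffers as textures via the texture cache, as in the brightness example above
     GPUImageTextureInput *overlayInput    = [self textureInputForPixelBuffer:overlayBuffer];    // hypothetical helper
     GPUImageTextureInput *backgroundInput = [self textureInputForPixelBuffer:backgroundBuffer]; // hypothetical helper

     GPUImageChromaKeyBlendFilter *chromaFilter = [[GPUImageChromaKeyBlendFilter alloc] init];
     [chromaFilter setColorToReplaceRed:0.0 green:1.0 blue:0.0]; // key out pure green
     chromaFilter.thresholdSensitivity = 0.4;                    // placeholder tuning values
     chromaFilter.smoothing = 0.1;

     // first input supplies the keyed color, second shows through where it matches
     [overlayInput addTarget:chromaFilter atTextureLocation:0];
     [backgroundInput addTarget:chromaFilter atTextureLocation:1];

     [overlayInput processTextureWithFrameTime:kCMTimeZero];
     [backgroundInput processTextureWithFrameTime:kCMTimeZero];
     // ...and the same open question: how to hand the filter's output back to
     // finishWithComposedVideoFrame: as a CVPixelBufferRef without leaving the GPU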

I'm not using GPUImageMovieWriter or the movie APIs in GPUImage because I need finer-grained control over my composition. The composition can consist of multiple chroma-key instructions that reference different green-screen overlay videos over different time ranges, and as far as I can tell the movie APIs in GPUImage are limited to filtering an entire video file. I also need the composition so that I can manipulate and mix the audio tracks.
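
For context, the compositor is plugged into the composition roughly like this. This is only a sketch with a plain AVMutableVideoCompositionInstruction and placeholder time ranges; the asset and track setup is omitted:

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.customVideoCompositorClass = [MyCustomCompositor class];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(1280, 720); // placeholder render size

// one instruction per time range that needs a green-screen overlay
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMake(5, 1)); // placeholder range
videoComposition.instructions = @[instruction];

// the composition is then used with an AVPlayerItem or AVAssetExportSession as usual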

I've tried doing all of this in GL with custom shaders, but I figured I'd take advantage of an existing framework for what I'm trying to do.

Answer

I wrote a class called GPUImageFrameInput, modified from GPUImageMovie; a CVPixelBufferRef is its input. It works like this:

  1. Wrap the sourcePixelBuffer from the request in a GPUImageFrameOutput
  2. Create a GPUImageFrameInput that wraps the destinationPixelBuffer
  3. Complete the filter chain between them

Here is the key code.

// output 
// wrap the sourcePixelBuffer from the request 
// it's modified from GPUImageVideoCamera 
@interface GPUImageFrameOutput() { 

} 
-(void)processSourcePixelBuffer:(CVPixelBufferRef)pixelBuffer withSampleTime:(CMTime)currentSampleTime; 
@end 

@implementation GPUImageFrameOutput

-(void)processSourcePixelBuffer:(CVPixelBufferRef)pixelBuffer withSampleTime:(CMTime)currentSampleTime { 
    runSynchronouslyOnVideoProcessingQueue(^{ 
     [GPUImageContext useImageProcessingContext]; 

     int bufferHeight = (int) CVPixelBufferGetHeight(pixelBuffer);
     int bufferWidth = (int) CVPixelBufferGetWidth(pixelBuffer);

     if (bufferHeight == 0 || bufferWidth == 0) { 
      return; 
     } 

     // from here this is almost the same as
     // [GPUImageVideoCamera processVideoSampleBuffer:]:
     // upload pixelBuffer as a texture through the texture cache and
     // notify the filter targets for this frame time
     // ...
    });
}

@end 

// input 
// wrap the destinationPixelBuffer 
@interface GPUImageFrameInput() { 
    CVPixelBufferRef targetBuffer; 
    // ... others 
} 
@end

@implementation GPUImageFrameInput

- (void)setPixelBuffer:(CVPixelBufferRef)buffer {
    targetBuffer = buffer; 
} 

- (CVOpenGLESTextureRef)createDataFBO { 
    if (!movieFramebuffer) { 
     glActiveTexture(GL_TEXTURE1); 
     glGenFramebuffers(1, &movieFramebuffer); 
     glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer); 
    } 

    glBindFramebuffer(GL_FRAMEBUFFER, movieFramebuffer); 
    glViewport(0, 0, (int)_videoSize.width, (int)_videoSize.height); 

    CVOpenGLESTextureRef renderTexture = nil; 

    if ([GPUImageContext supportsFastTextureUpload]) { 
     // tag the destination buffer with color information, then create a texture
     // backed by that buffer so the filter chain renders straight into it
     CVBufferSetAttachment(targetBuffer, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate);
     CVBufferSetAttachment(targetBuffer, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_601_4, kCVAttachmentMode_ShouldPropagate); 
     CVBufferSetAttachment(targetBuffer, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, kCVAttachmentMode_ShouldPropagate); 

     CVOpenGLESTextureCacheCreateTextureFromImage (kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], 
                targetBuffer, 
                NULL, // texture attributes 
                GL_TEXTURE_2D, 
                GL_RGBA, // opengl format 
                (int)CVPixelBufferGetWidth(targetBuffer), 
                (int)CVPixelBufferGetHeight(targetBuffer), 
                GL_BGRA, // native iOS format 
                GL_UNSIGNED_BYTE, 
                0, 
                &renderTexture); 

     glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture)); 
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); 
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); 

     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0); 
    } 
    else 
    { 
    //... 
    } 
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER); 
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, @"Incomplete filter FBO: %d", status); 

    return renderTexture; 
}

@end

Then you can build the GPUImage filter chain as usual:

[frameInput setPixelBuffer:destinationPixelBuffer]; 

for (...) { 
    GPUImageFrameOutput *output = ...
    [output addTarget:filter atTextureLocation:index]; 
} 
[filter addTarget:frameInput]; 
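
Inside startVideoCompositionRequest: the pieces get driven per frame roughly like this. This is only a sketch: it assumes the destination buffer comes from the request's render context via newPixelBuffer, and frameInput, output, and trackID stand for objects set up elsewhere:

CVPixelBufferRef destinationPixelBuffer = [request.renderContext newPixelBuffer];
[frameInput setPixelBuffer:destinationPixelBuffer]; // becomes the render target in createDataFBO

CVPixelBufferRef sourcePixelBuffer = [request sourceFrameByTrackID:trackID];
[output processSourcePixelBuffer:sourcePixelBuffer
                  withSampleTime:request.compositionTime]; // pushes the frame through the chain into frameInput

[request finishWithComposedVideoFrame:destinationPixelBuffer];
CVPixelBufferRelease(destinationPixelBuffer);

Because the destination pixel buffer backs the FBO created in createDataFBO, the filtered frame ends up in destinationPixelBuffer without any readback.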

Hope that helps!

I'll give this a shot! Thanks! – dhenke

I'm a bit confused by this. It looks like mostly the code from GPUImageMovie, which is fine, but how do I adapt it into a GPUImageOutput subclass? How does the call to createDataFBO happen? Or am I missing something about how this subclass should be arranged? I assume I end up calling createDataFBO and assigning that ref to something? – dhenke

Sorry, I forgot to include the output. :P Let me fix it. – econi