
I want to modify the contents of a CMSampleBuffer and then write it out to a file with AVAssetWriter/AVAssetWriterInput. What is the most efficient way to modify the CMSampleBuffer's contents?

The way I am doing it now is to create a Core Graphics bitmap context and draw into it, but that is too slow. Specifically, I need to draw an image into the buffer.
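Roughly, the current (slow) approach looks like this; the method name and the assumption that the buffer is kCVPixelFormatType_32BGRA are mine:

// Draws 'overlay' directly into the sample buffer's pixels via Core Graphics. 
// Assumes the pixel buffer is kCVPixelFormatType_32BGRA. 
- (void)drawImage:(CGImageRef)overlay intoSampleBuffer:(CMSampleBufferRef)sampleBuffer { 
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(pixelBuffer, 0); 

    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer), 
                                                 CVPixelBufferGetWidth(pixelBuffer), 
                                                 CVPixelBufferGetHeight(pixelBuffer), 
                                                 8, 
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer), 
                                                 colorspace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 

    // This is the slow part: Core Graphics rasterizes the image on the CPU. 
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(overlay), CGImageGetHeight(overlay)), overlay); 

    CGContextRelease(context); 
    CGColorSpaceRelease(colorspace); 
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 
}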

Can anyone offer hints or suggestions on how to do this more efficiently?

I have thought about using OpenGL for this: first create a texture A from the CMSampleBuffer, then render a texture B, created from the image I want to draw, into texture A, then read the data backing texture A out of OpenGL, and finally hand that data to AVAssetWriter/AVAssetWriterInput. But the documentation says that transferring texture data from the GPU back to the CPU is fairly expensive.
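For the last step I would presumably append the modified pixels through an AVAssetWriterInputPixelBufferAdaptor; something like this sketch, where adaptor is a placeholder for the adaptor attached to my AVAssetWriterInput:

// Hands a (possibly modified) pixel buffer to the writer. 'adaptor' is a 
// placeholder for an AVAssetWriterInputPixelBufferAdaptor created for the 
// AVAssetWriterInput. 
- (void)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer atTime:(CMTime)presentationTime { 
    if (adaptor.assetWriterInput.readyForMoreMediaData) { 
        if (![adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime]) { 
            NSLog(@"Failed to append pixel buffer"); 
        } 
    } 
}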

So, any suggestions on how to approach this?

Thanks in advance.

Answer


OpenGL is probably the way to go. Rendering to an offscreen framebuffer rather than to a texture may be slightly more efficient, though.

To extract a texture from the sample buffer:

// Note the caller is responsible for calling glDeleteTextures on the return value. 
- (GLuint)textureFromSampleBuffer:(CMSampleBufferRef)sampleBuffer { 
    GLuint texture = 0; 

    glGenTextures(1, &texture); 
    glBindTexture(GL_TEXTURE_2D, texture); 
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); 
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); 
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); 
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); 

    // Note this assumes the pixel buffer is kCVPixelFormatType_32BGRA and that 
    // its rows are tightly packed (bytes-per-row == width * 4); check 
    // CVPixelBufferGetBytesPerRow if your buffers carry row padding. 
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(pixelBuffer, 0); 
    GLsizei width = (GLsizei)CVPixelBufferGetWidth(pixelBuffer); 
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer); 
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer)); 
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 

    return texture; 
} 
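One caveat on the upload above: since it passes GL_BGRA, it helps if the source hands you 32BGRA buffers in the first place. If your sample buffers come from an AVCaptureVideoDataOutput (an assumption on my part), that would be configured like so:

// Ask the capture pipeline for BGRA frames so the glTexImage2D call above 
// matches the buffer layout (assumes the frames come from capture). 
AVCaptureVideoDataOutput *videoOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease]; 
videoOutput.videoSettings = [NSDictionary dictionaryWithObject: 
                                 [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA] 
                                                         forKey:(id)kCVPixelBufferPixelFormatTypeKey]; 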

To then process that texture through OpenGL, you can do something like this:

// This function exists to free the malloced data when the CGDataProviderRef is 
// eventually freed. 
void dataProviderFreeData(void *info, const void *data, size_t size){ 
    free((void *)data); 
} 

// Returns an autoreleased CGImageRef. 
- (CGImageRef)processTexture:(GLuint)texture width:(int)width height:(int)height { 
    CGImageRef newImage = NULL; 

    // Set up framebuffer and renderbuffer. 
    GLuint framebuffer; 
    glGenFramebuffers(1, &framebuffer); 
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer); 

    GLuint colorRenderbuffer; 
    glGenRenderbuffers(1, &colorRenderbuffer); 
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer); 
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height); 
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer); 

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER); 
    if (status != GL_FRAMEBUFFER_COMPLETE) { 
     NSLog(@"Failed to create OpenGL frame buffer: %x", status); 
    } else { 
     glViewport(0, 0, width, height); 
     glClearColor(0.0,0.0,0.0,1.0); 
     glClear(GL_COLOR_BUFFER_BIT); 

     // Do whatever is necessary to actually draw the texture to the framebuffer 
     [self renderTextureToCurrentFrameBuffer:texture]; 

     // Read the pixels out of the framebuffer. Note glReadPixels returns rows 
     // bottom-up, whereas CGImage expects top-down, so the result will be 
     // vertically flipped unless the render step above compensates for it. 
     void *data = malloc(width * height * 4); 
     glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data); 

     // Convert the data to a CGImageRef. Note that CGDataProviderRef takes 
     // ownership of our malloced data buffer, and the CGImageRef internally 
     // retains the CGDataProviderRef. Hence the callback above, to free the data 
     // buffer when the provider is finally released. 
     CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, width * height * 4, dataProviderFreeData); 
     CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB(); 
     newImage = CGImageCreate(width, height, 8, 32, width*4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, dataProvider, NULL, true, kCGRenderingIntentDefault); 
     CFRelease(dataProvider); 
     CGColorSpaceRelease(colorspace); 

     // Autorelease the CGImageRef 
     newImage = (CGImageRef)[NSMakeCollectable(newImage) autorelease]; 
    } 

    // Clean up the framebuffer and renderbuffer. 
    glDeleteRenderbuffers(1, &colorRenderbuffer); 
    glDeleteFramebuffers(1, &framebuffer); 

    return newImage; 
}
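
Putting the two together for one frame might look roughly like this; processedImageFromSampleBuffer: is just a name I've made up for the wrapper:

// Sketch: run one sample buffer through the two methods above. 
- (CGImageRef)processedImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer { 
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    int width = (int)CVPixelBufferGetWidth(pixelBuffer); 
    int height = (int)CVPixelBufferGetHeight(pixelBuffer); 

    GLuint texture = [self textureFromSampleBuffer:sampleBuffer]; 
    CGImageRef image = [self processTexture:texture width:width height:height]; 
    glDeleteTextures(1, &texture); // we own the texture, per the note above 

    return image; // autoreleased by processTexture:width:height: 
}

From there you still have to get the CGImageRef back into a CVPixelBuffer before appending it to the AVAssetWriterInput, and the glReadPixels call is exactly the GPU-to-CPU copy you were worried about, so it is worth timing both paths on the device before committing to one.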