2011-05-19

Layering OpenGL content over a camera CVImageBufferRef from AVCaptureSession

I have two techniques working separately that I'd now like to merge. I can successfully layer a CATextLayer onto a CVImageBufferRef camera frame and then save the result through an AVAssetWriter using an AVAssetWriterInputPixelBufferAdaptor. I do it like this:

- (void)processNewBuffer:(CVImageBufferRef)cameraFrame {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Update the CALayer on the main queue -- UIKit is not thread safe.
    dispatch_sync(dispatch_get_main_queue(), ^{
        [self updateCALayer];
    });

    if (recorder.recording) {
        CVPixelBufferLockBaseAddress(cameraFrame, 0);

        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(cameraFrame);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
        width = CVPixelBufferGetWidth(cameraFrame);
        height = CVPixelBufferGetHeight(cameraFrame);

        // Wrap the pixel buffer's memory in a CGBitmapContext so the
        // text layer can be drawn directly on top of the camera frame.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        [textLayer renderInContext:newContext];

        [recorder appendPixelBuffer:cameraFrame withPresentationTime:camera.lastSampleTime];

        CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

        // Release the temporary Quartz objects.
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
    }

    [pool drain];
}

This works like a charm. Now for my second trick. Thanks to the answer to this question:

CMSampleBuffer from OpenGL for video output with AVAssestWritter

I was able to modify the OpenGL teapot example from the WWDC 2010 sample code and save its rendered content to a movie file on the iPhone.

Now what I want is to layer the teapot into one corner of the camera frame and save the composite to the movie. The problem I'm running into is basic C: how do I copy from one buffer into another when one buffer is 1280x720 (the camera frame) and the teapot lives in a 320x320 buffer? Another consideration is speed. To keep up with 30 fps, I can't afford round trips through CGImageRef or UIImage; this has to happen as fast as possible. What's the best way to accomplish this?
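The kind of copy being asked about can be sketched as a row-by-row `memcpy` blit. This is a minimal, hypothetical sketch (the function name, the assumed 4-byte BGRA pixels, and the parameters are illustrative, not from the post); it copies a small source buffer into a rectangle of a larger destination, respecting each buffer's own bytes-per-row stride:

```c
#include <stdint.h>
#include <string.h>

/* Copy an entire srcWidth x srcHeight image into a larger destination
   buffer at pixel offset (originX, originY). Both buffers may have row
   padding, so each row is addressed via its own bytes-per-row stride
   and copied with one memcpy. Assumes the rectangle fits inside the
   destination and both buffers share the same bytes-per-pixel layout. */
static void blit_rect(uint8_t *dst, size_t dstBytesPerRow,
                      const uint8_t *src, size_t srcBytesPerRow,
                      size_t originX, size_t originY,
                      size_t srcWidth, size_t srcHeight,
                      size_t bytesPerPixel)
{
    for (size_t row = 0; row < srcHeight; row++) {
        uint8_t *dstRow = dst + (originY + row) * dstBytesPerRow
                              + originX * bytesPerPixel;
        const uint8_t *srcRow = src + row * srcBytesPerRow;
        memcpy(dstRow, srcRow, srcWidth * bytesPerPixel);
    }
}
```

For the sizes in the question this would be called with a 1280x720 destination, a 320x320 source, and `bytesPerPixel` of 4 for BGRA, using the values returned by `CVPixelBufferGetBytesPerRow` for the strides rather than `width * 4`, since CVPixelBuffers are often row-padded.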
