
So I have been working on a video-capture project that lets the user capture images and video and apply filters. I am using the AVFoundation framework, and I have successfully captured still images and captured video frames as UIImage objects... the only thing left is recording the video. I tried stockpiling the UIImages and then converting them into a movie.

Here is my code:

- (void)initCapture {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Camera input
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];

    // Still image output (JPEG)
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [session addOutput:stillImageOutput];

    // Video data output delivering BGRA frames to a serial queue
    captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    [session addOutput:captureOutput];

    [session startRunning];
}




- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Wrap the BGRA pixel buffer in a bitmap context to get a CGImage
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);

    // Apply the app's filter and push the result to the preview image view
    UIImage *ima = [filter applyFilter:image];

    /*if(isRecording == YES)
    {
        [imageArray addObject:ima];
    }
    NSLog(@"Count= %d", imageArray.count);*/

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:ima waitUntilDone:YES];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];
}

I wanted to stockpile the UIImages in a mutable array, but that is a silly idea. Any thoughts? Any help would be appreciated.

Answer


Are you using CIFilter? If not, you should probably look into it for fast, GPU-based transformations.
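For reference, here is a minimal sketch of what a Core Image version of the per-frame filtering could look like. This helper is not from the original post; the CISepiaTone filter and its intensity value are just placeholders for whatever filter chain the app actually uses.

- (UIImage *)filteredImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Wraps the pixel buffer without copying its contents
    CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Example filter; substitute any CIFilter chain here
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:inputImage forKey:kCIInputImageKey];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:kCIInputIntensityKey];
    CIImage *outputImage = [sepia outputImage];

    // Reuse one CIContext; creating it per frame is expensive
    static CIContext *ciContext = nil;
    if (ciContext == nil) {
        ciContext = [[CIContext contextWithOptions:nil] retain];
    }

    CGImageRef cgImage = [ciContext createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(cgImage);
    return result;
}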

You will probably want to record the resulting frames directly to an AVAssetWriter as they are generated. Have a look at Apple's RosyWriter sample code to see how they do this. In short, they use an AVAssetWriter to capture the frames to a temporary file, and then, when recording finishes, save that file to the camera roll.
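Roughly, the AVAssetWriter route looks like the sketch below. This is an illustration under assumed names, not the RosyWriter code itself: assetWriter, writerInput, adaptor and sessionStarted are assumed instance variables, outputPath is an assumed path to a temporary movie file, the frame size is arbitrary, and error handling is omitted.

// Setup, done once when recording starts
NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:outputPath]
                                        fileType:AVFileTypeQuickTimeMovie
                                           error:&error];

NSDictionary *compressionSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        AVVideoCodecH264, AVVideoCodecKey,
                                        [NSNumber numberWithInt:480], AVVideoWidthKey,
                                        [NSNumber numberWithInt:360], AVVideoHeightKey,
                                        nil];
writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                  outputSettings:compressionSettings] retain];
writerInput.expectsMediaDataInRealTime = YES;

adaptor = [[AVAssetWriterInputPixelBufferAdaptor
               assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                          sourcePixelBufferAttributes:nil] retain];
[assetWriter addInput:writerInput];
[assetWriter startWriting];

// Per frame, inside captureOutput:didOutputSampleBuffer:fromConnection:,
// while recording. Filtered pixels would have to be rendered back into a
// CVPixelBufferRef; here the captured buffer is appended unmodified.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if (assetWriter.status == AVAssetWriterStatusWriting) {
    if (!sessionStarted) {              // assumed BOOL ivar
        [assetWriter startSessionAtSourceTime:timestamp];
        sessionStarted = YES;
    }
    if (writerInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:timestamp];
    }
}

// When the user stops recording, finish the file and then copy or export
// it to the camera roll
[writerInput markAsFinished];
[assetWriter finishWriting];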

One caveat, though: RosyWriter only got about 4 fps on my fourth-generation iPod touch, because it laboriously modifies the pixels on the CPU. With GPU-based Core Image filters I was able to reach 12 fps, which in my opinion is still not where it should be.

Good luck!
