AVCaptureSession only gets one frame on iPhone 3GS
2010-08-30 · 6 votes

I have some code that sets up a camera capture session to process frames with OpenCV, and then sets the image property of a UIImageView with the UIImage generated from each frame. When the app launches, the image view's image is nil, and no frames show up until I push another view controller onto the stack and then pop it. The image then stays the same until I do that again. NSLog statements show that the callback is being invoked at roughly the correct frame rate. Any idea why it isn't displaying? I've lowered the frame rate all the way down to 2 frames per second. Is it not processing fast enough?

Here's the code:

- (void)setupCaptureSession { 
    NSError *error = nil; 

    // Create the session 
    AVCaptureSession *session = [[AVCaptureSession alloc] init]; 

    // Configure the session to produce lower resolution video frames, if your 
    // processing algorithm can cope. We'll specify low quality for the 
    // chosen device. 
    session.sessionPreset = AVCaptureSessionPresetLow; 

    // Find a suitable AVCaptureDevice 
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 

    // Create a device input with the device and add it to the session. 
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device 
                     error:&error]; 
    if (!input) { 
     // Handling the error appropriately. 
    } 
    [session addInput:input]; 

    // Create a VideoDataOutput and add it to the session 
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease]; 
    output.alwaysDiscardsLateVideoFrames = YES; 
    [session addOutput:output]; 

    // Configure your output. 
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL); 
    [output setSampleBufferDelegate:self queue:queue]; 
    dispatch_release(queue); 

    // Specify the pixel format 
    output.videoSettings = 
    [NSDictionary dictionaryWithObject: 
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
           forKey:(id)kCVPixelBufferPixelFormatTypeKey]; 


    // If you wish to cap the frame rate to a known value, set minFrameDuration; 
    // CMTimeMake(1, 1) caps it at 1 frame per second. 
    output.minFrameDuration = CMTimeMake(1, 1); 

    // Start the session running to start the flow of data 
    [session startRunning]; 

    // Assign session to an ivar. 
    [self setSession:session]; 
} 

// Create a UIImage from sample buffer data 
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer { 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer 
    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    // Get the number of bytes per row for the pixel buffer 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space 
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    if (!colorSpace) 
    { 
     NSLog(@"CGColorSpaceCreateDeviceRGB failure"); 
     CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 
     return nil; 
    } 

    // Get the base address of the pixel buffer 
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 
    // Get the data size for contiguous planes of the pixel buffer. 
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer); 

    // Create a Quartz direct-access data provider that uses data we supply 
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, 
                   NULL); 
    // Create a bitmap image from data supplied by our data provider 
    CGImageRef cgImage = 
    CGImageCreate(width, 
        height, 
        8, 
        32, 
        bytesPerRow, 
        colorSpace, 
        kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little, 
        provider, 
        NULL, 
        true, 
        kCGRenderingIntentDefault); 
    CGDataProviderRelease(provider); 
    CGColorSpaceRelease(colorSpace); 

    // Create and return an image object representing the specified Quartz image 
    UIImage *image = [UIImage imageWithCGImage:cgImage]; 
    CGImageRelease(cgImage); 

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 

    return image; 
} 


// Delegate routine that is called when a sample buffer was written 
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
     fromConnection:(AVCaptureConnection *)connection { 
    // Create a UIImage from the sample buffer data 
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer]; 
    [self.delegate cameraCaptureGotFrame:image]; 
} 

Answers

6 votes

This might have to do with threading. Try:

[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:NO]; 

Possibly. I'll try it and let you know. – 2010-08-30 18:00:46


I'm getting a bad access error. – 2010-10-01 01:39:13


[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:YES]; Changing waitUntilDone to YES made it work. Now I just need to figure out how to handle landscape orientation. Thanks! – 2010-10-04 05:21:52

0 votes

Are you calling setNeedsDisplay on the UIImageView after each update of its image property?

Edit:

Where and when do you update the image property of your image view?


Tried that, didn't work. Tried setNeedsLayout too – 2010-08-30 17:37:34

3 votes

This looks like a threading problem. You cannot update views from any thread other than the main thread. In your setup (which is otherwise fine), the delegate function captureOutput:didOutputSampleBuffer: is called on a secondary thread, so you cannot set the image view from there. Art Gillespie's answer is one way to solve it, if you can get rid of the bad access error.

Another way is to modify the sample buffer in captureOutput:didOutputSampleBuffer: and display it by adding an AVCaptureVideoPreviewLayer instance to your capture session. That's definitely the preferred way if you only modify small parts of the image, such as highlighting something.
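A minimal sketch of the preview-layer approach, assuming the session ivar from the question's setupCaptureSession and that self.view is the view that should show the video (both names are placeholders for whatever your controller actually uses):

```
// Add a preview layer that renders the capture session's video directly,
// bypassing the manual CMSampleBuffer-to-UIImage conversion entirely.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:self.session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];
```

Since the layer draws the frames itself, this sidesteps the main-thread update problem for display; your delegate callback then only needs to do the OpenCV processing.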

BTW: your bad access error probably occurs because you don't retain the created image on the secondary thread, so it gets freed before cameraCaptureGotFrame is called on the main thread.

Update: to retain the image properly, increment its reference count in captureOutput:didOutputSampleBuffer: (on the secondary thread) and decrement it in cameraCaptureGotFrame: (on the main thread).

// Delegate routine that is called when a sample buffer was written 
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
     didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
     fromConnection:(AVCaptureConnection *)connection 
{ 
    // Create a UIImage from the sample buffer data 
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer]; 

    // increment ref count 
    [image retain]; 
    [self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) 
     withObject:image waitUntilDone:NO]; 

} 

- (void) cameraCaptureGotFrame:(UIImage*)image 
{ 
    // whatever this function does, e.g.: 
    imageView.image = image; 

    // decrement ref count 
    [image release]; 
} 

If you don't increment the reference count, the image is freed by the autorelease pool of the secondary thread before cameraCaptureGotFrame: is called on the main thread. If you don't decrement it on the main thread, the images are never freed and you run out of memory within a few seconds.
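As an aside, under manual reference counting the same hand-off can be written with GCD instead of performSelectorOnMainThread:, which keeps the retain/release pair visible in one place (a sketch, assuming blocks are available, i.e. iOS 4 or later, and the same delegate method as above):

```
// Retain the image on the capture thread so the secondary thread's
// autorelease pool cannot free it, hand it to the main queue, and
// release it there once the delegate has consumed it.
UIImage *image = [[self imageFromSampleBuffer:sampleBuffer] retain];
dispatch_async(dispatch_get_main_queue(), ^{
    [self.delegate cameraCaptureGotFrame:image];
    [image release];
});
```

The block captures the already-retained pointer, so the ownership transfer does not depend on waitUntilDone semantics at all.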


Will definitely try the retain in the method I call on the main thread and let you know soon. – 2010-10-04 04:22:38


Changed the one parameter in Art's answer. Thanks for the input – 2010-10-04 05:23:30


Well, you've seen my code. If you come up with a way to retain it properly and fix the access error, I'll give you the 50-point bounty. – 2010-10-06 01:55:22