
I want to do real-time image processing on the iPhone using OpenCV, with the AVFoundation framework handling video capture.

My end goal is to display the result on the screen in real time, while the camera on the other side captures video through the AVFoundation framework.

How can I process each video frame with OpenCV and display the result on the screen in real time?
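For reference, the capture side of this usually looks like the sketch below: an AVCaptureSession feeding BGRA frames to an AVCaptureVideoDataOutput delegate. The classes and the delegate callback are standard AVFoundation API, but the surrounding setup (and the assumption that self adopts AVCaptureVideoDataOutputSampleBufferDelegate) is illustrative, not code from the original post.

#import <AVFoundation/AVFoundation.h>

- (void)startCapture {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Use the default camera as input.
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Deliver BGRA frames to a delegate callback on a serial queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = [NSDictionary
        dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                      forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
    dispatch_queue_t queue = dispatch_queue_create("video_frames", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    [session addOutput:output];

    [session startRunning];
}

// Called once per captured frame; per-frame processing goes here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... process imageBuffer (see the answer below for the pixel-access pattern) ...
}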


As it stands, this question is too broad to answer. Which part do you need help with: getting video frames with AVFoundation, compiling and using OpenCV on the iPhone, or displaying the overlaid image? What do you want OpenCV to recognize? – 2011-02-08 18:34:02

Answer


Use AVAssetReader:

//Setup Reader
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:urlvalue options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        AVAssetTrack *videoTrack = nil;
        NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
        if ([tracks count] == 1) {
            videoTrack = [tracks objectAtIndex:0];
            NSError *error = nil;
            _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
            if (error)
                NSLog(@"%@", error.localizedDescription); // NSLog expects a format string
            // Request BGRA output so the CGBitmapContext below can read the
            // pixels directly.
            NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
            NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
            NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
            [_movieReader addOutput:[AVAssetReaderTrackOutput
                assetReaderTrackOutputWithTrack:videoTrack
                                 outputSettings:videoSettings]];
            [_movieReader startReading];
        }
    });
}];
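The answer never shows how readNextMovieFrame (below) gets called. One plausible way, assuming _movieReader is an AVAssetReader instance variable on the class, is to pump frames from a CADisplayLink so one frame is pulled per screen refresh. The wiring here is an illustrative assumption, not part of the original answer.

#import <QuartzCore/QuartzCore.h>

- (void)startFramePump {
    // Fire once per display refresh and pull the next movie frame.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(displayLinkFired:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)displayLinkFired:(CADisplayLink *)link {
    [self readNextMovieFrame];
}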

Getting the next movie frame:

static int frameCount = 0;

- (void)readNextMovieFrame {
    if (_movieReader.status != AVAssetReaderStatusReading)
        return;

    AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
    if (sampleBuffer) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        /* Lock the image buffer and read its geometry. */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        /* Create a CGImageRef from the CVImageBufferRef; the buffer must stay
           locked while the bitmap context reads from baseAddress. */
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8,
            bytesPerRow, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);

        /* Release the drawing components and unlock the buffer (exactly once). */
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        /* To display on a custom layer instead:
           self.customLayer.contents = (id)newImage; */

        /* Display on the image view; rotate the image so the video is upright. */
        UIImage *image = [UIImage imageWithCGImage:newImage
                                             scale:1.0
                                       orientation:UIImageOrientationRight];
        CGImageRelease(newImage); /* avoid leaking one CGImage per frame */

        UIGraphicsBeginImageContext(image.size);
        [image drawAtPoint:CGPointMake(0, 0)];
        videoImage = UIGraphicsGetImageFromCurrentImageContext(); /* ivar: latest frame */
        UIGraphicsEndImageContext();

        /* Dump each frame to Documents as a PNG for debugging. */
        NSLog(@"readNextMovieFrame==%d", frameCount);
        NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
        NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
        [UIImagePNGRepresentation(videoImage) writeToFile:pngPath atomically:YES];
        frameCount++;

        CFRelease(sampleBuffer);
    }
}
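As for the OpenCV half of the question: while the pixel buffer is locked, the BGRA bytes can be wrapped in a cv::Mat header without copying and processed in place. A minimal Objective-C++ sketch follows (the file must be compiled as .mm with OpenCV linked in); the grayscale conversion is just an illustrative placeholder for whatever processing you actually want, not part of the original answer.

#import <opencv2/imgproc/imgproc.hpp>

// Wrap the locked BGRA pixel buffer in a cv::Mat header (no pixel copy).
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *base = CVPixelBufferGetBaseAddress(imageBuffer);
size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
cv::Mat bgra((int)CVPixelBufferGetHeight(imageBuffer),
             (int)CVPixelBufferGetWidth(imageBuffer),
             CV_8UC4, base, stride);

// Placeholder processing: convert to grayscale into a separate buffer.
cv::Mat gray;
cv::cvtColor(bgra, gray, cv::COLOR_BGRA2GRAY);
// ... run detection/filtering on gray, draw results back into bgra ...

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);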

Once _movieReader reaches the end (AVAssetReaderStatusCompleted), you need to set it up again: an AVAssetReader cannot be restarted, so create a fresh reader and call startReading on it.
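A small sketch of that restart check; setupReader here is a hypothetical helper standing in for the "Setup Reader" block above, factored into a method:

- (void)readNextMovieFrameOrRestart {
    if (_movieReader.status == AVAssetReaderStatusCompleted) {
        [_movieReader release];   // pre-ARC, matching the answer's era
        _movieReader = nil;
        [self setupReader];       // hypothetical: rebuilds the reader and calls startReading
        return;
    }
    [self readNextMovieFrame];
}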