2012-10-25

I'd like to use Apple's new AVMetadataFaceObject, introduced in iOS 6, in my app; it lets you detect faces. Basically, the way to achieve this is to create an AVCaptureMetadataOutput object and add it as an output to an existing AVCaptureSession. So, I found this link, but how do I initialize the AVCaptureMetadataOutput object?

Following Apple's SquareCam sample code, I tried to create the object like this:

CaptureObject = [[AVCaptureMetadataOutput alloc] init]; 
objectQueue = dispatch_queue_create("VideoDataOutputQueue", NULL); 
[CaptureObject setMetadataObjectsDelegate:self queue:objectQueue]; 

And this is where I set up the session and add it as an output:

- (void)setupAVCapture 
{ 
    NSError *error = nil; 

AVCaptureSession *session = [AVCaptureSession new]; 
if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) 
    [session setSessionPreset:AVCaptureSessionPreset640x480]; 
else 
    [session setSessionPreset:AVCaptureSessionPresetPhoto]; 

// Select a video device, make an input 
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; 
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error]; 
require(error == nil, bail); 

isUsingFrontFacingCamera = NO; 
if ([session canAddInput:deviceInput]) 
    [session addInput:deviceInput]; 

// Make a still image output 
stillImageOutput = [AVCaptureStillImageOutput new]; 
[stillImageOutput addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:AVCaptureStillImageIsCapturingStillImageContext]; 
if ([session canAddOutput:stillImageOutput]) 
    [session addOutput:stillImageOutput]; 
[session addOutput:CaptureObject]; //////HERE/////// 

    // Make a video data output 
videoDataOutput = [AVCaptureVideoDataOutput new]; 

// we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA' 
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject: 
            [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]; 
[videoDataOutput setVideoSettings:rgbOutputSettings]; 
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image) 

// create a serial dispatch queue used for the sample buffer delegate as well as when a still image is captured 
// a serial dispatch queue must be used to guarantee that video frames will be delivered in order 
// see the header doc for setSampleBufferDelegate:queue: for more information 
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL); 
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue]; 

if ([session canAddOutput:videoDataOutput]) 
    [session addOutput:videoDataOutput]; 
[[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:NO]; 

effectiveScale = 1.0; 
previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session]; 
[previewLayer setBackgroundColor:[[UIColor blackColor] CGColor]]; 
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect]; 
CALayer *rootLayer = [previewView layer]; 
[rootLayer setMasksToBounds:YES]; 
[previewLayer setFrame:[rootLayer bounds]]; 
[rootLayer addSublayer:previewLayer]; 
[session startRunning]; 

}

So the delegate should then receive calls to this method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{ 

}

But it never gets called.

Any ideas?

Answer


I suggest you use the main queue for the metadata output. That's the only thing I can see that could be wrong:

AVCaptureMetadataOutput *metadataOutput; 
metadataOutput = [AVCaptureMetadataOutput new]; 
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()]; 
[session addOutput:metadataOutput];
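
One more thing worth checking, as a sketch rather than a confirmed fix: AVCaptureMetadataOutput delivers no callbacks while its metadataObjectTypes array is empty (the default), and availableMetadataObjectTypes is only populated once the output has been added to a session. So after adding the output, you would also request face metadata explicitly. Something like:

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: configure the metadata output for face detection.
// metadataObjectTypes must be set AFTER the output is added to
// the session; requesting an unavailable type throws an exception.
AVCaptureMetadataOutput *metadataOutput = [AVCaptureMetadataOutput new];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
if ([session canAddOutput:metadataOutput])
    [session addOutput:metadataOutput];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];

// Delegate callback (sketch): each AVMetadataFaceObject carries a
// bounds rectangle (in metadata coordinates) and a faceID that is
// stable while the same face stays in frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *object in metadataObjects) {
        if ([object isKindOfClass:[AVMetadataFaceObject class]]) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            NSLog(@"Face %ld at %@", (long)face.faceID,
                  NSStringFromCGRect(face.bounds));
        }
    }
}
```

(The delegate method obviously belongs in your class implementation, not inline with the setup statements; it is shown here together for brevity.)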