
iOS face detection problem: I'm trying to use CoreImage's face detection in iOS 5, but it isn't detecting anything. I want to detect faces in an image that was just taken with the camera, using this code:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { 
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"]; 
    NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];  
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions]; 
    NSArray *features = [faceDetector featuresInImage:image.CIImage]; 
    NSLog(@"Features = %@", features); 
    [self dismissModalViewControllerAnimated:YES]; 
} 

This compiles and runs fine, but no matter what image I give it, the features array is always empty... Any ideas?

Answers


I can't reply directly to your 14:52 comment, Vic320, but I've been playing with front-camera face detection, and I was going in circles because I couldn't get the front camera to pick up my face...

It turns out it's very sensitive to rotation. I noticed that when I held the iPad 2 in portrait (as you'd expect to when using the front camera), my recognition accuracy was under 10%. On a whim, I turned it sideways and got 100% recognition with the front camera.

The simple fix, if you're always using the front camera in portrait, is to add this little snippet:

NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation]; 
NSArray* features = [detector featuresInImage:image options:imageOptions]; 

The 6 there forces the detector to operate in portrait mode. Apple's SquareCam sample has a whole set of utility methods for working out the orientation on the fly if you need to determine it dynamically.
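
For reference, that kind of utility boils down to a switch over the device orientation. The following is only a sketch, not code copied from the sample: the helper name is made up, and the landscape values in particular depend on which camera you use and whether its feed is mirrored, so treat them as a starting point.

// Sketch: map the current device orientation to an EXIF orientation value (1-8)
// suitable for CIDetectorImageOrientation. Assumed helper, not from SquareCam itself.
- (NSNumber *)exifOrientationForDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                                 usingFrontCamera:(BOOL)usingFrontCamera {
    int exifOrientation;
    switch (deviceOrientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            exifOrientation = 8;   // 0th row on the left, 0th column at the bottom
            break;
        case UIDeviceOrientationLandscapeLeft:
            exifOrientation = usingFrontCamera ? 3 : 1;
            break;
        case UIDeviceOrientationLandscapeRight:
            exifOrientation = usingFrontCamera ? 1 : 3;
            break;
        case UIDeviceOrientationPortrait:
        default:
            exifOrientation = 6;   // the "6" used above: 0th row on the right, 0th column at the top
            break;
    }
    return [NSNumber numberWithInt:exifOrientation];
}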


Still useful today! Absolutely brilliant. – 2017-06-24 19:25:49


OK, reading the documentation carefully always helps. In the UIImage documentation, under the CIImage property, it says: "If the UIImage object was initialized using a CGImageRef, the value of the property is nil." Apparently UIImagePickerController initializes the image from a CGImageRef, because this property really is nil. To make the code above work, you need to add:

CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage]; 

and change this line to:

NSArray *features = [faceDetector featuresInImage:ciImage]; 
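
Putting both changes together, the question's delegate method would look roughly like this (a sketch that keeps the rest of the original code as-is):

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info { 
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"]; 
    // image.CIImage is nil here, so build a CIImage from the CGImage explicitly 
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage]; 
    NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil]; 
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions]; 
    NSArray *features = [faceDetector featuresInImage:ciImage]; 
    NSLog(@"Features = %@", features); 
    [self dismissModalViewControllerAnimated:YES]; 
} 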

The other big thing I noticed is that face detection from still images doesn't really work on the low-resolution images from the front camera! It works every time with the high-resolution back camera. Maybe the algorithm is tuned for high resolution...


Thank you very much. – 2013-10-15 08:49:15


Try the following. Assume you have loaded the photo into the variable image:

NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy]; 
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options]; 

CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]]; 
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1]; 
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation]; 
NSArray *features = [detector featuresInImage:ciImage options:fOptions]; 
for (CIFaceFeature *f in features) { 
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO")); 
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO")); 
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO")); 

    if (f.hasLeftEyePosition) 
        NSLog(@"left eye position x = %f, y = %f", f.leftEyePosition.x, f.leftEyePosition.y); 
    if (f.hasRightEyePosition) 
        NSLog(@"right eye position x = %f, y = %f", f.rightEyePosition.x, f.rightEyePosition.y); 
    if (f.hasMouthPosition) 
        NSLog(@"mouth position x = %f, y = %f", f.mouthPosition.x, f.mouthPosition.y); 
} 
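
One extra note when you go to draw these positions (not part of the original answer): Core Image reports coordinates with the origin in the bottom-left corner, while UIKit's origin is top-left, so the values usually need flipping first. A minimal sketch, for a CIFaceFeature *f from the loop above:

// Sketch: flip a detected face rect from Core Image's bottom-left-origin space 
// into UIKit's top-left-origin space (assumes the overlay uses the same pixel size). 
CGRect faceRect = f.bounds; 
CGFloat imageHeight = ciImage.extent.size.height; 
faceRect.origin.y = imageHeight - CGRectGetMaxY(faceRect); 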

None of the answers above worked for me (iOS 8.4) on an iPad mini & iPad Air 2.

I had the same observation as robwormald. Face detection worked when the iPad was rotated, so I rotated the ciImage :)

let ciImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments) 
let angle = CGFloat(-M_PI/2) 
let rotatedImage = ciImage.imageByApplyingTransform(CGAffineTransformMakeRotation(angle)) 