
iOS face detector orientation and setting

EDIT: Found this code, which helped with front-camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/

Hopefully someone else has had a similar problem and can help; I haven't found a solution yet. (This may look a bit long, but it's mostly just a bunch of helper code.)

I'm using the iOS face detector on images captured with the camera (front and back) as well as on images from the gallery. (I'm using UIImagePicker both for capturing images with the camera and for picking images from the gallery; I am not using AVFoundation to take pictures the way the SquareCam demo does.)

I was getting really messed-up coordinates from the detection (when it detected anything at all), so I wrote a short debug method to get the face bounds, plus a utility that draws a square over them, so I could check which orientation the detector was actually working in:

#define RECTBOX(R) [NSValue valueWithCGRect:(R)]
#define RECTUNBOX(V) [(V) CGRectValue] // matching unboxing macro, used in the picker delegate below

- (NSArray *)detectFaces:(UIImage *)inputimage
{
    self.detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                       context:nil
                                       options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                            forKey:CIDetectorAccuracy]];
    // NOTE: UIImageOrientation raw values (0-7) are not the EXIF values (1-8)
    // that CIDetectorImageOrientation expects, and the mapping is not just +1
    // (see the answer below for the correct switch statement)
    NSNumber *orientation = [NSNumber numberWithInt:[inputimage imageOrientation]];
    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:orientation
                                                             forKey:CIDetectorImageOrientation];

    CIImage *ciimage = [CIImage imageWithCGImage:inputimage.CGImage options:imageOptions];

    // try like this first:
    // NSArray *features = [self.detector featuresInImage:ciimage options:imageOptions];
    // if that does not work, fall back to trying all eight EXIF orientations
    NSArray *features = nil;
    NSMutableArray *returnArray = [NSMutableArray array];

    int exif;
    for (exif = 1; exif <= 8; exif++)
    {
        NSNumber *exifOrientation = [NSNumber numberWithInt:exif];
        NSDictionary *exifOptions = [NSDictionary dictionaryWithObject:exifOrientation
                                                                forKey:CIDetectorImageOrientation];

        NSTimeInterval start = [NSDate timeIntervalSinceReferenceDate];

        features = [self.detector featuresInImage:ciimage options:exifOptions];

        NSTimeInterval duration = [NSDate timeIntervalSinceReferenceDate] - start;
        NSLog(@"faceDetection: face detection runtime for exif %d is %f s", exif, duration);

        if (features.count > 0)
        {
            [faceDetection log:[NSString stringWithFormat:@"found faces using exif %d", exif]];
            break;
        }
    }

    if (features.count > 0)
    {
        [faceDetection log:@"-I- Found faces with ios face detector"];
        for (CIFaceFeature *feature in features)
        {
            CGRect rect = feature.bounds;
            // flip y: Core Image uses a bottom-left origin, UIKit a top-left one
            CGRect r = CGRectMake(rect.origin.x,
                                  inputimage.size.height - rect.origin.y - rect.size.height,
                                  rect.size.width,
                                  rect.size.height);
            [returnArray addObject:RECTBOX(r)];
        }
        return returnArray;
    } else {
        // no faces from the iOS face detector -- try the OpenCV detector here
        return returnArray;
    }
}

![debug screenshot][1]

After trying a large number of different photos, I noticed that the face detector's orientation is not consistent with the camera image's orientation property. I took a bunch of photos with the front camera where the UIImage orientation was 3 (when querying imageOrientation), but the face detector found no faces for that setting. When running through all eight EXIF possibilities, the face detector did eventually pick up the faces, but under a different orientation.

[1]:http://i.stack.imgur.com/D7bkZ.jpg

How can I fix this? Is there a mistake in my code?

Another problem I have (closely tied to the face detector): when the face detector picks up the faces, but for the "wrong" orientation (this happens mostly with the front camera), the UIImage I initially used displays correctly in a UIImageView. But when I draw a square overlay (I use OpenCV in my app, so I decided to convert the UIImage to a cv::Mat and draw the overlay with OpenCV), the whole image is rotated 90 degrees (only the cv::Mat image, not the UIImage I displayed initially).

The only reason I can think of is that the face detector is messing with some buffer (context?) that the UIImage-to-cv::Mat conversion is also using. How can I separate these buffers?
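One thing I'm considering is normalizing the UIImage to UIImageOrientationUp by redrawing it before the conversion (essentially what the logichigh code linked above does), so that the pixel buffer and the displayed orientation always agree. A minimal sketch of that idea as a UIImage category (`normalizedImage` is just my working name):

-(UIImage *)normalizedImage
{
    if (self.imageOrientation == UIImageOrientationUp)
        return self; // pixels already match the display orientation

    // redraw so that imageOrientation becomes Up and the raw pixel buffer
    // matches what UIImageView displays; heavier than fixing coordinates,
    // but it removes the mismatch everywhere downstream
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:CGRectMake(0, 0, self.size.width, self.size.height)]; // applies the orientation transform
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}

With that, [[img normalizedImage] CVMat] should no longer come out rotated.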

The code for converting a UIImage to a cv::Mat is (from the "famous" UIImage category someone posted):

-(cv::Mat)CVMat
{
    // NOTE: this draws the raw CGImage, so self.imageOrientation is ignored;
    // a camera image whose orientation is not "Up" will come out rotated in
    // the cv::Mat (which matches the 90-degree rotation described above)
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (id)initWithCVMat:(const cv::Mat&)cvMat 
{ 
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()]; 

    CGColorSpaceRef colorSpace; 

    if (cvMat.elemSize() == 1) 
    { 
     colorSpace = CGColorSpaceCreateDeviceGray(); 
    } 
    else 
    { 
     colorSpace = CGColorSpaceCreateDeviceRGB(); 
    } 

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data); 

    CGImageRef imageRef = CGImageCreate(cvMat.cols,          // Width 
              cvMat.rows,          // Height 
              8,            // Bits per component 
              8 * cvMat.elemSize(),       // Bits per pixel 
              cvMat.step[0],         // Bytes per row 
              colorSpace,          // Colorspace 
              kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags 
              provider,          // CGDataProviderRef 
              NULL,           // Decode 
              false,           // Should interpolate 
              kCGRenderingIntentDefault);      // Intent 

    self = [self initWithCGImage:imageRef]; 
    CGImageRelease(imageRef); 
    CGDataProviderRelease(provider); 
    CGColorSpaceRelease(colorSpace); 

    return self; 
} 

-(cv::Mat)CVRgbMat
{
    cv::Mat tmpimage = self.CVMat;
    cv::Mat image;
    cv::cvtColor(tmpimage, image, cv::COLOR_BGRA2BGR); // drop the alpha channel
    return image;
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo {
    self.prevImage = img;
    NSArray *arr = [[faceDetection sharedFaceDetector] detectFaces:img];
    UIImage *marked = img;
    for (id r in arr)
    {
        CGRect rect = RECTUNBOX(r);
        // draw onto the accumulated image, otherwise each iteration would
        // overwrite the previous face's square
        marked = [utils drawSquareOnImage:marked square:rect];
    }
    self.previewView.image = marked;
    [self.imgPicker dismissModalViewControllerAnimated:YES];
}

Use [this code](http://blog.logichigh.com/2008/06/05/uiimage-fix/) before running the face detector and you will never have orientation problems –

Answer


I don't think it's a good idea to rotate a whole pile of image pixels just to match the CIFaceFeature; as you can imagine, redrawing at the rotated orientation is very heavy. I ran into the same problem, and I solved it by converting the coordinate system of the CIFaceFeature with respect to the UIImageOrientation. I extended the CIFaceFeature class with some conversion methods to get the correct point locations and bounds with respect to the UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted in this gist: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion, turning a point from a CIFaceFeature into a CGPoint adjusted for the image's orientation:

- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint { 

    CGFloat imageWidth = image.size.width; 
    CGFloat imageHeight = image.size.height; 

    CGPoint convertedPoint; 

    switch (image.imageOrientation) { 
     case UIImageOrientationUp: 
      convertedPoint.x = originalPoint.x; 
      convertedPoint.y = imageHeight - originalPoint.y; 
      break; 
     case UIImageOrientationDown: 
      convertedPoint.x = imageWidth - originalPoint.x; 
      convertedPoint.y = originalPoint.y; 
      break; 
     case UIImageOrientationLeft: 
      convertedPoint.x = imageWidth - originalPoint.y; 
      convertedPoint.y = imageHeight - originalPoint.x; 
      break; 
     case UIImageOrientationRight: 
      convertedPoint.x = originalPoint.y; 
      convertedPoint.y = originalPoint.x; 
      break; 
     case UIImageOrientationUpMirrored: 
      convertedPoint.x = imageWidth - originalPoint.x; 
      convertedPoint.y = imageHeight - originalPoint.y; 
      break; 
     case UIImageOrientationDownMirrored: 
      convertedPoint.x = originalPoint.x; 
      convertedPoint.y = originalPoint.y; 
      break; 
     case UIImageOrientationLeftMirrored: 
      convertedPoint.x = imageWidth - originalPoint.y; 
      convertedPoint.y = originalPoint.x; 
      break; 
     case UIImageOrientationRightMirrored: 
      convertedPoint.x = originalPoint.y; 
      convertedPoint.y = imageHeight - originalPoint.x; 
      break; 
     default: 
      break; 
    } 
    return convertedPoint; 
} 
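A full CGRect can be converted with the same machinery: transform two opposite corners and rebuild an axis-aligned rect from them. A simplified sketch of the idea (the gist's boundsForImage: is the complete version; rectForImage:fromRect: is just an illustrative name):

- (CGRect) rectForImage:(UIImage *)image fromRect:(CGRect)originalRect
{
    // All eight orientations are axis-aligned flips/rotations, so converting
    // two opposite corners and re-normalizing yields the converted rect.
    CGPoint p1 = [self pointForImage:image fromPoint:originalRect.origin];
    CGPoint p2 = [self pointForImage:image
                           fromPoint:CGPointMake(CGRectGetMaxX(originalRect),
                                                 CGRectGetMaxY(originalRect))];
    return CGRectMake(MIN(p1.x, p2.x), MIN(p1.y, p2.y),
                      fabs(p2.x - p1.x), fabs(p2.y - p1.y));
}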

And here are the category methods built on top of the conversion above:

// Get converted features with respect to the imageOrientation property 
- (CGPoint) leftEyePositionForImage:(UIImage *)image; 
- (CGPoint) rightEyePositionForImage:(UIImage *)image; 
- (CGPoint) mouthPositionForImage:(UIImage *)image; 
- (CGRect) boundsForImage:(UIImage *)image; 

// Get normalized features (0-1) with respect to the imageOrientation property 
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image; 
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image; 
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image; 
- (CGRect) normalizedBoundsForImage:(UIImage *)image; 

// Get feature location inside of a given UIView size with respect to the imageOrientation property 
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize; 
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize; 
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize; 
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize; 

One more thing to note: when extracting face features from a UIImage, you need to hand the detector the EXIF orientation that corresponds to the UIImage's orientation. It's quite confusing... this is what I did:

int exifOrientation; 
switch (self.image.imageOrientation) { 
    case UIImageOrientationUp: 
     exifOrientation = 1; 
     break; 
    case UIImageOrientationDown: 
     exifOrientation = 3; 
     break; 
    case UIImageOrientationLeft: 
     exifOrientation = 8; 
     break; 
    case UIImageOrientationRight: 
     exifOrientation = 6; 
     break; 
    case UIImageOrientationUpMirrored: 
     exifOrientation = 2; 
     break; 
    case UIImageOrientationDownMirrored: 
     exifOrientation = 4; 
     break; 
    case UIImageOrientationLeftMirrored: 
     exifOrientation = 5; 
     break; 
    case UIImageOrientationRightMirrored: 
     exifOrientation = 7; 
     break; 
    default: 
     break; 
} 

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh }; 
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions]; 

NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage] 
              options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];
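With `features` in hand, the category methods above replace any manual flipping of `feature.bounds`. For example (a sketch, assuming `self.image` is displayed in a hypothetical `self.imageView`):

for (CIFaceFeature *feature in features)
{
    // orientation-aware rect in the coordinate space of the view
    // displaying the image (method from the gist above)
    CGRect faceRect = [feature boundsForImage:self.image
                                       inView:self.imageView.bounds.size];
    // ... position an overlay at faceRect ...
}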