
I am modifying Apple's SquareCam sample face-detection app so that it crops the face out of the image before writing to the camera roll, instead of drawing the red square around the face. I use the same CGRect for the crop that is used to draw the red square, but the behavior is different. In portrait mode, if the face is at the horizontal center of the screen, the face is cropped as expected (same position as the red square). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen rather than from where the red square was.

Here is Apple's original code:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
              inCGImage:(CGImageRef)backgroundImage 
             withOrientation:(UIDeviceOrientation)orientation 
              frontFacing:(BOOL)isFrontFacing 
{ 
    CGImageRef returnImage = NULL; 
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage)); 
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size); 
    CGContextClearRect(bitmapContext, backgroundImageRect); 
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage); 
    CGFloat rotationDegrees = 0.; 

    switch (orientation) { 
     case UIDeviceOrientationPortrait: 
      rotationDegrees = -90.; 
      break; 
     case UIDeviceOrientationPortraitUpsideDown: 
      rotationDegrees = 90.; 
      break; 
     case UIDeviceOrientationLandscapeLeft: 
      if (isFrontFacing) rotationDegrees = 180.; 
      else rotationDegrees = 0.; 
      break; 
     case UIDeviceOrientationLandscapeRight: 
      if (isFrontFacing) rotationDegrees = 0.; 
      else rotationDegrees = 180.; 
      break; 
     case UIDeviceOrientationFaceUp: 
     case UIDeviceOrientationFaceDown: 
     default: 
      break; // leave the layer in its last known orientation 
    } 
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees]; 

    // features found by the face detector 
    for (CIFaceFeature *ff in features) { 
     CGRect faceRect = [ff bounds]; 
     NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect)); 
     CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]); 
    } 
    returnImage = CGBitmapContextCreateImage(bitmapContext); 
    CGContextRelease (bitmapContext); 

    return returnImage; 
} 

And my replacement:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
              inCGImage:(CGImageRef)backgroundImage 
             withOrientation:(UIDeviceOrientation)orientation 
              frontFacing:(BOOL)isFrontFacing 
{ 
    CGImageRef returnImage = NULL; 

    //I'm only taking pics with one face. This is just for testing 
    for (CIFaceFeature *ff in features) { 
     CGRect faceRect = [ff bounds]; 
     returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect); 
    } 

    return returnImage; 
} 

Update:

Based on Wain's input, I tried to make my code more like the original, but the result is the same:

- (NSArray*)extractFaceImages:(NSArray *)features 
       fromCGImage:(CGImageRef)sourceImage 
      withOrientation:(UIDeviceOrientation)orientation 
       frontFacing:(BOOL)isFrontFacing 
{ 
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease]; 

    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage)); 
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size); 
    CGContextClearRect(bitmapContext, backgroundImageRect); 
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage); 
    CGFloat rotationDegrees = 0.; 

    switch (orientation) { 
        case UIDeviceOrientationPortrait: 
            rotationDegrees = -90.; 
            break; 
        case UIDeviceOrientationPortraitUpsideDown: 
            rotationDegrees = 90.; 
            break; 
        case UIDeviceOrientationLandscapeLeft: 
            if (isFrontFacing) rotationDegrees = 180.; 
            else rotationDegrees = 0.; 
            break; 
        case UIDeviceOrientationLandscapeRight: 
            if (isFrontFacing) rotationDegrees = 0.; 
            else rotationDegrees = 180.; 
            break; 
        case UIDeviceOrientationFaceUp: 
        case UIDeviceOrientationFaceDown: 
        default: 
            break; // leave the layer in its last known orientation 
    } 
    // note: rotationDegrees is computed but no longer applied here; in the 
    // original it only rotated the red square overlay image 

    // features found by the face detector 
    for (CIFaceFeature *ff in features) { 
        CGRect faceRect = [ff bounds]; 

        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect)); 

        // render the context once, crop to the face, and release the 
        // intermediate images so they don't leak 
        CGImageRef renderedImage = CGBitmapContextCreateImage(bitmapContext); 
        CGImageRef faceImage = CGImageCreateWithImageInRect(renderedImage, faceRect); 
        CGImageRelease(renderedImage); 
        UIImage *clippedFace = [UIImage imageWithCGImage:faceImage]; 
        CGImageRelease(faceImage); // UIImage retains its own reference 
        [faceImages addObject:clippedFace]; 
    } 

    CGContextRelease(bitmapContext); 

    return faceImages; 
}

I took three pictures and logged faceRect, with these results:

Photo taken with the face near the left edge of the device. The captured image misses the face entirely: faceRect={{972, 43.0312}, {673.312, 673.312}}

Photo taken with the face in the middle of the device. The captured image is good: faceRect={{1060.59, 536.625}, {668.25, 668.25}}

Photo taken with the face near the right edge of the device. The captured image misses the face entirely, off to the left: faceRect={{982.125, 999.844}, {804.938, 804.938}}

So it looks like "x" and "y" are reversed. I am holding the device in portrait, but faceRect seems to be based on landscape. However, I can't figure out which part of Apple's original code accounts for this. The orientation code in that method only seems to affect the red square overlay image itself.
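
One way to confirm which coordinate space faceRect is in is to log the source image's dimensions next to each rect. A minimal diagnostic sketch, using the same variables as the method above: if the image reports landscape dimensions (width greater than height) while the device is held in portrait, the rects are in the sensor's landscape space rather than the screen's.

// Diagnostic: log the buffer dimensions alongside each face rect to see 
// which image axis each faceRect coordinate tracks. 
NSLog(@"image=%zux%zu faceRect=%@", 
      CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage), 
      NSStringFromCGRect(faceRect));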


Did you try my answer, and are you still seeing the problem? – Wain

Answers


You should keep all of the original code and just add one line before the return (with one tweak: move the image generation inside the loop, since you are only cropping the first face):

returnImage = CGImageCreateWithImageInRect(returnImage, faceRect); 

This allows the image to be rendered with the correct orientation, which means the face rectangle will be in the correct place.
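
Applied to the original method, the end of the loop might look like the sketch below. This is only an illustration of that suggestion, not the sample's own code: the red-square drawing is dropped since the goal is cropping, renderedImage is a name introduced here, and with several faces only the last crop is kept.

// features found by the face detector 
for (CIFaceFeature *ff in features) { 
    CGRect faceRect = [ff bounds]; 
    // Render the correctly oriented context, then crop it to the face. 
    CGImageRef renderedImage = CGBitmapContextCreateImage(bitmapContext); 
    returnImage = CGImageCreateWithImageInRect(renderedImage, faceRect); 
    CGImageRelease(renderedImage); // release the intermediate full frame 
} 
CGContextRelease(bitmapContext); 

return returnImage;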


The orientation logic in the original code just rotates the red square image (squarePNG.png), because it has the words "top" and "bottom" baked into the image; I don't think that matters. In any case, I tried making my code more like the original and the result is the same. It only works when the face is directly in the (horizontal) middle of the device. – ax123man


The rotation should be applied to the image coming from the camera. After rotating, the source image is "fixed", and then you can apply the face rect. – Wain
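
For illustration, "rotating the source image first" could be done with a bitmap context along these lines. This is a sketch rather than code from the sample; createRotatedImage is a hypothetical helper and only handles the 90-degree steps used by the orientation switch above (the caller owns the returned image).

// Hypothetical helper: rotate a CGImage in 90-degree steps by drawing it 
// into a rotated bitmap context. 
CGImageRef createRotatedImage(CGImageRef image, CGFloat degrees) 
{ 
    size_t width = CGImageGetWidth(image); 
    size_t height = CGImageGetHeight(image); 
    BOOL swapsAxes = (fmod(fabs(degrees), 180.) == 90.); // 90 or -90 swaps width/height 
    size_t newWidth = swapsAxes ? height : width; 
    size_t newHeight = swapsAxes ? width : height; 

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0, 
                                                 colorSpace, 
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst); 
    CGColorSpaceRelease(colorSpace); 

    // Rotate around the center of the destination, then draw the source 
    // centered on the origin so it lands inside the new bounds. 
    CGContextTranslateCTM(context, newWidth / 2., newHeight / 2.); 
    CGContextRotateCTM(context, degrees * M_PI / 180.); 
    CGContextDrawImage(context, 
                       CGRectMake(-(CGFloat)width / 2., -(CGFloat)height / 2., width, height), 
                       image); 

    CGImageRef rotated = CGBitmapContextCreateImage(context); 
    CGContextRelease(context); 
    return rotated; 
}

Face detection would then run on the rotated image, so its bounds line up with what you crop (subject to the vertical flip discussed in the next answer).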


You are seeing this problem because the saved image is flipped vertically, so the faceRect position no longer coincides with the face. You can solve it by flipping faceRect vertically within returnImage:

for (CIFaceFeature *ff in features) { 
    CGRect faceRect = [ff bounds]; 
    CGRect modifiedRect = CGRectFlipVertical(faceRect, CGRectMake(0, 0, CGImageGetWidth(returnImage), CGImageGetHeight(returnImage))); 
    CGImageRef faceImage = CGImageCreateWithImageInRect(returnImage, modifiedRect); 
    UIImage *clippedFace = [UIImage imageWithCGImage:faceImage]; 
    CGImageRelease(faceImage); // UIImage keeps its own reference 
    [faceImages addObject:clippedFace]; 
} 

where CGRectFlipVertical(CGRect innerRect, CGRect outerRect) can be defined like this:

CGRect CGRectFlipVertical(CGRect innerRect, CGRect outerRect) 
{ 
    CGRect rect = innerRect; 
    rect.origin.y = outerRect.origin.y + outerRect.size.height - (rect.origin.y + rect.size.height); 
    return rect; 
}
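
In effect, the helper mirrors only origin.y inside the outer rect, converting between the detector's bottom-left-origin coordinates and the top-left origin that CGImageCreateWithImageInRect expects; x, width, and height are unchanged. A small usage sketch with the variables from the loop above:

// Flip the face rect into the cropping coordinate space before cutting. 
CGRect imageBounds = CGRectMake(0, 0, CGImageGetWidth(returnImage), CGImageGetHeight(returnImage)); 
CGRect cropRect = CGRectFlipVertical([ff bounds], imageBounds); 
// cropRect.origin.y == image height - (faceRect.origin.y + faceRect.size.height)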