2013-02-24 65 views
1

I need to crop out a face (or multiple faces) from a given image and use the cropped face image for other purposes. I am using CIDetectorTypeFace from CoreImage. The problem is that the new UIImage contains only the detected face, and it needs to be larger because the hair or the chin gets cut off. How can I increase the size of initWithFrame:faceFeature.bounds? Sample code I am using: cropping a face with CoreImage

CIImage* image = [CIImage imageWithCGImage:staticBG.image.CGImage];
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
NSArray* features = [detector featuresInImage:image];

for (CIFaceFeature* faceFeature in features)
{
    UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];
    [staticBG addSubview:faceView];

    // cropping the face
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
    [resultView setImage:[UIImage imageWithCGImage:imageRef]];
    CGImageRelease(imageRef);
}

Note: the red frames I added to show the detected face regions do not match the cropped-out images at all. Maybe I am not displaying the frames correctly, but since I only need the cropped faces and do not need to show the frames, I am not too worried about it.
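The mismatch between the red frames and the cropped images is most likely a coordinate-system issue: CoreImage reports face bounds in a space whose origin is at the bottom-left, while UIKit views use a top-left origin, so the rect has to be flipped vertically before it is used as a view frame. A minimal sketch of the flip in plain C, assuming the flip is done against the image height (the name flipRectVertically and the Rect struct are mine, not part of any API):

```c
/* Minimal stand-in for CGRect so the flip math is self-contained. */
typedef struct { double x, y, width, height; } Rect;

/* Convert a rect from a bottom-left-origin space (CoreImage)
   to a top-left-origin space (UIKit), given the image height. */
Rect flipRectVertically(Rect r, double imageHeight) {
    Rect flipped = r;
    flipped.y = imageHeight - r.y - r.height;
    return flipped;
}
```

In the question's code this would be applied to faceFeature.bounds before creating faceView; the crop itself (CGImageCreateWithImageInRect) operates on the CGImage and may need the unflipped rect, which is one way the frame and the crop can disagree.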

Answer

4

Not sure, but you could try

CGRect biggerRectangle = CGRectInset(faceFeature.bounds, someNegativeCGFloatToIncreaseSizeForXAxis, someNegativeCGFloatToIncreaseSizeForYAxis); 
CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle); 

https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset
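CGRectInset with negative insets grows the rectangle by that amount on each side while keeping its center fixed. The sketch below reimplements that behavior in plain C (the names insetRect and clampToImage are mine, standing in for CGRectInset and a hand-rolled clamp), and also clamps the enlarged rect to the image bounds, since a crop rect reaching outside the bitmap gets intersected with it and a rect entirely outside yields no image at all:

```c
/* Minimal stand-in for CGRect so the math is self-contained. */
typedef struct { double x, y, width, height; } Rect;

/* Grow (negative dx/dy) or shrink (positive dx/dy) a rect about its
   center, mirroring the semantics of CGRectInset. */
Rect insetRect(Rect r, double dx, double dy) {
    Rect out = { r.x + dx, r.y + dy, r.width - 2.0 * dx, r.height - 2.0 * dy };
    return out;
}

/* Clamp a rect to an image of the given size, so an enlarged face
   rect never reaches outside the bitmap. */
Rect clampToImage(Rect r, double imageWidth, double imageHeight) {
    if (r.x < 0) { r.width += r.x; r.x = 0; }
    if (r.y < 0) { r.height += r.y; r.y = 0; }
    if (r.x + r.width > imageWidth)   r.width  = imageWidth  - r.x;
    if (r.y + r.height > imageHeight) r.height = imageHeight - r.y;
    return r;
}
```

So for a 50x50 face rect at (100, 100), insets of (-20, -30) produce a 90x110 rect at (80, 70): each negative inset moves the corresponding edge outward by its magnitude on both sides.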

+0

Thank you so much... it works! I also used CGRectOffset to make it a bit more precise. :) – Soumyajit 2013-02-24 16:23:19

+0

You're welcome! If the answer solved your problem, please consider marking it as accepted (the checkmark on the left). – blub 2013-02-24 16:49:16