2012-12-06

To detect faces from the camera I am trying SquareCam (from the iOS Developer Library). But I am not able to show the positions of the left eye, right eye, and mouth. I am using the code below. How can I detect and display the eyes and mouth from the camera on the iPhone?

NSArray *sublayers = [NSArray arrayWithArray:[previewLayer sublayers]]; 
NSInteger sublayersCount = [sublayers count], currentSublayer = 0; 
NSInteger featuresCount = [features count], currentFeature = 0; 

[CATransaction begin]; 
[CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions]; 

// hide all the face layers 
for (CALayer *layer in sublayers) 
{ 
    if ([[layer name] isEqualToString:@"FaceLayer"]) // must match the name set below
     [layer setHidden:YES]; 
} 

if (featuresCount == 0 || !detectFaces) { 
    [CATransaction commit]; 
    return; // early bail. 
} 

CGSize parentFrameSize = [previewView frame].size; 
NSString *gravity = [previewLayer videoGravity]; 
BOOL isMirrored = [previewLayer isMirrored]; 
CGRect previewBox = [SquareCamViewController videoPreviewBoxForGravity:gravity 
                  frameSize:parentFrameSize 
                  apertureSize:clap.size]; 

for (CIFaceFeature *ff in features) { 


    // find the correct position for the square layer within the previewLayer 
    // the feature box originates in the bottom left of the video frame. 
    // (Bottom right if mirroring is turned on) 
    CGRect faceRect = [ff bounds]; 

    CGRect leftEyeFrameRect; 

    CGFloat temp    = faceRect.size.width; 
    faceRect.size.width  = faceRect.size.height; 
    faceRect.size.height  = temp; 
    temp      = faceRect.origin.x; 
    faceRect.origin.x  = faceRect.origin.y; 
    faceRect.origin.y  = temp; 

    // scale coordinates so they fit in the preview box, which may be scaled 
    CGFloat widthScaleBy  = previewBox.size.width/clap.size.height; 
    CGFloat heightScaleBy = previewBox.size.height/clap.size.width; 
    faceRect.size.width  *= widthScaleBy; 
    faceRect.size.height *= heightScaleBy; 
    faceRect.origin.x  *= widthScaleBy; 
    faceRect.origin.y  *= heightScaleBy; 


    if (isMirrored) 
    { 

     faceRect = CGRectOffset(faceRect, previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), previewBox.origin.y); 


    } 
    else 
    { 

     faceRect = CGRectOffset(faceRect, previewBox.origin.x, previewBox.origin.y); 
      leftEyeFrameRect=CGRectOffset(faceRect,ff.leftEyePosition.x, ff.leftEyePosition.y); 

    } 

    CALayer *featureLayer = nil; 
    CALayer *eyeLayer  = nil; 

    // re-use an existing layer if possible 
    while (!featureLayer && (currentSublayer < sublayersCount)) 

    { 
     CALayer *currentLayer = [sublayers objectAtIndex:currentSublayer++]; 
     if ([[currentLayer name] isEqualToString:@"FaceLayer"]) { 
      featureLayer = currentLayer; 

      [currentLayer setHidden:NO]; 
     } 
    } 



    // create a new one if necessary 
    if (!featureLayer) { 

     featureLayer = [CALayer new]; 
     [featureLayer setContents:(id)[square CGImage]]; 
     [featureLayer setName:@"FaceLayer"]; 
     [previewLayer addSublayer:featureLayer]; 
     [featureLayer release]; 

    } 
    [featureLayer setFrame:faceRect]; 


    if (faceView !=nil) { 
     [faceView removeFromSuperview]; 
     [faceView release]; 
    } 
    if (leftEyeView != nil) { 
     [leftEyeView removeFromSuperview]; 
     [leftEyeView release]; 
    } 

    faceView = [[UIView alloc] initWithFrame:CGRectMake(faceRect.origin.x, faceRect.origin.y ,faceRect.size.width, faceRect.size.height)]; 

    faceView.layer.borderWidth = 1; 
    faceView.layer.borderColor = [[UIColor redColor] CGColor]; 
    [self.view addSubview:faceView]; 


    leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x+(faceView.frame.size.height/2), faceView.frame.origin.y+(faceView.frame.size.height*0.10) ,faceView.frame.size.width*0.40, faceView.frame.size.height*0.40)]; 

    UIImageView *leftEyeImageView=[[UIImageView alloc] initWithImage:[UIImage imageNamed:@"eye.png"]]; 
    leftEyeImageView.frame = CGRectMake(0, 0, faceView.frame.size.width*0.40, faceView.frame.size.height*0.40); 
    [leftEyeView addSubview:leftEyeImageView]; 
    [self.view addSubview:leftEyeView]; 



    if (ff.hasLeftEyePosition) { 
     CGPoint leftEyeCenter= ff.leftEyePosition; 
     UIView *vv= [[UIView alloc] initWithFrame:CGRectMake(leftEyeCenter.x, leftEyeCenter.y, 50, 50)]; 
     vv.center = leftEyeCenter; 
     vv.layer.borderWidth= 4.0; 
     vv.layer.borderColor= [[UIColor blackColor]CGColor]; 
     [self.view addSubview:vv]; 
    } 

It detects the eyes, but does not show them at the correct positions. Can anyone help fix this issue? Thanks in advance.
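The likely issue can be seen by comparing how the sample transforms faceRect with how the code above uses ff.leftEyePosition raw: each landmark point also needs its axes swapped, then scaling and the preview-box offset applied, before it can be placed on screen. A minimal C sketch of that transform (plain structs stand in for CGPoint/CGRect; all names are illustrative, not from the sample):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } Point2D;
typedef struct { float x, y, w, h; } Rect2D;

/* Map a CIFaceFeature landmark from video-frame coordinates (landscape)
 * into preview-layer coordinates: swap the axes, scale into the preview
 * box, then offset (mirrored horizontally for the front camera). */
static Point2D mapLandmarkToPreview(Point2D eye, Rect2D previewBox,
                                    float clapW, float clapH, int mirrored)
{
    float widthScale  = previewBox.w / clapH;  /* axes are swapped, so the   */
    float heightScale = previewBox.h / clapW;  /* scales cross w and h too   */
    Point2D p = { eye.y * widthScale, eye.x * heightScale }; /* swap x/y */
    p.x = mirrored ? previewBox.x + previewBox.w - p.x : p.x + previewBox.x;
    p.y += previewBox.y;
    return p;
}
```

The same function can then feed both the face square and each eye marker, so all of them go through one consistent transform.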

Maybe this will help you: http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/ – TonyMkenu

"does not show them at the correct positions" – please be more specific: what exactly is wrong? – jimpic

The eye markers are not shown at the eyes' actual positions on the face. – deepti

Answers


This is probably caused by a mismatch in orientation between your input, the detector, and the output. If the face is detected, it may simply be that the output coordinates need to be converted from landscape to portrait, or vice versa. Otherwise, have a look here.


You could look into Haar cascade training files and OpenCV, but that is a completely different approach. It does support iOS versions below 6.0, which is a plus. It is, however, harder than Apple's SquareCam sample.

With OpenCV you can detect ears, eyes, and more. There are already some training files available online.

Another approach is to do some math on the face: the positions of the eyes can be (more or less) predefined if you take the height and width of your `faceRect` into account. – BarryK88
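That rule of thumb can be sketched in C (the 0.30/0.70/0.35/0.75 ratios are illustrative guesses for a frontal face, not values from this thread):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } Pt;
typedef struct { float x, y, w, h; } FaceBox;

/* Rough landmark estimates from the face rect alone: eyes at roughly
 * 30%/70% across and 35% down, mouth centered at about 75% down.
 * The ratios are assumptions for illustration, not measured values. */
static Pt estimateLeftEye(FaceBox f)  { return (Pt){ f.x + f.w * 0.30f, f.y + f.h * 0.35f }; }
static Pt estimateRightEye(FaceBox f) { return (Pt){ f.x + f.w * 0.70f, f.y + f.h * 0.35f }; }
static Pt estimateMouth(FaceBox f)    { return (Pt){ f.x + f.w * 0.50f, f.y + f.h * 0.75f }; }
```

This avoids the landmark coordinate transform entirely, at the cost of accuracy when the head is tilted.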


I ran into the same problem as you when using the front camera, because the preview is mirrored, and I could not find any good information on how to scale the coordinates.

The code below is the closest I got. Note that I define my image as heartImage, and I assume you are using the Apple sample called SquareCam.

In the method - (void)drawFaceBoxesForFeatures:(NSArray *)features forVideoBox:(CGRect)clap orientation:(UIDeviceOrientation)orientation:

 if(ff.hasLeftEyePosition) 
    { 
     //swap coordinates 
     CGFloat leftEyeRectOriginX = ff.leftEyePosition.y ; 
     CGFloat leftEyeRectOriginY = ff.leftEyePosition.x ; 
     CGFloat leftEyeRectWidth = faceRect.size.width*0.3; 
     CGFloat leftEyeRectHeight = faceRect.size.width*0.3; 

     //adjust scale 
     leftEyeRectOriginX *= widthScaleBy; 
     leftEyeRectOriginY *= heightScaleBy; 

     NSLog(@"LeftEyePosition: %@", NSStringFromCGPoint(ff.leftEyePosition)); 
     CGRect r = CGRectMake(leftEyeRectOriginX - (leftEyeRectWidth/2) , leftEyeRectOriginY - (leftEyeRectHeight/2), leftEyeRectWidth, leftEyeRectHeight); 

     if (isMirrored){ 
      r = CGRectOffset(r, previewBox.origin.x + previewBox.size.width - (leftEyeRectOriginX*2) - leftEyeRectWidth + faceRect.origin.x, previewBox.origin.y); 
      NSLog(@"LeftEyeRect mirrored: %@", NSStringFromCGRect(r)); 
     } 
     else{ 
      r = CGRectOffset(r, previewBox.origin.x, previewBox.origin.y); 
     } 


     CALayer *leftEyeEyeLayer = nil; // declare before the reuse search
     while (!leftEyeEyeLayer && (currentSublayer < sublayersCount)) { 
      CALayer *currentLayer = [sublayers objectAtIndex:currentSublayer++]; 
      if ([[currentLayer name] isEqualToString:@"LeftEyeLayer"]) { 
       leftEyeEyeLayer = currentLayer; 
       [currentLayer setHidden:NO]; 
      } 
     } 

     // create a new one if necessary 
     if (!leftEyeEyeLayer) { 
      leftEyeEyeLayer = [CALayer new]; 
      [leftEyeEyeLayer setContents:(id)[heartImage CGImage]]; 
      [leftEyeEyeLayer setName:@"LeftEyeLayer"]; 
      [previewLayer addSublayer:leftEyeEyeLayer]; 
      [leftEyeEyeLayer release]; 
     } 
     [leftEyeEyeLayer setFrame:r]; 

    } 

The same applies for the right eye, except that in my case the mirrored branch uses: r = CGRectOffset(r, previewBox.origin.x + previewBox.size.width - (rightEyeRectOriginX*2) - rightEyeRectWidth + faceRect.origin.x, previewBox.origin.y);. The only difference from the sample code is that you first want to hide all the feature layers, so the corresponding lines of mine look like this:

// hide all the face layers 
for (CALayer *layer in sublayers) { 
    if ([[layer name] isEqualToString:@"FaceLayer"] || [[layer name] isEqualToString:@"LeftEyeLayer"] || [[layer name] isEqualToString:@"RightEyeLayer"]) 
     [layer setHidden:YES]; 
} 
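The swap/scale/center steps in the left-eye snippet above can be condensed into one function. A C sketch (a plain struct stands in for CGRect; only the non-mirrored branch is shown, and the names are illustrative):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, w, h; } Box;

/* The answer's left-eye mapping: swap the landmark's axes, scale into
 * the preview box, then center a square sized at 30% of the face width
 * on the result. Mirroring is left out for brevity. */
static Box leftEyeBox(float eyeX, float eyeY, float faceW,
                      float widthScale, float heightScale, Box previewBox)
{
    float ox = eyeY * widthScale;    /* swapped: preview x from video y */
    float oy = eyeX * heightScale;
    float side = faceW * 0.3f;
    return (Box){ ox - side / 2.0f + previewBox.x,
                  oy - side / 2.0f + previewBox.y, side, side };
}
```

Keeping this math in one place makes it easy to check against a known frame size before wiring it into the layer code.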

To be precise, my difficulties were only with the live camera preview. When saving the picture to the library with the method - (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features inCGImage:(CGImageRef)backgroundImage withOrientation:(UIDeviceOrientation)orientation frontFacing:(BOOL)isFrontFacing, it worked correctly using:

 if(ff.hasLeftEyePosition) 
    { 
     CGRect r = CGRectMake(ff.leftEyePosition.x-faceWidth*0.15, ff.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3); 
     CGContextDrawImage(bitmapContext, r, [rotatedHeartImage CGImage]); 

    } 

Please let me know if, and how, I should improve my answer.
