I've run into a problem. I'm working with two images: one downloaded from the internet and one captured with the iPhone camera. I use CIDetector to detect faces in both images. It works perfectly on the downloaded image, but it either fails or errors out on the other: it cannot detect a face in the photo taken with the iPhone.
I've checked many images, and the result is always the same.
Try this:
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];

// Pass the image orientation to the detector; the +1 shifts the 0-based
// UIImageOrientation into the 1-based range that CIDetectorImageOrientation expects
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];

NSArray *features = [detector featuresInImage:ciImage options:fOptions];
for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));

    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f , y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f , y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f , y = %f", f.mouthPosition.x, f.mouthPosition.y);
}
If you are always using the front camera in portrait, add this:
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];
For more information:
Sample: https://github.com/beetlebugorg/PictureMe
Face Detection issue using CIDetector
https://stackoverflow.com/questions/4332868/detect-face-in-iphone?rq=1
I tried the code above. It can detect faces in images captured with the iPhone, but it cannot detect faces in images downloaded from the internet. Here is my code:
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[facePicture CGImage]];
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:imageOptions];
And when a face is detected, I draw it with this code:
for (CIFaceFeature *feature in features) {
    // set the feature highlight colour
    CGRect faceRect = [feature bounds];
    CGContextSetRGBFillColor(context, 0.0f, 0.0f, 0.0f, 0.5f);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 2.0f * scale);
    CGContextAddRect(context, faceRect);
    CGContextDrawPath(context, kCGPathFillStroke);
    CGContextDrawImage(context, faceRect, [imgDraw CGImage]);
}
But the rectangle does not end up at the correct position; it is shifted to the right.
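One likely cause, though not confirmed in this thread, is a coordinate-system mismatch: CIDetector reports feature bounds in Core Image coordinates, whose origin is at the bottom-left, while a UIKit drawing context usually has its origin at the top-left. A minimal sketch of flipping the rect before drawing, where imageHeight is a hypothetical variable holding the height of the bitmap being drawn into:

// Flip the face rect from Core Image coordinates (origin at the bottom-left)
// into UIKit coordinates (origin at the top-left).
CGRect faceRect = [feature bounds];
CGRect flippedRect = CGRectMake(faceRect.origin.x,
                                imageHeight - faceRect.origin.y - faceRect.size.height,
                                faceRect.size.width,
                                faceRect.size.height);
CGContextAddRect(context, flippedRect);
CGContextDrawPath(context, kCGPathFillStroke);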
I had the same problem. You can redraw the image before running detection:
CGSize size = CGSizeMake(cameraCaptureImage.size.width, cameraCaptureImage.size.height);
UIGraphicsBeginImageContext(size);
[cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
cameraCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
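This works because drawInRect: renders the UIImage with its imageOrientation applied, so the resulting bitmap is already upright and the detector no longer needs an orientation hint. A variant of the same idea that also keeps the Retina scale (my sketch, not the answer's exact code):

// Redraw the camera image so its pixel data is upright; keeping the
// original scale avoids losing resolution on a Retina device.
CGSize size = cameraCaptureImage.size;
UIGraphicsBeginImageContextWithOptions(size, YES, cameraCaptureImage.scale);
[cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();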
I used that code, but the result is still the same..... :(
The big problem is that it cannot detect a face when I use an image that was just taken with the iPhone camera.
But if I set **NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];** then it cannot detect faces in images downloaded from the internet when they are in landscape......... :(
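Instead of hardcoding 6, one option is to derive the EXIF value from the image's own imageOrientation, so camera photos and downloaded images both get an appropriate hint. A sketch of the commonly used mapping (my suggestion, not code from this thread, reusing the facePicture, detector and ciImage names from above):

// Map the 0-based UIImageOrientation to the 1-8 EXIF orientation values
// that CIDetectorImageOrientation expects.
int exifOrientation;
switch (facePicture.imageOrientation) {
    case UIImageOrientationUp:            exifOrientation = 1; break;
    case UIImageOrientationDown:          exifOrientation = 3; break;
    case UIImageOrientationLeft:          exifOrientation = 8; break;
    case UIImageOrientationRight:         exifOrientation = 6; break;
    case UIImageOrientationUpMirrored:    exifOrientation = 2; break;
    case UIImageOrientationDownMirrored:  exifOrientation = 4; break;
    case UIImageOrientationLeftMirrored:  exifOrientation = 5; break;
    case UIImageOrientationRightMirrored: exifOrientation = 7; break;
    default:                              exifOrientation = 1; break;
}
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:exifOrientation]
                                                          forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:imageOptions];

A downloaded image typically has UIImageOrientationUp (mapped to 1), while a portrait photo from the iPhone camera typically has UIImageOrientationRight (mapped to 6), so the same code should cover both cases.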