
I have a photo-taking app. When the user presses the button to take a photo, I set a global NSString variable named self.hasUserTakenAPhoto equal to YES. This works 100% of the time with the rear camera. However, with the front camera it only works about 50% of the time and I have no idea why. 50% of the time, the global variable simply does not update fast enough.

Below are the important pieces of code, with a brief explanation of what they do.

Here is my viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    // Lay out the top/bottom halves and the two overlay views.
    self.topHalfView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height/2);
    self.takingPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.afterPhotoView.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
    self.bottomHalfView.frame = CGRectMake(0, 240, self.view.bounds.size.width, self.view.bounds.size.height/2);

    // Download the image for the top half from Parse.
    PFFile *imageFile = [self.message objectForKey:@"file"];
    NSURL *imageFileURL = [[NSURL alloc] initWithString:imageFile.url];
    imageFile = nil;

    self.imageData = [NSData dataWithContentsOfURL:imageFileURL];
    imageFileURL = nil;
    self.topHalfView.image = [UIImage imageWithData:self.imageData];

    //START CREATING THE SESSION
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetPhoto];

    self.inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error;
    self.deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:self.inputDevice error:&error];

    if ([self.session canAddInput:self.deviceInput]) {
        [self.session addInput:self.deviceInput];
    }

    // Show the live camera feed in the bottom half of the screen.
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    self.rootLayer = [[self view] layer];
    [self.rootLayer setMasksToBounds:YES];
    [_previewLayer setFrame:CGRectMake(0, 240, self.rootLayer.bounds.size.width, self.rootLayer.bounds.size.height/2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.rootLayer insertSublayer:_previewLayer atIndex:0];

    // Deliver raw BGRA frames to captureOutput:didOutputSampleBuffer:fromConnection:
    // on a private serial queue (i.e. NOT on the main thread).
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [self.session addOutput:self.videoOutput];

    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [self.videoOutput setSampleBufferDelegate:self queue:queue];

    [_session startRunning];
}

The important part of viewDidLoad starts where I left the //START CREATING THE SESSION comment: I basically create the session and then start running it. I have set this view controller as the AVCaptureVideoDataOutputSampleBufferDelegate, so as soon as the session starts running, the method below also starts being called.

// Called on the "MyQueue" background queue for every frame the camera produces.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Sample buffer data is being sent, but don't actually use it until
    // self.hasUserTakenAPhoto has been set to YES.
    NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);

    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        // Now that self.hasUserTakenAPhoto is equal to YES, grab the current sample
        // buffer and use it for the value of self.image aka the captured photo.
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}

This code receives the video output from the camera many times per second, but I don't actually do anything with it until self.hasUserTakenAPhoto equals YES. Once it has a string value of YES, I take the current sampleBuffer from the camera and place it in self.image, my captured photo.
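An aside on threading: this delegate method runs on the "MyQueue" background queue set up in viewDidLoad, while the flag is written from the main thread in the IBAction below. Here is a minimal sketch of a thread-safe version of the flag, assuming a hypothetical atomic BOOL property named userWantsPhoto in place of the NSString:

// Hypothetical property, declared atomic so reads on "MyQueue" always see
// the latest write from the main thread:
// @property (atomic, assign) BOOL userWantsPhoto;

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (self.userWantsPhoto) {
        self.userWantsPhoto = NO; // consume the request so only one frame is grabbed
        self.image = [self imageFromSampleBuffer:sampleBuffer];
    }
}

A BOOL would also avoid the null-versus-@"YES" ambiguity that shows up in the log output described further down.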

So, here is where the global variable self.hasUserTakenAPhoto actually gets set to YES.

Below is my IBAction code, which is called when the user presses the button to capture a photo. The very first thing it does is set self.hasUserTakenAPhoto = @"YES";

-(IBAction)stillImageCapture {

    self.hasUserTakenAPhoto = @"YES";

    [self.session stopRunning];

    // Mirror selfies; rotate rear-camera shots upright.
    if (self.inputDevice.position == AVCaptureDevicePositionFront) {
        self.image = [self selfieCorrection:self.image];
    } else {
        self.image = [self rotate:UIImageOrientationRight];
    }

    CGFloat widthToHeightRatio = _previewLayer.bounds.size.width/_previewLayer.bounds.size.height;

    CGRect cropRect;
    // Set the crop rect's smaller dimension to match the image's smaller dimension, and
    // scale its other dimension according to the width:height ratio.
    if (self.image.size.width < self.image.size.height) {
        cropRect.size.width = self.image.size.width;
        cropRect.size.height = cropRect.size.width/widthToHeightRatio;
    } else {
        cropRect.size.width = self.image.size.height * widthToHeightRatio;
        cropRect.size.height = self.image.size.height;
    }

    // Center the rect in the longer dimension
    if (cropRect.size.width < cropRect.size.height) {
        cropRect.origin.x = 0;
        cropRect.origin.y = (self.image.size.height - cropRect.size.height)/2.0;

        NSLog(@"Y Math: %f", (self.image.size.height - cropRect.size.height));
    } else {
        cropRect.origin.x = (self.image.size.width - cropRect.size.width)/2.0;
        cropRect.origin.y = 0;

        float cropValueDoubled = self.image.size.height - cropRect.size.height;
        float final = cropValueDoubled/2;

        // Ivar; despite the name it is used as the crop rect's y origin below.
        finalXValueForCrop = final;
    }

    CGRect cropRectFinal = CGRectMake(cropRect.origin.x, finalXValueForCrop, cropRect.size.width, cropRect.size.height);

    CGImageRef imageRef = CGImageCreateWithImageInRect([self.image CGImage], cropRectFinal);
    UIImage *image2 = [[UIImage alloc] initWithCGImage:imageRef];
    self.image = image2;
    CGImageRelease(imageRef);

    self.bottomHalfView.image = self.image;

    if ([self.hasUserTakenAPhoto isEqual:@"YES"]) {
        [self.takingPhotoView setHidden:YES];
        self.image = [self screenshot];
        [_afterPhotoView setHidden:NO];
    }
}

So basically: the viewDidLoad method runs, the session starts, and the session sends everything the camera sees to the captureOutput method. A lot happens when the IBAction code above runs, but all that really matters is the very first statement: as soon as the user presses the "capture photo" button, I set the string value of self.hasUserTakenAPhoto to YES and the session stops, and because self.hasUserTakenAPhoto now equals YES, the captureOutput method puts the last camera buffer into the self.image object for me to use.

I just can't figure this out because, like I said, it works 100% of the time with the rear camera. With the front camera, however, it only works 50% of the time.

I have narrowed the problem down to the fact that self.hasUserTakenAPhoto does not update to YES fast enough when using the front camera. I know this because, if you look at the second block of code I posted, it contains the statement NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto);.

When this works correctly, and the user has just pressed the button to capture a photo (which also stops the session), the very last time NSLog(@"Has the user taken a photo?: %@", self.hasUserTakenAPhoto); runs, it prints the correct value of YES.

However, when it isn't working correctly and isn't updating fast enough, the last time it runs it still prints to the log with a value of null.
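One way to make the timing visible (a purely diagnostic sketch, not code from my app) is to log which thread touches the flag on each side:

// In stillImageCapture, right after setting the flag:
NSLog(@"flag written on %@ thread", [NSThread isMainThread] ? @"main" : @"background");

// In captureOutput:didOutputSampleBuffer:fromConnection:, next to the existing NSLog:
NSLog(@"flag read on %@ thread", [NSThread isMainThread] ? @"main" : @"background");

If the write logs "main" and the read logs "background", the flag is crossing threads without any synchronization, which would explain an intermittent stale read.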

Any ideas on why self.hasUserTakenAPhoto isn't updating fast enough 50% of the time when using the front camera? Even if we can't figure that out, it doesn't matter. I just need help coming up with an alternative solution to this.
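For instance, one alternative I can picture (a sketch, untested): don't stop the session from the button handler at all; let the delegate grab the frame first and then stop the session, so the last buffer can never be missed. The finishTakingPhoto name is hypothetical; it would hold the cropping/UI code currently in stillImageCapture:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if ([self.hasUserTakenAPhoto isEqualToString:@"YES"]) {
        self.hasUserTakenAPhoto = @"NO"; // consume the request so only one frame is grabbed
        self.image = [self imageFromSampleBuffer:sampleBuffer];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.session stopRunning]; // stop only after a frame was actually captured
            [self finishTakingPhoto];   // hypothetical: the cropping/UI work from stillImageCapture
        });
    }
}

stillImageCapture would then do nothing but set the flag, and the rest of its current body would run only once the frame is actually in hand.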

Thanks for the help.

Answer


I think it's a scheduling problem. At the return point of your methods

– captureOutput:didOutputSampleBuffer:fromConnection: 
– captureOutput:didDropSampleBuffer:fromConnection: 

add CFRunLoopRun().
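In sketch form, as I understand the suggestion (untested), the call sits just before the callback returns:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // ... existing body unchanged ...
    CFRunLoopRun(); // spin the queue's run loop before returning
}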


Thanks for the help. What would my callback be? – user3344977


I just updated the answer to list the callback methods. –


OK, so if I understand correctly, for the second block of code I shared (the captureOutput code), I would put CFRunLoopRun(); between the last 2 closing curly braces, is that correct? – user3344977