
I am currently working on a project that uses AVCaptureVideoDataOutputSampleBufferDelegate for blink detection, and I am seeing a delay in the execution of a dispatch_async block called from the AVCaptureVideoDataOutputSampleBufferDelegate method.

Here is what I do in the delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    //Initialisation of buffer and UIImage and CIDetector, etc.

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        if (features.count > 0) {
            CIFaceFeature *feature = [features objectAtIndex:0];
            if ([feature leftEyeClosed] && [feature rightEyeClosed]) {
                flag = TRUE;
            } else {
                if (flag) {
                    blinkcount++;
                    //Update UILabel containing blink count. The count variable is incremented from here.
                }
                flag = FALSE;
            }
        }
    });
}

The method shown above is called continuously: it processes the video feed from the camera and then executes the dispatch_async block. The flag boolean keeps track of whether the eyes were closed or open in the previous frame, so that a blink can be detected. A fair number of frames are dropped, but blinks are still detected correctly, so I am guessing the processing frame rate is sufficient.

My problem is that the UILabel is only updated after a significant delay (~1 second) following a blink, which makes the app feel sluggish and unintuitive. I tried writing the UI update code without the dispatch, but that does not work. Is there anything I can do so that the UILabel updates immediately after a blink?
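For context, the video data output is wired up in the usual way. The sketch below is a simplification (the queue name and the _session variable are placeholders, not my exact code), but it shows why the delegate is called off the main thread and why frames can be dropped:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
//Drop frames that arrive while an earlier frame is still being processed,
//so the delegate never falls behind the live feed.
videoOutput.alwaysDiscardsLateVideoFrames = YES;

//The delegate is called on this serial background queue, not on the main queue,
//which is why UI updates have to be dispatched back to the main queue.
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];

if ([_session canAddOutput:videoOutput]) {
    [_session addOutput:videoOutput];
}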

Answer


It is hard to know exactly what is going on without more code, but above the dispatched code you say:

//Initialisation of buffer and UIImage and CIDetector, etc. 

If you are really initialising the detector every time, that is likely suboptimal; make it long-lived. I don't know for certain that initialising a CIDetector is expensive, but it is a place to start. Also, if you are really going through a UIImage here, that is not ideal either. Don't go via UIImage; take the more direct route:

CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer); 
CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib]; 
NSArray* features = [longLivedDetector featuresInImage: ciImage]; 
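To be clear, "long-lived" just means the detector (and a CIContext, if you use one) is created once and kept around, for example in an ivar or property, instead of being rebuilt inside the callback. A minimal sketch follows; the class name is illustrative, and the code further down simply assumes an ivar called _longLivedDetector:

@interface BlinkDetectionViewController () <AVCaptureVideoDataOutputSampleBufferDelegate>
//Created once and reused for every frame.
@property (nonatomic, strong) CIDetector *longLivedDetector;
@property (nonatomic, strong) CIContext *ciContext;
@end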

Finally, do the feature detection on the background thread, and only marshal the UILabel update back to the main thread. Like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { 
    if (!_longLivedDetector) { 
     _longLivedDetector = [CIDetector detectorOfType:CIDetectorTypeFace context: ciContext options: whatever]; 
    } 

    CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib]; 
    NSArray* features = [_longLivedDetector featuresInImage: ciImage]; 
    if (!features.count) 
     return; 

    CIFaceFeature *feature = [features objectAtIndex:0]; 
    const BOOL leftAndRightClosed = [feature leftEyeClosed] && [feature rightEyeClosed]; 

    // Only trivial work is left to do on the main thread. 
    dispatch_async(dispatch_get_main_queue(), ^(void){ 
     if (leftAndRightClosed) { 
      flag = TRUE; 
     } else { 
      if (flag) { 
       blinkcount++; 
       //Update UILabel containing blink count. The count variable is incremented from here. 
      } 
      flag = FALSE; 
     } 
    }); 
} 

Lastly, you should bear in mind that facial feature detection is a non-trivial signal-processing task that requires significant computation (i.e. time) to complete. Without faster hardware there is no way to make it go much faster, so I would expect some latency to remain.
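That said, the detector's creation options do let you trade some accuracy for speed. As a sketch of what the "options: whatever" placeholder above could contain (this is an assumption on my part, not something taken from your setup):

//Assumed creation options: prefer speed over accuracy and track the face across frames.
NSDictionary *detectorOptions = @{CIDetectorAccuracy : CIDetectorAccuracyLow,
                                  CIDetectorTracking : @YES};
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:detectorOptions];

//Note: leftEyeClosed/rightEyeClosed are only reported when eye-blink detection
//is requested at feature-detection time. (ciImage as in the code above.)
NSArray *features = [detector featuresInImage:ciImage
                                      options:@{CIDetectorEyeBlink : @YES}];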