2012-02-11 60 views

Skipping frames during video processing on iOS

I'm trying to process a local video file and simply do some analysis of the pixel data; there is no output. My current code iterates over every frame of the video, but I'd actually like to skip ~15 frames at a time to speed things up. Is there a way to skip over frames without decoding them?

In ffmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.

Thanks in advance! Here's my current code:

- (void) readMovie:(NSURL *)url 
{ 

    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES]; 

    startTime = [NSDate date]; 

    AVURLAsset * asset = [AVURLAsset URLAssetWithURL:url options:nil]; 

    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: 
    ^{ 
     dispatch_async(dispatch_get_main_queue(), 
         ^{ 



          AVAssetTrack * videoTrack = nil; 
          NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo]; 
          if ([tracks count] == 1) 
          { 
           videoTrack = [tracks objectAtIndex:0]; 

           videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration); 

           NSError * error = nil; 

           // _movieReader is a member variable 
           _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error]; 
           if (error) 
            NSLog(@"%@", error.localizedDescription);  

           NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
           NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_420YpCbCr8Planar]; 

           NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 

           AVAssetReaderTrackOutput* output = [AVAssetReaderTrackOutput 
                 assetReaderTrackOutputWithTrack:videoTrack 
                 outputSettings:videoSettings]; 
           output.alwaysCopiesSampleData = NO; 

           [_movieReader addOutput:output]; 

           if ([_movieReader startReading]) 
           { 
            NSLog(@"reading started"); 

            [self readNextMovieFrame]; 
           } 
           else 
           { 
            NSLog(@"reading can't be started"); 
           } 
          } 
         }); 
    }]; 
} 


- (void) readNextMovieFrame 
{ 
    //NSLog(@"readNextMovieFrame called"); 
    if (_movieReader.status == AVAssetReaderStatusReading) 
    { 
     //NSLog(@"status is reading"); 

     AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0]; 
     CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer]; 
     if (sampleBuffer) 
     { // I'm guessing this is the expensive part that we can skip if we want to skip frames 
      CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

      // Lock the image buffer 
      CVPixelBufferLockBaseAddress(imageBuffer,0); 

      // Get information of the image 
      uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
      size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
      size_t width = CVPixelBufferGetWidth(imageBuffer); 
      size_t height = CVPixelBufferGetHeight(imageBuffer); 

      // do my pixel analysis 

      // Unlock the image buffer 
      CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
      CFRelease(sampleBuffer); 


      [self readNextMovieFrame]; 
     } 
     else 
     { 
      NSLog(@"could not copy next sample buffer. status is %ld", (long)_movieReader.status); 

      NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow]; 

      float scanMultiplier = videoDuration/scanDuration; 

      NSString* info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier]; 

      [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES]; 
     } 


    } 
    else 
    { 
     NSLog(@"status is now %ld", (long)_movieReader.status); 


    } 

} 


- (void) updateInfo: (id)message 
{ 
    NSString* info = [NSString stringWithFormat:@"%@", message]; 

    [infoTextView setText:info]; 
} 
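For reference, the "cheap skip" idea hinted at in the code comment above can be sketched as follows. This is a minimal sketch, not the original poster's code: the `frameIndex` member variable and the `kFrameSkip` constant are assumptions. Note that `copyNextSampleBuffer` still decodes every frame, so this only avoids the per-frame analysis cost, not the decode cost, which is exactly the limitation the question is asking about.

```objectivec
// Sketch: still pull every sample buffer (AVAssetReader decodes each one),
// but only lock and analyze the pixels of every 15th frame.
// `frameIndex` is an assumed NSUInteger member variable, not in the original code.
static const NSUInteger kFrameSkip = 15;

- (void) readNextMovieFrame
{
    if (_movieReader.status != AVAssetReaderStatusReading)
        return;

    AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0];
    CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
    if (!sampleBuffer)
        return;

    if (frameIndex++ % kFrameSkip == 0)
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // ... pixel analysis on this frame only ...
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }

    CFRelease(sampleBuffer);
    [self readNextMovieFrame];
}
```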

Did you ever find a solution? I'm trying to do the same thing. – GuruMeditation 2012-08-27 12:31:17


Nope - never did :-( – 2012-08-28 18:51:05

Answer


If you want less exact frame handling (rather than frame-by-frame), you should use AVAssetImageGenerator.

This class returns a frame for the specific time you ask for.

Specifically, build an array of times at 0.5-second intervals (iPhone movies run at roughly 29.3 fps, so if you want every 15th frame that's about one frame every 0.5 seconds), and let the image generator return your frames.

For each frame you can also compare the time you asked for with the actual time of the returned frame. The default tolerance around the requested time is wide, but you can change it by setting the properties:

requestedTimeToleranceBefore and requestedTimeToleranceAfter

I hope that answers your question. Good luck.
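The approach above can be sketched like this. It is a minimal sketch, not code from the answer: `asset` is assumed to be the AVURLAsset from the question, and `handleImage:atTime:` is a hypothetical analysis hook standing in for the poster's pixel work.

```objectivec
// Sketch: grab roughly every 15th frame (~0.5 s apart) with AVAssetImageGenerator.
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];

// Tighten the (wide) default tolerance so frames land near the requested times.
generator.requestedTimeToleranceBefore = CMTimeMake(1, 4); // 0.25 s
generator.requestedTimeToleranceAfter  = CMTimeMake(1, 4);

// Build requested times at 0.5-second intervals across the whole duration.
NSMutableArray *times = [NSMutableArray array];
Float64 duration = CMTimeGetSeconds(asset.duration);
for (Float64 t = 0; t < duration; t += 0.5)
{
    [times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(t, 600)]];
}

[generator generateCGImagesAsynchronouslyForTimes:times
    completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                        AVAssetImageGeneratorResult result, NSError *error)
    {
        if (result == AVAssetImageGeneratorSucceeded)
        {
            // requestedTime vs. actualTime shows how far off each frame landed.
            [self handleImage:image atTime:actualTime]; // hypothetical analysis hook
        }
    }];
```

Because the generator decodes only the frames you ask for, this avoids touching the intermediate ~14 frames at all, at the cost of exact frame positioning.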


Thanks Or.Ron. I tried this before, but it still seemed relatively slow. Is it my only option? – 2012-02-13 02:36:50