4

I need to seek around inside an audio file and pull out chunks. I am trying to use AVAssetReader. The problem I see is that if I read the same stretch of audio starting from different offsets, the averages I compute for each chunk come out different.

For example, if I read the sound from 0.1 s to 0.5 s, the chunks I get back are different from the ones I get when I read from 0.2 s to 0.5 s.

Here is a code sample that demonstrates it:
#import <AudioToolbox/AudioToolbox.h> 
#import <AVFoundation/AVFoundation.h> 
#import <MediaPlayer/MediaPlayer.h> 

+ (void) test 
{ 
    NSURL* path = [[NSBundle mainBundle] URLForResource:@"music" withExtension:@"mp3"]; 

    [self test:path sample:1 showChunks:5]; 
    [self test:path sample:2 showChunks:4]; 
    [self test:path sample:3 showChunks:3]; 
} 

+(void) test:(NSURL*) url sample:(NSInteger) sample showChunks:(NSInteger) chunkCount 
{ 
#define CHUNK 800 
#define SAMPLE_RATE 8000 
    AVURLAsset* asset = [AVURLAsset URLAssetWithURL:url options:nil]; 
    NSError *assetError = nil; 
    AVAssetReader* assetReader = [AVAssetReader assetReaderWithAsset:asset error:&assetError]; 

    CMTime startTime = CMTimeMake(sample*CHUNK, SAMPLE_RATE); 
    CMTimeShow(startTime); 

    CMTimeRange timeRange = CMTimeRangeMake(startTime, kCMTimePositiveInfinity); 
    assetReader.timeRange = timeRange; 

    NSDictionary* dict = nil; 
    dict = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInteger:SAMPLE_RATE], AVSampleRateKey, [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, nil]; 


    AVAssetReaderAudioMixOutput* assetReaderOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset.tracks audioSettings: dict]; 
    if (! [assetReader canAddOutput: assetReaderOutput]) { 
     NSLog (@"error: Cannot add output reader"); 
     assetReader = nil; 
     return; 
    } 

    [assetReader addOutput: assetReaderOutput]; 

    [assetReader startReading]; 

    CMSampleBufferRef nextBuffer; 

    if (!(nextBuffer = [assetReaderOutput copyNextSampleBuffer])) 
    { 
     return; 
    } 
    CMSampleBufferGetTotalSampleSize (nextBuffer); 
    // Extract bytes from buffer 
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(nextBuffer); 

    NSInteger len = CMBlockBufferGetDataLength(dataBuffer); 
    if (len < chunkCount*CHUNK) 
    { 
     printf("CHUNK is to big"); 
     return; 
    } 
    UInt8* buf = malloc(len); 
    CMBlockBufferCopyDataBytes(dataBuffer, 0, len, buf); 

    for (int ii = 0; ii < chunkCount*CHUNK; ii+=CHUNK) 
    { 
     CGFloat av = 0; 
     for (int jj = 0; jj < CHUNK; jj++) 
     { 
      av += (CGFloat) buf[jj+ii]; 
     } 
     printf("Time: %f av: %f\n", (CGFloat)(ii+CHUNK*sample)/(CGFloat)SAMPLE_RATE, av/(CGFloat)CHUNK); 
    } 
    printf("\n"); 

    free(buf); 
    CFRelease(nextBuffer); // copyNextSampleBuffer returns a retained buffer 


} 

Here is the output:

{800/8000 = 0.100} 
Time: 0.100000 av: 149.013748 
Time: 0.200000 av: 100.323753 
Time: 0.300000 av: 146.991257 
Time: 0.400000 av: 106.763748 
Time: 0.500000 av: 145.020004 

{1600/8000 = 0.200} 
Time: 0.200000 av: 145.011246 
Time: 0.300000 av: 110.718750 
Time: 0.400000 av: 154.543747 
Time: 0.500000 av: 112.025002 

{2400/8000 = 0.300} 
Time: 0.300000 av: 149.278748 
Time: 0.400000 av: 104.477501 
Time: 0.500000 av: 158.162506 

Please help.

Answers

6

It seems to me that your problem is the assumption that the following code seeks exactly to the start time you ask for:

CMTimeRange timeRange = CMTimeRangeMake(startTime, kCMTimePositiveInfinity); 
assetReader.timeRange = timeRange; 

To test this, you can use a call to

CMSampleBufferGetOutputPresentationTimeStamp(nextBuffer);

From that you will be able to see the exact time (in seconds) at which the returned buffer actually starts.
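
For example, something along these lines (a minimal sketch that reuses assetReaderOutput and startTime from the question's test: method) would print the requested versus actual start time of the first buffer returned after the seek:

    CMSampleBufferRef firstBuffer = [assetReaderOutput copyNextSampleBuffer]; 
    if (firstBuffer) 
    { 
        // Compare the time you asked for with the time the buffer really begins at. 
        CMTime actualStart = CMSampleBufferGetOutputPresentationTimeStamp(firstBuffer); 
        printf("requested: %f actual: %f\n", 
               CMTimeGetSeconds(startTime), 
               CMTimeGetSeconds(actualStart)); 
        CFRelease(firstBuffer); 
    } 

If the two values differ, the averages will naturally differ as well, because each run is averaging a different stretch of samples.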

0

In my own experience, seeking with

assetReader.timeRange = CMTimeRangeMake(CMTimeMake(sample, sample_rate), kCMTimePositiveInfinity) 

works perfectly. The seek itself has no precision problem.

What you are probably running into is a fade-in issue: AVAssetReader seems to fade in the first 1024 samples (maybe a bit more). I fixed it by starting the read 1024 samples before the position I actually want, and then skipping those 1024 samples.
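
A rough sketch of that workaround, expressed in terms of the question's test: method (the 1024-sample pad and the bytes-per-sample figure are assumptions to check against the output format you actually request):

    // Assumed pad: AVAssetReader appears to fade in roughly this many samples. 
    #define FADE_IN_SAMPLES 1024 

    // Start reading FADE_IN_SAMPLES before the position we actually want. 
    // (Assumes the requested start is at least that far into the track.) 
    CMTime paddedStart = CMTimeMake(sample*CHUNK - FADE_IN_SAMPLES, SAMPLE_RATE); 
    assetReader.timeRange = CMTimeRangeMake(paddedStart, kCMTimePositiveInfinity); 

    // Later, after CMBlockBufferCopyDataBytes, skip the padded samples before 
    // averaging. bytesPerSample is 2 for 16-bit linear PCM, 1 for 8-bit. 
    NSInteger bytesPerSample = 2; 
    NSInteger bytesToSkip = FADE_IN_SAMPLES * bytesPerSample; 
    UInt8* samples = buf + bytesToSkip; // average from here instead of from buf 

That way the faded-in region is read but never included in the averages.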