
How to correctly read decoded PCM samples on iOS using AVAssetReader - currently decoding incorrectly

I am currently working on an application as part of my Bachelor's degree in Computer Science. The application will correlate data from the iPhone's hardware (accelerometer, GPS) with the music that is being played.

The project is still in its early stages; I have only been working on it for two months.

The part I am working on right now, and where I need help, is reading PCM samples from a song in the iTunes library and playing them back with an audio unit. The implementation I am trying to get working does the following: pick a random song from iTunes, read samples from it as they are needed, and store them in a buffer - let's call it sampleBuffer. Later, in the consumer model, the audio unit (which has a mixer and a Remote I/O output) has a callback in which I simply copy the required number of samples from sampleBuffer into the buffer specified by the callback. What I then hear through the speakers is not at all what I expect; I can recognize that it is playing the song, but it seems to be decoded incorrectly and there is a lot of noise! I have attached an image showing the first ~half second (24,576 samples @ 44.1 kHz), and it does not look like normal output.

Before I get to the listing: I have checked that the file is not corrupted, and likewise I have written test cases for the buffer (so I know the buffer does not alter the samples). Although this may not be the best approach (some would argue for the Audio Queue route), I want to perform various manipulations on the samples, change songs before they finish, rearrange which songs are played, and so on. It is also possible that something is set up incorrectly in the audio unit; however, the graph that displays the samples (and shows that they are decoded incorrectly) is taken straight from the buffer, so for now I am only trying to resolve why the disk read and decode are not working correctly. Right now I just want to get playback working.

I can't post images since I'm new to StackOverflow, so here is a link to the image: http://i.stack.imgur.com/RHjlv.jpg
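Since the consumer-side callback itself is not shown, here is a minimal sketch of the model just described, assuming the same C++-style sampleBuffer object used in the listings below; the SampleBuffer class and its getSamples method are hypothetical:

// Minimal sketch of the consumer-side render callback described above.
// This is NOT the question's actual callback (it is not shown); the
// SampleBuffer class and its getSamples() method are hypothetical.
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    SampleBuffer *sampleBuffer = (SampleBuffer *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    // With 16-bit interleaved stereo, each frame holds two SInt16 values.
    UInt32 samplesNeeded = inNumberFrames * 2;
    UInt32 copied = sampleBuffer->getSamples(out, samplesNeeded); // hypothetical API

    // Zero-fill whatever the producer has not supplied yet, so underruns
    // produce silence instead of garbage.
    if (copied < samplesNeeded) {
        memset(out + copied, 0, (samplesNeeded - copied) * sizeof(SInt16));
    }
    return noErr;
}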

Listing:

These are the settings I use for the AVAssetReaderAudioMixOutput:

// Set the read settings
audioReadSettings = [[NSMutableDictionary alloc] init];
[audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                     forKey:AVFormatIDKey];
[audioReadSettings setValue:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];
[audioReadSettings setValue:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsNonInterleaved];
[audioReadSettings setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
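These settings request 16-bit, little-endian, interleaved integer PCM at 44.1 kHz. For reference, the stream format the audio unit's input bus would have to match is roughly the following ASBD (a sketch; the channel count depends on the source track). If the unit is configured for a different layout, for example the 8.24 fixed-point AudioUnitSampleType that iOS audio units of that era default to, the samples get reinterpreted and sound like noise:

// Sketch: the AudioStreamBasicDescription equivalent to the reader
// settings above, assuming a stereo source. The ASBD set on the audio
// unit's input bus must describe exactly this layout.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel   = 16;
asbd.mChannelsPerFrame = 2;                  // stereo
asbd.mBytesPerFrame    = 2 * sizeof(SInt16); // interleaved: L + R per frame
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = asbd.mBytesPerFrame;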

With audioReadSettings configured as above, the following listing is a method that receives an NSString with a song's persistant_id:

-(BOOL)setNextSongID:(NSString*)persistand_id { 

assert(persistand_id != nil); 

MPMediaItem *song = [self getMediaItemForPersistantID:persistand_id]; 
NSURL *assetUrl = [song valueForProperty:MPMediaItemPropertyAssetURL]; 
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetUrl 
              options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] 
                       forKey:AVURLAssetPreferPreciseDurationAndTimingKey]]; 


NSError *assetError = nil; 

assetReader = [[AVAssetReader assetReaderWithAsset:songAsset error:&assetError] retain]; 

if (!assetReader) {
    // assetError is only meaningful when the call fails, so test the
    // returned object rather than the error pointer.
    NSLog(@"error: %@", assetError);
    return NO;
}

CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, songAsset.duration); 
[assetReader setTimeRange:timeRange]; 

track = [[songAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]; 

assetReaderOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:[NSArray arrayWithObject:track] 
                      audioSettings:audioReadSettings]; 

if (![assetReader canAddOutput:assetReaderOutput]) { 
    NSLog(@"cant add reader output... die!"); 
    return NO; 
} 

[assetReader addOutput:assetReaderOutput]; 
[assetReader startReading]; 

// just getting some basic information about the track to print 
NSArray *formatDesc = ((AVAssetTrack*)[[assetReaderOutput audioTracks] objectAtIndex:0]).formatDescriptions; 
for (unsigned int i = 0; i < [formatDesc count]; ++i) { 
    CMAudioFormatDescriptionRef item = (CMAudioFormatDescriptionRef)[formatDesc objectAtIndex:i]; 
    const CAStreamBasicDescription *asDesc = (CAStreamBasicDescription*)CMAudioFormatDescriptionGetStreamBasicDescription(item); 
    if (asDesc) { 
     // get data 
     numChannels = asDesc->mChannelsPerFrame; 
     sampleRate = asDesc->mSampleRate; 
     asDesc->Print(); 
    } 
} 
[self copyEnoughSamplesToBufferForLength:24000]; 
return YES; 
} 
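The helper getMediaItemForPersistantID: is not shown in the question; a plausible sketch using MPMediaQuery, assuming the string holds the decimal form of MPMediaItemPropertyPersistentID, might be:

// Sketch of the unshown helper: look up a media item by persistent ID.
// Assumes persistand_id is the decimal string form of the numeric ID.
- (MPMediaItem *)getMediaItemForPersistantID:(NSString *)persistand_id {
    NSNumber *persistentID = [NSNumber numberWithUnsignedLongLong:
                                 strtoull([persistand_id UTF8String], NULL, 10)];
    MPMediaPropertyPredicate *predicate =
        [MPMediaPropertyPredicate predicateWithValue:persistentID
                                         forProperty:MPMediaItemPropertyPersistentID];
    MPMediaQuery *query = [[[MPMediaQuery alloc] init] autorelease];
    [query addFilterPredicate:predicate];
    NSArray *items = [query items];
    return ([items count] > 0) ? [items objectAtIndex:0] : nil;
}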

The method -(void)copyEnoughSamplesToBufferForLength: is presented below:

-(void)copyEnoughSamplesToBufferForLength:(UInt32)samples_count { 

[w_lock lock]; 
int stillToCopy = 0; 
if (sampleBuffer->numSamples() < samples_count) { 
    stillToCopy = samples_count; 
} 

NSAutoreleasePool *apool = [[NSAutoreleasePool alloc] init]; 

CMSampleBufferRef sampleBufferRef; 
// NOTE: 8192 samples may not be enough to hold one decoded buffer of
// interleaved stereo; see the note after this listing.
SInt16 *dataBuffer = (SInt16*)malloc(8192 * sizeof(SInt16)); 

while (stillToCopy > 0) { 

    sampleBufferRef = [assetReaderOutput copyNextSampleBuffer]; 
    if (!sampleBufferRef) { 
        // End of song or no more samples. Break rather than return, so
        // the lock, pool, and malloc'd buffer below are still released.
        break; 
    } 

    CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBufferRef); 
    AudioBufferList audioBufferList; 
    CMBlockBufferRef blockBuffer; 

    // This hands back a retained block buffer, which must be released.
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBufferRef, 
                  NULL, 
                  &audioBufferList, 
                  sizeof(audioBufferList), 
                  NULL, 
                  NULL, 
                  0, 
                  &blockBuffer); 

    int j = 0; 

    for (int bufferCount=0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) { 
        SInt16* samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData; 
        for (int i=0; i < numSamplesInBuffer; i++) { 
            dataBuffer[j] = samples[i]; 
            j++; 
        } 
    } 

    CFRelease(blockBuffer); 
    CFRelease(sampleBufferRef); 
    sampleBuffer->putSamples(dataBuffer, j); 
    stillToCopy = stillToCopy - (int)numSamplesInBuffer; 
} 

free(dataBuffer); 
[w_lock unlock]; 
[apool release]; 
} 
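One thing worth noting about the copy loop above (hedged, since the root cause is not confirmed): CMSampleBufferGetNumSamples returns the number of frames, and with the interleaved stereo output requested earlier each frame contains one SInt16 per channel, so copying only numSamplesInBuffer values takes half of a stereo buffer. A sketch that sizes the copy from the buffer's own byte count instead (dataBuffer would also need to be allocated accordingly):

// Sketch: size the copy from mDataByteSize rather than the frame count,
// so interleaved stereo (two SInt16 values per frame) is copied in full.
for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
    AudioBuffer buf = audioBufferList.mBuffers[bufferCount];
    SInt16 *samples = (SInt16 *)buf.mData;
    UInt32 valueCount = buf.mDataByteSize / sizeof(SInt16); // frames * channels
    for (UInt32 i = 0; i < valueCount; i++) {
        dataBuffer[j++] = samples[i];
    }
}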

Now sampleBuffer ends up holding incorrectly decoded samples. Can anyone help me understand why? This happens with different file types in my iTunes library (mp3, aac, wav, etc.). Any help would be greatly appreciated; additionally, if you need any other listing from my code, or perhaps a recording of what the output sounds like, I will attach it on request. I have spent the past week trying to debug this and have found no help online - everyone seems to be doing it my way, and yet only I seem to have this problem.

Thanks for your help!

Peter

Answers


I am also currently working on a project that involves extracting audio samples from the iTunes library into an AudioUnit.

The AudioUnit render callback is included below for your reference. The input format is set to SInt16StereoStreamFormat.

I used Michael Tyson's circular buffer implementation - TPCircularBuffer - as the buffer storage. Very easy to use and understand! Thanks Michael!

- (void) loadBuffer:(NSURL *)assetURL_ 
{ 
    if (nil != self.iPodAssetReader) { 
     [iTunesOperationQueue cancelAllOperations]; 

     [self cleanUpBuffer]; 
    } 

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys: 
            [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey, 
            [NSNumber numberWithFloat:44100.0], AVSampleRateKey, 
            [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey, 
            [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved, 
            [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey, 
            [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey, 
            nil]; 

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL_ options:nil]; 
    if (asset == nil) { 
     NSLog(@"asset is not defined!"); 
     return; 
    } 

    NSLog(@"Total Asset Duration: %f", CMTimeGetSeconds(asset.duration)); 

    NSError *assetError = nil; 
    self.iPodAssetReader = [AVAssetReader assetReaderWithAsset:asset error:&assetError]; 
    if (assetError) { 
     NSLog (@"error: %@", assetError); 
     return; 
    } 

    AVAssetReaderOutput *readerOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:asset.tracks audioSettings:outputSettings]; 

    if (! [iPodAssetReader canAddOutput: readerOutput]) { 
     NSLog (@"can't add reader output... die!"); 
     return; 
    } 

    // add output reader to reader 
    [iPodAssetReader addOutput: readerOutput]; 

    if (! [iPodAssetReader startReading]) { 
     NSLog(@"Unable to start reading!"); 
     return; 
    } 

    // Init circular buffer 
    TPCircularBufferInit(&playbackState.circularBuffer, kTotalBufferSize); 

    __block NSBlockOperation * feediPodBufferOperation = [NSBlockOperation blockOperationWithBlock:^{ 
     while (![feediPodBufferOperation isCancelled] && iPodAssetReader.status != AVAssetReaderStatusCompleted) { 
      if (iPodAssetReader.status == AVAssetReaderStatusReading) { 
       // Check if the available buffer space is enough to hold at least one cycle of the sample data 
       if (kTotalBufferSize - playbackState.circularBuffer.fillCount >= 32768) { 
        CMSampleBufferRef nextBuffer = [readerOutput copyNextSampleBuffer]; 

        if (nextBuffer) { 
         AudioBufferList abl; 
         CMBlockBufferRef blockBuffer; 
         CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer); 
         UInt64 size = CMSampleBufferGetTotalSampleSize(nextBuffer); 

         int bytesCopied = TPCircularBufferProduceBytes(&playbackState.circularBuffer, abl.mBuffers[0].mData, size); 

         if (!playbackState.bufferIsReady && bytesCopied > 0) { 
          playbackState.bufferIsReady = YES; 
         } 

         CFRelease(nextBuffer); 
         CFRelease(blockBuffer); 
        } 
        else { 
         break; 
        } 
       } 
      } 
     } 
     NSLog(@"iPod Buffer Reading Finished"); 
    }]; 

    [iTunesOperationQueue addOperation:feediPodBufferOperation]; 
} 
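The buffer constants and playbackState are not declared in the listing. Based on infiniteloop's follow-up comment further down, a plausible set of declarations is as follows (the struct layout is a guess from the fields the code touches, and the parentheses in kTotalBufferSize are added for macro safety):

// Assumed declarations for the listing above. The #defines come from the
// author's comment below; the struct is a guess based on usage.
#define kUnitSize        sizeof(AudioSampleType)
#define kBufferUnit      655360
#define kTotalBufferSize (kBufferUnit * kUnitSize)

typedef struct {
    TPCircularBuffer circularBuffer; // shared by the reader operation and the render callback
    BOOL             bufferIsReady;  // set once the first bytes have been produced
} PlaybackState;

static PlaybackState playbackState;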

static OSStatus ipodRenderCallback (
    void                       *inRefCon,       // A pointer to a struct containing the complete audio data
                                                //   to play, as well as state information such as the
                                                //   first sample to play on this invocation of the callback.
    AudioUnitRenderActionFlags *ioActionFlags,  // Unused here. When generating audio, use ioActionFlags to
                                                //   indicate silence between sounds; for silence, also
                                                //   memset the ioData buffers to 0.
    const AudioTimeStamp       *inTimeStamp,    // Unused here.
    UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                                //   frames of audio data to play.
    UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                                //   pointed to by the ioData parameter.
    AudioBufferList            *ioData          // On output, the audio data to play. The callback's primary
                                                //   responsibility is to fill the buffer(s) in the
                                                //   AudioBufferList.
)
{ 
    Audio* audioObject = (Audio*)inRefCon; 

    AudioSampleType *outSample   = (AudioSampleType *)ioData->mBuffers[0].mData; 

    // Zero-out all the output samples first 
    memset(outSample, 0, inNumberFrames * kUnitSize * 2); 

    if (audioObject.playingiPod && audioObject.bufferIsReady) { 
     // Pull audio from circular buffer 
     int32_t availableBytes; 

     AudioSampleType *bufferTail  = TPCircularBufferTail(&audioObject.circularBuffer, &availableBytes); 

     memcpy(outSample, bufferTail, MIN(availableBytes, inNumberFrames * kUnitSize * 2)); 
     TPCircularBufferConsume(&audioObject.circularBuffer, MIN(availableBytes, inNumberFrames * kUnitSize * 2)); 
     audioObject.currentSampleNum += MIN(availableBytes/(kUnitSize * 2), inNumberFrames); 

     if (availableBytes <= inNumberFrames * kUnitSize * 2) { 
      // Buffer is running out or playback is finished 
      audioObject.bufferIsReady = NO; 
      audioObject.playingiPod = NO; 
      audioObject.currentSampleNum = 0; 

      if ([[audioObject delegate] respondsToSelector:@selector(playbackDidFinish)]) { 
       [[audioObject delegate] performSelector:@selector(playbackDidFinish)]; 
      } 
     } 
    } 

    return noErr; 
} 

- (void) setupSInt16StereoStreamFormat { 

    // AudioSampleType is the canonical type for I/O audio sample data on
    // iOS (SInt16). This obtains the byte size of the type for use in
    // filling in the ASBD.
    size_t bytesPerSample = sizeof (AudioSampleType); 

    // Fill the application audio format struct's fields to define a linear PCM, 
    //  stereo, interleaved stream at the hardware sample rate. 
    SInt16StereoStreamFormat.mFormatID   = kAudioFormatLinearPCM; 
    SInt16StereoStreamFormat.mFormatFlags  = kAudioFormatFlagsCanonical; 
    SInt16StereoStreamFormat.mBytesPerPacket = 2 * bytesPerSample; // *** kAudioFormatFlagsCanonical <- implicit interleaved data => (left sample + right sample) per Packet 
    SInt16StereoStreamFormat.mFramesPerPacket = 1; 
    SInt16StereoStreamFormat.mBytesPerFrame  = SInt16StereoStreamFormat.mBytesPerPacket * SInt16StereoStreamFormat.mFramesPerPacket; 
    SInt16StereoStreamFormat.mChannelsPerFrame = 2;     // 2 indicates stereo 
    SInt16StereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample; 
    SInt16StereoStreamFormat.mSampleRate  = graphSampleRate; 


    NSLog (@"The stereo stream format for the \"iPod\" mixer input bus:"); 
    [self printASBD: SInt16StereoStreamFormat]; 
} 
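For completeness, a sketch of how SInt16StereoStreamFormat and ipodRenderCallback would typically be wired to a mixer input bus. mixerUnit and bus 0 are assumptions, and inside an AUGraph you would normally use AUGraphSetNodeInputCallback instead:

// Sketch: attach the stream format and the render callback to the mixer
// input bus that carries the iPod audio. `mixerUnit` and bus 0 are
// assumptions; `self` is the Audio object the callback casts inRefCon to.
AudioUnitElement busNumber = 0;

OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       busNumber,
                                       &SInt16StereoStreamFormat,
                                       sizeof(SInt16StereoStreamFormat));

if (result == noErr) {
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = &ipodRenderCallback;
    callbackStruct.inputProcRefCon = self;
    result = AudioUnitSetProperty(mixerUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  busNumber,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
}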

Thanks a lot! Really helpful! – Peter 2012-03-05 13:34:40


What is kUnitSize? What is kTotalBufferSize? – 2012-07-25 03:54:14


@smartfaceweb: In my case I used the following settings: '#define kUnitSize sizeof(AudioSampleType)', '#define kBufferUnit 655360', '#define kTotalBufferSize kBufferUnit * kUnitSize' – infiniteloop 2012-07-26 03:49:40

Another answer:

If I were you, I would either use kAudioUnitSubType_AudioFilePlayer to play the file and access its samples in the unit's render callback,

or

use an ExtAudioFileRef to extract the samples straight into a buffer.
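A sketch of that second suggestion, reading decoded samples straight into a buffer with ExtAudioFile (fileURL and the client format are assumptions):

// Sketch: pull decoded 16-bit interleaved stereo PCM from a file with
// ExtAudioFile (requires AudioToolbox/ExtendedAudioFile.h).
ExtAudioFileRef audioFile = NULL;
OSStatus err = ExtAudioFileOpenURL((CFURLRef)fileURL, &audioFile); // fileURL assumed

// Ask ExtAudioFile to convert to this client format on read.
AudioStreamBasicDescription clientFormat = {0};
clientFormat.mSampleRate       = 44100.0;
clientFormat.mFormatID         = kAudioFormatLinearPCM;
clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
clientFormat.mBitsPerChannel   = 16;
clientFormat.mChannelsPerFrame = 2;
clientFormat.mBytesPerFrame    = 4;
clientFormat.mFramesPerPacket  = 1;
clientFormat.mBytesPerPacket   = 4;
err = ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                              sizeof(clientFormat), &clientFormat);

UInt32 framesToRead = 4096;
SInt16 *pcm = (SInt16 *)malloc(framesToRead * clientFormat.mBytesPerFrame);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize   = framesToRead * clientFormat.mBytesPerFrame;
bufferList.mBuffers[0].mData           = pcm;

// On return, framesToRead holds the number of frames actually read.
err = ExtAudioFileRead(audioFile, &framesToRead, &bufferList);
// ... use pcm ...
free(pcm);
ExtAudioFileDispose(audioFile);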


AudioFilePlayer only lets me specify a single file to play, and it can't come from iTunes. ExtAudioFileRef also uses an audio session that does not allow access from iTunes (or at least I could not get it to work). Has anyone implemented something similar who could help me? Please – Peter 2012-03-01 16:34:42


I don't have much experience with the iTunes library, I'm afraid. Does this help? http://www.subfurther.com/blog/2010/12/13/from-ipod-library-to-pcm-samples-in-far-fewer-steps-than-were-previously-necessary/ – dubbeat 2012-03-02 09:56:49