
How to create an AudioBuffer/audio from NSData

I am a beginner with streaming applications. I create NSData from an AudioBuffer and send the NSData to the client (receiver), but I don't know how to convert the NSData back into an AudioBuffer. How can I create an AudioBuffer/audio from NSData?

I use the following code to convert an AudioBuffer to NSData (this part works fine):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Note: this ASBD is filled in but never used below, and it mixes an iLBC
    // format ID with linear-PCM format flags.
    AudioStreamBasicDescription audioFormat;
    memset(&audioFormat, 0, sizeof(audioFormat));
    audioFormat.mSampleRate = 8000.0;
    audioFormat.mFormatID = kAudioFormatiLBC;
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked | kAudioFormatFlagIsAlignedHigh;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mReserved = 0;
    audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    AudioBufferList audioBufferList;
    NSMutableData *data = [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
    {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }
    CFRelease(blockBuffer); // the block buffer was retained on our behalf and must be released
}

If this is not the right approach, please help me. Thanks.


Hey, were you able to do it? If so, please post your solution. Thanks, I'm struggling with the same problem. – moenad


@Sojan - Were you able to convert the data back to a CMSampleBufferRef somehow? Or could you point me to any resources/approaches that worked for you? –

Answers

Answer (0 votes)

This is the code I use to convert my audio data (an audio file) into a floating-point representation and save it into an array. First I read the audio data into an AudioBufferList, then take the float values of the audio data. Check the code below in case it helps:

-(void) PrintFloatDataFromAudioFile { 

NSString * name = @"Filename"; //YOUR FILE NAME 
NSString * source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // SPECIFY YOUR FILE FORMAT 

const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding]; 

CFStringRef str = CFStringCreateWithCString(
              NULL, 
              cString, 
              kCFStringEncodingMacRoman 
              ); 
CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(
                 kCFAllocatorDefault, 
                 str, 
                 kCFURLPOSIXPathStyle, 
                 false 
                ); 

ExtAudioFileRef fileRef; 
ExtAudioFileOpenURL(inputFileURL, &fileRef); 


    AudioStreamBasicDescription audioFormat; 
audioFormat.mSampleRate = 44100; // GIVE YOUR SAMPLING RATE 
audioFormat.mFormatID = kAudioFormatLinearPCM; 
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat; 
audioFormat.mBitsPerChannel = sizeof(Float32) * 8; 
audioFormat.mChannelsPerFrame = 1; // Mono 
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32) 
audioFormat.mFramesPerPacket = 1; 
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // = sizeof(Float32) 

// 3) Apply audio format to the Extended Audio File 
ExtAudioFileSetProperty(
         fileRef, 
         kExtAudioFileProperty_ClientDataFormat, 
         sizeof (AudioStreamBasicDescription), //= audioFormat 
         &audioFormat); 

int numSamples = 1024; // how many samples to read in at a time
UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // = sizeof(Float32) = 4 bytes
UInt32 packetsPerBuffer = numSamples;
UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

// outputBuffer points to the memory reserved for the converted samples
UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);



AudioBufferList convertedData;

convertedData.mNumberBuffers = 1; // Set this to 1 for mono 
convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; //also = 1 
convertedData.mBuffers[0].mDataByteSize = outputBufferSize; 
convertedData.mBuffers[0].mData = outputBuffer; // 

UInt32 frameCount = numSamples;
float *samplesAsCArray;
int j = 0;
// 882000 doubles is ~7 MB; `static` keeps it off the stack. Size this to at least your total sample count.
static double floatDataArray[882000];

while (frameCount > 0) {
    ExtAudioFileRead(
                     fileRef,
                     &frameCount,
                     &convertedData
                     );
    if (frameCount > 0) {
        AudioBuffer audioBuffer = convertedData.mBuffers[0];
        samplesAsCArray = (float *)audioBuffer.mData; // cast mData to float

        // Only frameCount samples are valid; the last read may return fewer than numSamples.
        for (int i = 0; i < frameCount; i++) {
            floatDataArray[j] = (double)samplesAsCArray[i]; // store each sample in the double array
            printf("\n%f", floatDataArray[j]); // samples range from -1 to +1
            j++;
        }
    }
}

// Clean up the file and buffers.
free(outputBuffer);
ExtAudioFileDispose(fileRef);
CFRelease(inputFileURL);
CFRelease(str);
}

Thanks for your answer –


AudioBuffer to NSData works fine. But my question is how to convert the NSData back to audio / play the audio. –

Answer (0 votes)

I used the following snippet to convert NSData (in my case 800-byte packets, but arguably they could be any size) to an AudioBufferList:

-(AudioBufferList *) getBufferListFromData:(NSData *)data
{
    if (data.length > 0)
    {
        NSUInteger len = [data length];
        // Byte*, void* or Float32* would all work here; the buffer is just raw bytes.
        Byte *byteData = (Byte *)malloc(len);
        memcpy(byteData, [data bytes], len);
        if (byteData)
        {
            AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
            theDataBuffer->mNumberBuffers = 1;
            theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
            theDataBuffer->mBuffers[0].mNumberChannels = 1;
            theDataBuffer->mBuffers[0].mData = byteData;
            // The caller now owns both allocations.
            return theDataBuffer;
        }
    }
    return NULL;
}
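Note that both byteData and theDataBuffer are malloc'd, so the caller owns them. A minimal cleanup sketch (the helper name freeBufferList is my own, not from the answer):

-(void) freeBufferList:(AudioBufferList *)bufferList
{
    if (bufferList == NULL) return;
    for (UInt32 i = 0; i < bufferList->mNumberBuffers; i++) {
        free(bufferList->mBuffers[i].mData); // the copied sample bytes
    }
    free(bufferList); // the list struct itself
}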

Then how do you play the AudioBufferList? Or convert it back into an audio buffer? –


I can play the audio, but the voice isn't clear. How can I deal with that? –

Answer (4 votes)

You can use the code below to create NSData from the CMSampleBufferRef and then play it with AVAudioPlayer.

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { 

    AudioBufferList audioBufferList; 
    NSMutableData *data= [NSMutableData data]; 
    CMBlockBufferRef blockBuffer; 
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer); 

    for(int y=0; y< audioBufferList.mNumberBuffers; y++){ 

     AudioBuffer audioBuffer = audioBufferList.mBuffers[y]; 
     Float32 *frame = (Float32*)audioBuffer.mData; 

     [data appendBytes:frame length:audioBuffer.mDataByteSize]; 

    } 

    CFRelease(blockBuffer);

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:nil]; 
    [player play]; 
} 
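A likely cause of the errors reported in the comments below: the NSData assembled here is raw PCM with no container header, and AVAudioPlayer's initWithData: expects data in a recognizable file format (OSStatus 1954115647 is the four-character code 'typ?', kAudioFileUnsupportedFileTypeError). A minimal sketch, assuming the captured samples are 16-bit signed little-endian mono PCM at 8 kHz (those parameters are my assumptions; match them to your actual capture format), that prepends a standard 44-byte WAV header so the player can parse the data:

// Assumption: pcmData holds raw 16-bit signed little-endian mono PCM at 8 kHz.
// The helper name and parameters are illustrative, not from the answer above.
static NSData *WAVDataFromPCM(NSData *pcmData)
{
    uint32_t sampleRate = 8000;   // must match the capture format
    uint16_t channels = 1;
    uint16_t bitsPerSample = 16;
    uint32_t byteRate = sampleRate * channels * (bitsPerSample / 8);
    uint16_t blockAlign = channels * (bitsPerSample / 8);
    uint32_t dataSize = (uint32_t)pcmData.length;
    uint32_t riffSize = 36 + dataSize; // header bytes after "RIFF<size>" plus the payload
    uint32_t fmtSize = 16;
    uint16_t pcmFormat = 1;            // 1 = linear PCM

    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + dataSize];
    [wav appendBytes:"RIFF" length:4];
    [wav appendBytes:&riffSize length:4];
    [wav appendBytes:"WAVE" length:4];
    [wav appendBytes:"fmt " length:4];
    [wav appendBytes:&fmtSize length:4];
    [wav appendBytes:&pcmFormat length:2];
    [wav appendBytes:&channels length:2];
    [wav appendBytes:&sampleRate length:4];
    [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2];
    [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];
    [wav appendBytes:&dataSize length:4];
    [wav appendData:pcmData];
    return wav;
}

// Usage: player = [[AVAudioPlayer alloc] initWithData:WAVDataFromPCM(data) error:&error];

Since iOS devices are little-endian, appending the integer fields byte-for-byte produces the little-endian layout the WAV format requires. If your capture session actually delivers Float32 samples, adjust bitsPerSample and the format code accordingly.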

Did this work for anyone? AVAudioPlayer with the above data returns nil with the following error: 'Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn't be completed. (OSStatus error 1954115647.)"' Do you have a fix? –


I get the same error: Error Domain=NSOSStatusErrorDomain Code=1954115647 "(null)" when trying to initialize the player. Sometimes it manages to initialize the player, but then I hear no sound. Do you have a hint? – Vincenzo

Answer (2 votes)

This is how I did it, in case anyone else gets stuck on the same problem. You don't need to take the data out of the AudioBufferList; just use it as it is. To recreate the AudioBufferList from the NSData again, I also needed the sample-count information, so I prepended it before the actual data.

Here is how you get the data out of the CMSampleBufferRef:

AudioBufferList audioBufferList; 
CMBlockBufferRef blockBuffer; 
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer); 
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);   
NSUInteger size = sizeof(audioBufferList); 
char buffer[size + 4]; 
((int*)buffer)[0] = (int)numSamples; 
memcpy(buffer +4, &audioBufferList, size); 
//This is the "audio data" blob: a 4-byte sample count followed by the AudioBufferList struct.
NSData *bufferData = [NSData dataWithBytes:buffer length:size + 4];
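One caution (my reading of the code, not part of the answer): sizeof(audioBufferList) copies the AudioBufferList struct, which contains the mData pointer rather than the samples themselves. That is fine while producer and consumer share an address space, but if the NSData is sent to another process or device, the pointer is meaningless there. A hedged sketch of serializing the actual sample bytes instead:

// Illustrative only: package the sample count plus the real bytes of each buffer.
NSMutableData *payload = [NSMutableData data];
uint32_t count = (uint32_t)numSamples;
[payload appendBytes:&count length:sizeof(count)];
for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++) {
    AudioBuffer b = audioBufferList.mBuffers[i];
    uint32_t byteSize = b.mDataByteSize;
    [payload appendBytes:&byteSize length:sizeof(byteSize)]; // length prefix per buffer
    [payload appendBytes:b.mData length:byteSize];           // the samples themselves
}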

And this is how you create a CMSampleBufferRef out of that data:

const char *buffer = (const char *)[bufferData bytes];

CMSampleBufferRef sampleBuffer = NULL; 
OSStatus status = -1; 

/* Format Description */ 
AudioStreamBasicDescription audioFormat; 
audioFormat.mSampleRate = 44100.00; 
audioFormat.mFormatID = kAudioFormatLinearPCM; 
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // == 0xc
audioFormat.mBytesPerPacket= 2; 
audioFormat.mFramesPerPacket= 1; 
audioFormat.mBytesPerFrame= 2; 
audioFormat.mChannelsPerFrame= 1; 
audioFormat.mBitsPerChannel= 16; 
audioFormat.mReserved= 0; 

CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format);
if (status != noErr)
{
    NSLog(@"Error in CMAudioFormatDescriptionCreate");
    return;
}

/* Create sample Buffer */ 
CMSampleTimingInfo timing = {.duration= CMTimeMake(1, 44100), .presentationTimeStamp= kCMTimeZero, .decodeTimeStamp= kCMTimeInvalid}; 
CMItemCount framesCount  = ((int*)buffer)[0]; 

status = CMSampleBufferCreate(kCFAllocatorDefault, nil , NO,nil,nil,format, framesCount, 1, &timing, 0, nil, &sampleBuffer); 

if(status != noErr) 
{ 
    NSLog(@"Error in CMSampleBufferCreate"); 
    return; 
} 

/* Copy BufferList to Sample Buffer */ 
AudioBufferList receivedAudioBufferList; 
memcpy(&receivedAudioBufferList, buffer + 4, sizeof(receivedAudioBufferList)); 

status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault , kCFAllocatorDefault, 0, &receivedAudioBufferList); 
if (status != noErr) { 
    NSLog(@"Error in CMSampleBufferSetDataBufferFromAudioBufferList"); 
    return; 
} 
//Use your sampleBuffer. 

Let me know if there are any problems.


Why 'size + 4'? Why do you add 4? –


I think I was using the first 4 bytes of the buffer to transmit an int that I used to carry some extra information. –


Could you take a look at my related [question](https://stackoverflow.com/questions/46908485/deep-copy-of-audio-cmsamplebuffer), which has a bounty enabled? –