
I need to stream audio in an iOS app using Objective-C. I am using the AVFoundation framework to capture raw data from the microphone and send it to a server, but the raw data that arrives is corrupted. My code is below.

Please suggest where I am going wrong.

session = [[AVCaptureSession alloc] init]; 

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
                                [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                nil];


AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio]; 
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil]; 
[session addInput:audioInput]; 

AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init]; 
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL); 
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue]; 

AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings]; 
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES; 
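// NOTE: this input is configured with recordSettings but is never attached
// to an AVAssetWriter, so those settings are not applied to anything in the
// capture session.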

if([session canAddOutput:audioDataOutput]){ 
    [session addOutput:audioDataOutput]; 
} 
[session startRunning]; 

Capture:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {

        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];

        NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
        NSLog(@"Encoded: %@", base64Encoded);

    }

    CFRelease(blockBuffer);
}
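One thing worth checking: recordSettings requests 32-bit integer PCM (AVLinearPCMIsFloatKey is NO), yet the delegate casts each buffer to Float32; moreover, the dictionary is never applied to the capture output, and on iOS AVCaptureAudioDataOutput does not expose a way to set it, so the delegate receives samples in the device's native format. Here is a minimal sketch (not from the original post) for inspecting the format each sample buffer actually carries, inside the delegate above:

CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
if (asbd != NULL) {
    // Compare this layout with what the server expects before appending
    // bytes to `data`; a mismatch here shows up as "corrupted" audio.
    NSLog(@"rate: %.0f Hz, channels: %u, bits: %u, flags: 0x%x",
          asbd->mSampleRate,
          (unsigned)asbd->mChannelsPerFrame,
          (unsigned)asbd->mBitsPerChannel,
          (unsigned)asbd->mFormatFlags);
}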

Answer


I am posting a sample of the kind of code you need to make this work. Its approach is nearly the same as yours, so you should find it easy to read.

The app uses an Audio Unit to record and play back microphone input and speaker output, NSNetService to connect two iOS devices over a network, and NSStream to send the audio stream between the devices.
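For orientation, here is a minimal sketch of the Remote I/O Audio Unit setup such an app relies on; the stream format values are illustrative assumptions, not code taken from the linked project:

#import <AudioToolbox/AudioToolbox.h>

// Find and instantiate the Remote I/O unit (microphone in, speaker out).
AudioComponentDescription desc = {
    .componentType         = kAudioUnitType_Output,
    .componentSubType      = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioUnit ioUnit = NULL;
AudioComponentInstanceNew(component, &ioUnit);

// Enable input on bus 1 (the microphone); output on bus 0 is on by default.
UInt32 enable = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enable, sizeof(enable));

// Fix one known PCM layout on both buses so the bytes sent over the network
// are the same bytes the far end feeds to its speaker.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 16000.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = 2;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = 2;
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &asbd, sizeof(asbd));
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));

// Render/input callbacks (omitted here) move captured frames to the network
// stream and pull received frames into the speaker bus.
AudioUnitInitialize(ioUnit);
AudioOutputUnitStart(ioUnit);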

You can download the source code here:

https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM

It requires the latest Xcode 9 beta to compile, and the latest iOS 11 beta to run.

NOTE | A log entry for every method call and event is displayed in a text field that covers the entire screen; there is no interactive interface, no buttons, etc. After installing the app on two iOS devices, simply launch it on both, and they will connect to your network automatically and start streaming audio.
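The automatic connection described above is the standard Bonjour pattern; a rough sketch follows (the service type and delegate wiring are assumptions, not details taken from the project):

// Publisher side: advertise a listening service on the local network.
NSNetService *service = [[NSNetService alloc] initWithDomain:@"local."
                                                        type:@"_audiostream._tcp."
                                                        name:@""
                                                        port:0];
service.delegate = self;
[service publishWithOptions:NSNetServiceListenForConnections];

// Resolver side: once a browser finds the service, ask it for a paired
// NSInputStream/NSOutputStream and schedule both on a run loop.
NSInputStream *inputStream = nil;
NSOutputStream *outputStream = nil;
if ([service getInputStream:&inputStream outputStream:&outputStream]) {
    inputStream.delegate = self;
    outputStream.delegate = self;
    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [inputStream open];
    [outputStream open];
}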
