iOS - Generate and play indefinite, simple audio (sine wave)

I'm looking to build an incredibly simple app for iOS with a button that starts and stops an audio signal. The signal is just going to be a sine wave, and it will check my model (an instance variable for the volume) throughout playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, populate them with data, respond to button presses, and so on; however, when it comes to having something (in this case, a sound) just carry on indefinitely, I'm a bit stuck! Any pointers would be terrific!

Thanks for reading.

It may be that AVAudioPlayer is what I need to get started... – Rogare

AVAudioPlayer would be the way to go if you just want to play a pre-made sine wave sound file (you'd be able to control the volume, but nothing else, e.g. the frequency). – admsyn

Answer

Here's a bare-bones application that will play a generated frequency on demand. You haven't specified whether you're targeting iOS or OS X, so I've gone for OS X since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to fill in the missing pieces by looking into Audio Session category basics and swapping the Default Output audio unit for the RemoteIO audio unit.
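
As a rough sketch of what that swap might involve (this is my own illustration, not part of the original answer; it assumes AVFoundation is linked, and `configureIOSAudio` is a made-up helper name), the rest of the setup below carries over largely unchanged:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical iOS setup: activate an audio session, then describe the
// RemoteIO unit instead of the Default Output unit used on OS X.
static AudioComponentDescription configureIOSAudio(void)
{
    // iOS requires an active audio session before any audio I/O starts.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayback error:NULL];
    [session setActive:YES error:NULL];

    AudioComponentDescription description = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO, // instead of kAudioUnitSubType_DefaultOutput
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    return description;
}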

Note that the intention of this is purely to demonstrate some Core Audio / Audio Unit basics. You'll probably want to look into the AUGraph API if you want to get any more complex than this. (Also, in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio.)
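
For the error checking just mentioned, one common pattern is to wrap each call and inspect the returned status; a minimal sketch (my own, not part of the answer; `CheckError` is a made-up name) could look like this:

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Nearly every Core Audio call returns an OSStatus, where noErr (0) means
// success. Anything else should at least be logged.
static void CheckError(OSStatus status, const char *operation)
{
    if (status != noErr) {
        NSLog(@"Error %d during %s", (int)status, operation);
    }
}

// Usage would look like:
// CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize");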

You'll need to add the AudioToolbox and AudioUnit frameworks to your project to use this code.

#import <Cocoa/Cocoa.h>
#import <AudioToolbox/AudioToolbox.h>

// Forward declaration so the render callback can be referenced in the
// AURenderCallbackStruct before it is defined further down.
OSStatus SineWaveRenderCallback(void * inRefCon,
           AudioUnitRenderActionFlags * ioActionFlags,
           const AudioTimeStamp * inTimeStamp,
           UInt32 inBusNumber,
           UInt32 inNumberFrames,
           AudioBufferList * ioData);

@interface SWAppDelegate : NSObject <NSApplicationDelegate> 
{ 
    AudioUnit outputUnit; 
    double renderPhase; 
} 
@end 

@implementation SWAppDelegate 

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification 
{ 
// First, we need to establish which Audio Unit we want. 

// We start with its description, which is: 
    AudioComponentDescription outputUnitDescription = { 
     .componentType   = kAudioUnitType_Output, 
     .componentSubType  = kAudioUnitSubType_DefaultOutput, 
     .componentManufacturer = kAudioUnitManufacturer_Apple 
    }; 

// Next, we get the first (and only) component corresponding to that description 
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription); 

// Now we can create an instance of that component, which will create an 
// instance of the Audio Unit we're looking for (the default output) 
    AudioComponentInstanceNew(outputComponent, &outputUnit); 
    AudioUnitInitialize(outputUnit); 

// Next we'll tell the output unit what format our generated audio will 
// be in. Generally speaking, you'll want to stick to sane formats, since 
// the output unit won't accept every single possible stream format. 
// Here, we're specifying floating point samples with a sample rate of 
// 44100 Hz in mono (i.e. 1 channel) 
    AudioStreamBasicDescription ASBD = { 
     .mSampleRate  = 44100, 
     .mFormatID   = kAudioFormatLinearPCM, 
     .mFormatFlags  = kAudioFormatFlagsNativeFloatPacked, 
     .mChannelsPerFrame = 1, 
     .mFramesPerPacket = 1, 
     .mBitsPerChannel = sizeof(Float32) * 8, 
     .mBytesPerPacket = sizeof(Float32), 
     .mBytesPerFrame = sizeof(Float32) 
    }; 

    AudioUnitSetProperty(outputUnit, 
         kAudioUnitProperty_StreamFormat, 
         kAudioUnitScope_Input, 
         0, 
         &ASBD, 
         sizeof(ASBD)); 

// Next step is to tell our output unit which function we'd like it 
// to call to get audio samples. We'll also pass in a context pointer, 
// which can be a pointer to anything you need to maintain state between 
// render callbacks. We only need to point to a double which represents 
// the current phase of the sine wave we're creating. 
    AURenderCallbackStruct callbackInfo = { 
     .inputProc  = SineWaveRenderCallback, 
     .inputProcRefCon = &renderPhase 
    }; 

    AudioUnitSetProperty(outputUnit, 
         kAudioUnitProperty_SetRenderCallback, 
         kAudioUnitScope_Global, 
         0, 
         &callbackInfo, 
         sizeof(callbackInfo)); 

// Here we're telling the output unit to start requesting audio samples 
// from our render callback. This is the line of code that starts actually 
// sending audio to your speakers. 
    AudioOutputUnitStart(outputUnit); 
} 

// This is our render callback. It will be called very frequently for short 
// buffers of audio (512 samples per call on my machine). 
OSStatus SineWaveRenderCallback(void * inRefCon, 
           AudioUnitRenderActionFlags * ioActionFlags, 
           const AudioTimeStamp * inTimeStamp, 
           UInt32 inBusNumber, 
           UInt32 inNumberFrames, 
           AudioBufferList * ioData) 
{ 
    // inRefCon is the context pointer we passed in earlier when setting the render callback 
    double currentPhase = *((double *)inRefCon); 
    // ioData is where we're supposed to put the audio samples we've created 
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData; 
    const double frequency = 440.; 
    const double phaseStep = (frequency/44100.) * (M_PI * 2.); 

    for(int i = 0; i < inNumberFrames; i++) { 
     outputBuffer[i] = sin(currentPhase); 
     currentPhase += phaseStep; 
    } 

    // If we were doing stereo (or more), this would copy our sine wave samples 
    // to all of the remaining channels 
    for(int i = 1; i < ioData->mNumberBuffers; i++) { 
     memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize); 
    } 

    // writing the current phase back to inRefCon so we can use it on the next call 
    *((double *)inRefCon) = currentPhase; 
    return noErr; 
} 

- (void)applicationWillTerminate:(NSNotification *)notification 
{ 
    AudioOutputUnitStop(outputUnit); 
    AudioUnitUninitialize(outputUnit); 
    AudioComponentInstanceDispose(outputUnit); 
} 

@end 

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to dynamically change the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
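
A rough sketch of that struct approach (my own illustration rather than part of the answer; `SineWaveState` is a made-up name) might look like the following. Using a C11 atomic for the value written from the main thread also keeps the callback free of locks, which matters for the realtime constraints described next:

#import <stdatomic.h>

// Hypothetical shared state: the render callback owns `phase`; the main thread
// writes `frequency` with atomic stores (lock-free on Apple's 64-bit platforms)
// and the callback reads it on every call.
typedef struct {
    double         phase;
    _Atomic double frequency;
} SineWaveState;

// Pass a pointer to a SineWaveState as .inputProcRefCon instead of &renderPhase.
// Inside the render callback:
//
//   SineWaveState *state = (SineWaveState *)inRefCon;
//   const double frequency = atomic_load(&state->frequency);
//   const double phaseStep = (frequency / 44100.) * (M_PI * 2.);
//   ...generate samples as before, then:
//   state->phase = currentPhase;
//
// And from the main thread (assuming `state` is an instance variable), e.g.
// when a slider changes:
//
//   atomic_store(&state.frequency, 880.);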

Be careful in your render callback, though. It's called from a realtime thread (not the same thread your main run loop runs on). Render callbacks are subject to some fairly strict timing requirements, which means there are many things you shouldn't do in your callback, such as:

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (yes, seriously)

Keep in mind that this is not the only way to do this. I've only demonstrated it this way since you tagged this question core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
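
For completeness, a rough sketch of that simpler route (again my own illustration, not from the answer; "sine440.caf" is a made-up file name, and `sinePlayer` is assumed to be a strong instance variable so the player isn't deallocated mid-playback):

#import <AVFoundation/AVFoundation.h>

- (void)startLoopingSine
{
    // Load a pre-rendered sine wave from the app bundle and loop it forever.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine440" withExtension:@"caf"];
    sinePlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    sinePlayer.numberOfLoops = -1;   // -1 means loop indefinitely
    sinePlayer.volume = 0.5;         // volume can still be changed during playback
    [sinePlayer play];
}

- (void)stopLoopingSine
{
    [sinePlayer stop];
}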

There's also Novocaine, which hides a lot of this verbosity from you. You could also look into the Audio Queue API, which works fairly similarly to the Core Audio example I wrote but decouples you from the hardware a little more (i.e. it's less strict about how you behave in your render callback).
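
To give a feel for that similarity, here's a rough sketch of the same sine generator on top of the Audio Queue API (my own illustration, not admsyn's code; error checking is again omitted, and it reuses the ASBD from the example above):

#import <AudioToolbox/AudioToolbox.h>
#import <math.h>

// With Audio Queues you fill buffers that the queue hands you, rather than
// being called directly by the output unit.
static void SineQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    double *phase = (double *)inUserData;
    Float32 *samples = (Float32 *)inBuffer->mAudioData;
    UInt32 frameCount = inBuffer->mAudioDataBytesCapacity / sizeof(Float32);
    const double phaseStep = (440. / 44100.) * (M_PI * 2.);

    for (UInt32 i = 0; i < frameCount; i++) {
        samples[i] = sin(*phase);
        *phase += phaseStep;
    }

    // Tell the queue how much audio we wrote, then hand the buffer back.
    inBuffer->mAudioDataByteSize = frameCount * sizeof(Float32);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static double queuePhase = 0;

static void StartSineQueue(const AudioStreamBasicDescription *ASBD)
{
    AudioQueueRef queue;
    AudioQueueNewOutput(ASBD, SineQueueCallback, &queuePhase, NULL, NULL, 0, &queue);

    // Prime a few buffers by filling them ourselves before starting the queue.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 4096, &buffer);
        SineQueueCallback(&queuePhase, queue, buffer);
    }

    AudioQueueStart(queue, NULL);
}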

Thanks so much! I had "iOS" in the question title, but my apologies, I should have added the tag and/or a note in the question body as well. I'll fix that now. – Rogare

@Rogare Good point, I missed that! My aim was really just to demonstrate some of the concepts of Core Audio so you can get started. That said: if you start digging in, you'll almost certainly have more questions :p. Good luck! – admsyn

How do you generate an on signal or an off signal? –