2015-04-03

I'm trying to use the new AVAudioEngine in iOS 8, and completionHandler() is being called too early.

It looks like player.scheduleFile's completionHandler() is called before the sound file has finished playing.

I'm using a sound file that is 5 s long, and the println() message appears about 1 second before the end of the sound.

Am I doing something wrong, or am I misunderstanding the idea of the completionHandler?

Thanks!


Here is some code:

import AVFoundation

class SoundHandler {
    let engine: AVAudioEngine
    let player: AVAudioPlayerNode
    let mainMixer: AVAudioMixerNode

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        engine.attachNode(player)
        mainMixer = engine.mainMixerNode

        // Connect the player to the mixer before starting the engine.
        engine.connect(player, to: mainMixer, format: mainMixer.outputFormatForBus(0))

        var error: NSError?
        if !engine.startAndReturnError(&error) {
            if let e = error {
                println("error \(e.localizedDescription)")
            }
        }
    }

    func playSound() {
        let soundUrl = NSBundle.mainBundle().URLForResource("Test", withExtension: "m4a")!
        let soundFile = AVAudioFile(forReading: soundUrl, error: nil)

        // Expected: "Finished!" after 5 s of audio. Observed: it prints
        // about 1 s before the sound actually ends.
        player.scheduleFile(soundFile, atTime: nil, completionHandler: { println("Finished!") })

        player.play()
    }
}

Answers

Answer (score 6)

This looks like a bug; we should file a Radar! http://bugreport.apple.com

In the meantime, as a workaround, I noticed that if you use scheduleBuffer:atTime:options:completionHandler: instead, the callback fires as expected (after playback finishes).

Example code:

NSError *error = nil;
AVAudioFile *file = [[AVAudioFile alloc] initForReading:_fileURL commonFormat:AVAudioPCMFormatFloat32 interleaved:NO error:&error];
// Read the entire file into a PCM buffer sized to hold every frame.
AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat frameCapacity:(AVAudioFrameCount)file.length];
[file readIntoBuffer:buffer error:&error];

[_player scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferInterrupts completionHandler:^{
    // reminder: we're not on the main thread in here
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"done playing, as expected!");
    });
}];
Works as a workaround. Thanks! – Oliver 2015-04-21 17:44:12

Great workaround. – 2015-06-13 21:40:27

Love it. Works like a charm! – 2016-01-31 21:43:12

Answer (score 6)

I see the same behavior.

From my experimentation, I believe the callback is called once the buffer/segment/file has been scheduled (read into the playback queue), not once it has finished playing.

That is despite the documentation explicitly saying: "Called after the buffer has completely played or the player is stopped. May be nil."

So I think it's either a bug or incorrect documentation. I'm not sure which.
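
To see the gap for yourself, here is a minimal diagnostic sketch (modern Swift spelling; player and file are assumed to be an already-attached AVAudioPlayerNode and an AVAudioFile) that compares the callback's wall-clock time against the file's computed duration:

import AVFoundation

func measureCallbackTiming(player: AVAudioPlayerNode, file: AVAudioFile) {
    // Duration in seconds = total frames / sample rate.
    let duration = Double(file.length) / file.processingFormat.sampleRate
    let start = Date()
    player.scheduleFile(file, at: nil) {
        let elapsed = Date().timeIntervalSince(start)
        // On iOS 8-10 this typically prints an elapsed time noticeably
        // shorter than the file's duration, matching the behavior above.
        print("callback after \(elapsed)s; file duration is \(duration)s")
    }
    player.play()
}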

Answer (score 4)

You can always compute the future time at which audio playback will complete, using AVAudioTime. The current behavior is actually useful, because it lets you schedule additional buffers/segments/files to play from the callback before the current one ends, avoiding gaps in playback. This lets you create a simple looping player without much work. Here's an example:

class Latch {
    var value: Bool = true
}

func loopWholeFile(file: AVAudioFile, player: AVAudioPlayerNode) -> Latch {
    let looping = Latch()
    let frames = file.length

    let sampleRate = file.processingFormat.sampleRate
    var segmentTime: AVAudioFramePosition = 0
    var segmentCompletion: AVAudioNodeCompletionHandler!
    segmentCompletion = {
        if looping.value {
            // The callback fires early, leaving time to queue the next
            // pass one whole file further into the future, gap-free.
            segmentTime += frames
            player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
        }
    }
    // Schedule the file twice up front so one segment is always queued
    // when the completion handler fires.
    player.scheduleFile(file, atTime: AVAudioTime(sampleTime: segmentTime, atRate: sampleRate), completionHandler: segmentCompletion)
    segmentCompletion()
    player.play()

    return looping
}

The code above schedules the entire file twice before calling player.play(). As each segment nears completion, it schedules another whole file in the future, to avoid gaps in playback. To stop looping, you use the returned Latch, like this:

let looping = loopWholeFile(file, player) 
sleep(1000) 
looping.value = false 
player.stop() 
Answer (score 0)

Yes, it gets called slightly before the file (or buffer) has finished. If you call [myNode stop] from within the completion handler, the file (or buffer) will not play to completion. However, if you call [myEngine stop], the file (or buffer) will play through to the end.
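
A rough sketch of that difference in Swift (player, engine, and file are assumed names; this mirrors the observation above rather than documented behavior):

player.scheduleFile(file, at: nil) {
    // This runs slightly before the audio is audibly finished,
    // so what you stop here matters:
    // player.stop()  // cuts off the tail of the file
    engine.stop()     // per this answer, the file plays through to the end
}
player.play()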

Answer (score 1)

My bug report was closed as "works as intended", but Apple pointed me to the new variants of the scheduleFile, scheduleSegment, and scheduleBuffer methods in iOS 11. These add a completionCallbackType argument that you can use to specify that you want the completion callback when playback has completed:

[self.audioUnitPlayer
        scheduleSegment:self.audioUnitFile
        startingFrame:sampleTime
        frameCount:(int)sampleLength
        atTime:nil
        completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
        completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // do something here
}];

The documentation doesn't say anything about how this works, but I tested it and it works for me.
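
For reference, the same iOS 11+ call from Swift looks roughly like this (a sketch; player and file stand in for your configured node and audio file):

player.scheduleFile(file, at: nil,
                    completionCallbackType: .dataPlayedBack) { callbackType in
    // .dataPlayedBack asks for the callback after the audio has actually
    // played out, not merely after it has been scheduled or rendered.
    DispatchQueue.main.async {
        print("done playing, as expected!")
    }
}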

I had been using this workaround for iOS 8-10:

- (void)playRecording { 
    [self.audioUnitPlayer scheduleSegment:self.audioUnitFile startingFrame:sampleTime frameCount:(int)sampleLength atTime:0 completionHandler:^() { 
     float totalTime = [self recordingDuration]; 
     float elapsedTime = [self recordingCurrentTime]; 
     float remainingTime = totalTime - elapsedTime; 
     [self performSelector:@selector(doSomethingHere) withObject:nil afterDelay:remainingTime]; 
    }]; 
} 

- (float)recordingDuration { 
    float duration = self.audioUnitFile.length / self.audioUnitFile.processingFormat.sampleRate;
    if (isnan(duration)) { 
     duration = 0; 
    } 
    return duration; 
} 

- (float)recordingCurrentTime { 
    AVAudioTime *nodeTime = self.audioUnitPlayer.lastRenderTime; 
    AVAudioTime *playerTime = [self.audioUnitPlayer playerTimeForNodeTime:nodeTime]; 
    AVAudioFramePosition sampleTime = playerTime.sampleTime; 
    if (sampleTime == 0) { return self.audioUnitLastKnownTime; } // this happens when the player isn't playing 
    sampleTime += self.audioUnitStartingFrame; // if we trimmed from the start, or changed the location with the location slider, the time before that point won't be included in the player time, so we have to track it ourselves and add it here 
    float time = sampleTime/self.audioUnitFile.processingFormat.sampleRate; 
    self.audioUnitLastKnownTime = time; 
    return time; 
} 
Answer (score 0)
// audioFile here is our original audio

audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: {
    print("scheduleFile Complete")

    // The callback fires early, so compute how much of the file is left
    // to play and delay the real "finished" work by that amount.
    var delayInSeconds: Double = 0

    if let lastRenderTime = self.audioPlayerNode.lastRenderTime, let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {

        if let rate = rate {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate) / Double(rate)
        } else {
            delayInSeconds = Double(audioFile.length - playerTime.sampleTime) / Double(audioFile.processingFormat.sampleRate)
        }
    }

    // schedule a stop timer for when audio finishes playing
    DispatchQueue.main.asyncAfter(deadline: .now() + delayInSeconds) {
        audioEngine.mainMixerNode.removeTap(onBus: 0)
        // Playback has completed
    }
})