2015-10-15

Background: I found an Apple WWDC session called "AVAudioEngine in Practice" and am trying to make something similar to the last demo shown at 43:35 (https://youtu.be/FlMaxen2eyw?t=2614). I'm using SpriteKit instead of SceneKit, but the principle is the same: I want to generate spheres, throw them around, and when they collide, trigger a sound that is unique to each sphere (see the session for details on using AVAudioEngine).

Problems:

  • I want to bind a unique AudioPlayerNode to each SpriteKit node, so I can play a different sound for each ball. Right now, if I create two spheres and set a different pitch for each of their AudioPlayerNodes, only the most recently created AudioPlayerNode seems to play, even when the original sphere collides. During the demo he mentions "I'm binding a player, a dedicated player, to each ball". How would I go about doing that?

  • There are audible clicks/artifacts every time a new collision happens. I assume this has to do with the AVAudioPlayerNodeBufferOptions and/or the fact that I create, schedule, and consume a buffer every time contact occurs, which is not the most efficient approach. What would be a good way to work around this?

Code: As mentioned in the video, "...and for every ball that is born into this world, a new player node is also created". I have a separate class for the spheres, with a method that returns an SKSpriteNode and creates an AudioPlayerNode every time it is called:

class Sphere { 

    var sphere: SKSpriteNode = SKSpriteNode(color: UIColor(), size: CGSize()) 
    var sphereScale: CGFloat = CGFloat(0.01) 
    var spherePlayer = AVAudioPlayerNode() 
    let audio = Audio() 
    let sphereCollision: UInt32 = 0x1 << 0 

    func createSphere(position: CGPoint, pitch: Float) -> SKSpriteNode { 

     let texture = SKTexture(imageNamed: "Slice") 
     let collisionTexture = SKTexture(imageNamed: "Collision") 

     // Define the node 

     sphere = SKSpriteNode(texture: texture, size: texture.size()) 

     sphere.position = position 
     sphere.name = "sphere" 
     sphere.physicsBody = SKPhysicsBody(texture: collisionTexture, size: sphere.size) 
     sphere.physicsBody?.dynamic = true 
     sphere.physicsBody?.mass = 0 
     sphere.physicsBody?.restitution = 0.5 
     sphere.physicsBody?.usesPreciseCollisionDetection = true 
     sphere.physicsBody?.categoryBitMask = sphereCollision 
     sphere.physicsBody?.contactTestBitMask = sphereCollision 
     sphere.zPosition = 1 

     // Create AudioPlayerNode 

     spherePlayer = audio.createPlayer(pitch) 

     return sphere
    }
}

Here is my Audio class, which creates the AVAudioPCMBuffers and AudioPlayerNodes:

class Audio { 

let engine: AVAudioEngine = AVAudioEngine() 

func createBuffer(name: String, type: String) -> AVAudioPCMBuffer { 

    let audioFilePath = NSBundle.mainBundle().URLForResource(name as String, withExtension: type as String)! 
    let audioFile = try! AVAudioFile(forReading: audioFilePath) 
    let buffer = AVAudioPCMBuffer(PCMFormat: audioFile.processingFormat, frameCapacity: UInt32(audioFile.length)) 
    try! audioFile.readIntoBuffer(buffer) 

    return buffer 
} 

func createPlayer(pitch: Float) -> AVAudioPlayerNode { 

    let player = AVAudioPlayerNode() 
    let buffer = self.createBuffer("PianoC1", type: "wav") 
    let pitcher = AVAudioUnitTimePitch() 
    let delay = AVAudioUnitDelay() 
    pitcher.pitch = pitch 
    delay.delayTime = 0.2 
    delay.feedback = 90 
    delay.wetDryMix = 0 

    engine.attachNode(pitcher) 
    engine.attachNode(player) 
    engine.attachNode(delay) 

    engine.connect(player, to: pitcher, format: buffer.format) 
    engine.connect(pitcher, to: delay, format: buffer.format) 
    engine.connect(delay, to: engine.mainMixerNode, format: buffer.format) 

    engine.prepare() 
    try! engine.start() 

    return player 
} 
} 
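One likely contributor to the clicks described in the question: `createBuffer` reads and decodes the file from disk on every collision. A minimal caching sketch (Swift 2 syntax to match the code above; the names `CachedAudio`, `bufferCache`, and `cachedBuffer` are my own, not from the original post) that loads each file only once and reuses the buffer:

```swift
import AVFoundation

class CachedAudio {

    let engine = AVAudioEngine()
    // Buffers keyed by file name, so each file is read from disk only once.
    // (Class and property names here are hypothetical, not from the post.)
    private var bufferCache = [String: AVAudioPCMBuffer]()

    func cachedBuffer(name: String, type: String) -> AVAudioPCMBuffer {
        if let cached = bufferCache[name] {
            return cached // reuse the decoded buffer on later collisions
        }
        let url = NSBundle.mainBundle().URLForResource(name, withExtension: type)!
        let file = try! AVAudioFile(forReading: url)
        let buffer = AVAudioPCMBuffer(PCMFormat: file.processingFormat,
                                      frameCapacity: AVAudioFrameCount(file.length))
        try! file.readIntoBuffer(buffer)
        bufferCache[name] = buffer
        return buffer
    }
}
```

With this in place, the contact handler can schedule the same cached buffer repeatedly instead of allocating a new one per contact.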

In my GameScene class I then test for collisions, schedule a buffer, and play the AudioPlayerNode if contact has occurred:

func didBeginContact(contact: SKPhysicsContact) { 

     let firstBody: SKPhysicsBody = contact.bodyA 

     if (firstBody.categoryBitMask & sphere.sphereCollision != 0) { 

     let buffer1 = audio.createBuffer("PianoC1", type: "wav") 
     sphere.spherePlayer.scheduleBuffer(buffer1, atTime: nil, options: AVAudioPlayerNodeBufferOptions.Interrupts, completionHandler: nil) 
     sphere.spherePlayer.play() 

     } 
} 
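For the "one dedicated player per ball" binding the question asks about, one common approach (a sketch, not from the post; `players` and `registerSphere` are hypothetical names) is to keep a dictionary from each sphere node to its own AVAudioPlayerNode and look the player up from the contacting node, instead of always playing the last-created player:

```swift
// Map each sphere node to its own dedicated AVAudioPlayerNode.
var players = [SKSpriteNode: AVAudioPlayerNode]()

// Call this wherever a sphere is created, e.g. right after createSphere(...).
func registerSphere(node: SKSpriteNode, player: AVAudioPlayerNode) {
    players[node] = player
}

func didBeginContact(contact: SKPhysicsContact) {
    // Look up the player belonging to the node that actually collided,
    // rather than the most recently created one.
    if let node = contact.bodyA.node as? SKSpriteNode,
           player = players[node] {
        let buffer = audio.createBuffer("PianoC1", type: "wav")
        player.scheduleBuffer(buffer, atTime: nil,
                              options: .Interrupts, completionHandler: nil)
        player.play()
    }
}
```

The same lookup can be done for `contact.bodyB.node` if both bodies should sound.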

I'm new to Swift and only have basic programming knowledge, so any suggestions/criticism are welcome.

Answer


I've been working with AVAudioEngine in SceneKit, trying to do something slightly different, but this should be what you're looking for:

https://developer.apple.com/library/mac/samplecode/AVAEGamingExample/Listings/AVAEGamingExample_AudioEngine_m.html

It explains the process:

1. Instantiate your own AVAudioEngine subclass
2. A method to load PCMBuffers for each AVAudioPlayer
3. Change your environment node's parameters to cater for large numbers of pinball objects

Edit: reworked, tested, and added some functionality:

1. Create a subclass of AVAudioEngine and name it AudioLayerEngine. This gives you access to the AVAudioUnit effects such as distortion, delay, pitch, and many of the other AudioUnit effects.
2. Initialise it by setting up some configuration for the audio engine, such as the rendering algorithm, and expose the AVAudioEnvironmentNode so you can play with the 3D position of your SCNNode or SKNode objects (if you are in 2D but want 3D effects).
3. Create some helper methods to load presets for each AudioUnit effect you want.
4. Create a helper method to create an audio player and add it to whatever node you want, as many times as you want, since the SCNNode accepts an .audioPlayers method that returns [AVAudioPlayer] or [SCNAudioPlayer].
5. Start playing.
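Steps 4 and 5 in code: if I read the SceneKit API correctly, an engine-attached AVAudioPlayerNode can be wrapped in an SCNAudioPlayer and attached to a node, so SceneKit spatializes its output at that node's 3D position. A rough sketch under the assumption that `audioEngine` is an instance of the AudioLayerEngine class pasted further down, `ballNode` is an SCNNode, and `buffer` is an already-loaded AVAudioPCMBuffer:

```swift
// Create and wire a dedicated player for this ball (assumed names:
// audioEngine, ballNode, buffer — none of these come from the original post).
let playerNode = AVAudioPlayerNode()
audioEngine.engine.attachNode(playerNode)
audioEngine.engine.connect(playerNode,
                           to: audioEngine.environment,
                           format: buffer.format)

// Hand the engine-attached node to SceneKit so the sound follows the node.
let scnPlayer = SCNAudioPlayer(AVAudioNode: playerNode)
ballNode.addAudioPlayer(scnPlayer)

playerNode.scheduleBuffer(buffer, atTime: nil, options: [], completionHandler: nil)
playerNode.play()
```

This is the per-node binding idea from step 4; each ball gets its own player, so collisions on different balls no longer steal each other's voice.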

I've pasted the entire class for reference so you can structure it as you see fit, but keep in mind that if you couple this with SceneKit or SpriteKit, you should use this audioEngine to manage all your sounds instead of SceneKit's internal AVAudioEngine. This means you instantiate it in your gameView during awakeFromNib.

import Foundation 
import SceneKit 
import AVFoundation 

class AudioLayerEngine: AVAudioEngine {
    var engine: AVAudioEngine!
    var environment: AVAudioEnvironmentNode!
    var outputBuffer: AVAudioPCMBuffer!
    var voicePlayer: AVAudioPlayerNode!
    var multiChannelEnabled: Bool!
    // audio effects
    let delay = AVAudioUnitDelay()
    let distortion = AVAudioUnitDistortion()
    let reverb = AVAudioUnitReverb()

    override init() {
        super.init()
        engine = AVAudioEngine()
        environment = AVAudioEnvironmentNode()

        engine.attachNode(self.environment)
        voicePlayer = AVAudioPlayerNode()
        engine.attachNode(voicePlayer)
        voicePlayer.volume = 1.0
        outputBuffer = loadVoice()
        wireEngine()
        startEngine()
        voicePlayer.scheduleBuffer(self.outputBuffer, completionHandler: nil)
        voicePlayer.play()
    }

    func startEngine() {
        do {
            try engine.start()
        } catch {
            print("error starting engine")
        }
    }

    func loadVoice() -> AVAudioPCMBuffer {
        let URL = NSURL(fileURLWithPath: NSBundle.mainBundle().pathForResource("art.scnassets/sounds/interface/test", ofType: "aiff")!)
        do {
            let soundFile = try AVAudioFile(forReading: URL, commonFormat: AVAudioCommonFormat.PCMFormatFloat32, interleaved: false)
            outputBuffer = AVAudioPCMBuffer(PCMFormat: soundFile.processingFormat, frameCapacity: AVAudioFrameCount(soundFile.length))
            do {
                try soundFile.readIntoBuffer(outputBuffer)
            } catch {
                print("something went wrong loading the sound file into the buffer")
            }
            print("returning buffer")
            return outputBuffer
        } catch {
            print("error reading the sound file")
        }
        return outputBuffer
    }

    func wireEngine() {
        loadDistortionPreset(AVAudioUnitDistortionPreset.MultiCellphoneConcert)
        engine.attachNode(distortion)
        engine.attachNode(delay)
        engine.connect(voicePlayer, to: distortion, format: self.outputBuffer.format)
        engine.connect(distortion, to: delay, format: self.outputBuffer.format)
        engine.connect(delay, to: environment, format: self.outputBuffer.format)
        engine.connect(environment, to: engine.outputNode, format: constructOutputFormatForEnvironment())
    }

    func constructOutputFormatForEnvironment() -> AVAudioFormat {
        let outputChannelCount = self.engine.outputNode.outputFormatForBus(1).channelCount
        let hardwareSampleRate = self.engine.outputNode.outputFormatForBus(1).sampleRate
        let environmentOutputConnectionFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: outputChannelCount)
        multiChannelEnabled = false
        return environmentOutputConnectionFormat
    }

    func loadDistortionPreset(preset: AVAudioUnitDistortionPreset) {
        distortion.loadFactoryPreset(preset)
    }

    func createPlayer(node: SCNNode) {
        let player = AVAudioPlayerNode()
        distortion.loadFactoryPreset(AVAudioUnitDistortionPreset.SpeechCosmicInterference)
        engine.attachNode(player)
        engine.attachNode(distortion)
        engine.connect(player, to: distortion, format: outputBuffer.format)
        engine.connect(distortion, to: environment, format: constructOutputFormatForEnvironment())
        player.renderingAlgorithm = AVAudio3DMixingRenderingAlgorithm.HRTF
        player.reverbBlend = 0.3
    }

}
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. – [from review](/review/low-quality-posts/11350414) –

@BeauNouvelle I've edited in the full tested code plus an extra function – triple7