I have a web page that loads three different audio files in a fixed order (each about one second long) and then merges them, one after another, into a single audio buffer. I am using XMLHttpRequest to load the files and the Web Audio API to play them.
To illustrate what I am trying to do, here is a sample code snippet:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContextApp = new AudioContext();
var buffer1 = audioContextApp.createBufferSource();
var buffer2 = audioContextApp.createBufferSource();
var buffer3 = audioContextApp.createBufferSource();
var request1 = new XMLHttpRequest();
request1.open('GET', URL_FIRST_SOUND, true);
request1.responseType = 'arraybuffer'; // decodeAudioData expects an ArrayBuffer
request1.onload = function() {
    var undecodedAudio = request1.response;
    audioContextApp.decodeAudioData(undecodedAudio, function(buffer) {
        buffer1.buffer = buffer;
    });
};
request1.send();
// Do the same with request2 and request3 to load the second and third sounds.
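Instead of repeating the XHR boilerplate three times, the loading step can also be sketched with `fetch` and the promise form of `decodeAudioData` (supported in modern browsers). `loadSound` is a hypothetical helper, and `URL_FIRST_SOUND` etc. stand for the same placeholder URLs used above:

```javascript
// Fetch a URL as an ArrayBuffer and decode it into an AudioBuffer.
function loadSound(ctx, url) {
    return fetch(url)
        .then(function (response) { return response.arrayBuffer(); })
        .then(function (data) { return ctx.decodeAudioData(data); });
}

// Promise.all preserves order, so decodedBuffers[0..2] match the URL order
// even if the downloads finish out of order:
// Promise.all([URL_FIRST_SOUND, URL_SECOND_SOUND, URL_THIRD_SOUND]
//     .map(function (url) { return loadSound(audioContextApp, url); }))
//     .then(function (decodedBuffers) { /* all three are ready here */ });
```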
Now I don't know how to properly combine these three audio buffers into one so that the user can play the merged audio.
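One way to merge the decoded buffers is to copy their channel data, one after another, into a single new `AudioBuffer`. The sketch below is not from the question: `concatBuffers` is a hypothetical helper, and it assumes all three buffers share the same sample rate and channel count:

```javascript
// Concatenate an array of decoded AudioBuffers into one new buffer.
// Assumes every buffer has the same numberOfChannels and sampleRate.
function concatBuffers(ctx, buffers) {
    var numChannels = buffers[0].numberOfChannels;
    var totalLength = buffers.reduce(function (sum, b) { return sum + b.length; }, 0);
    var output = ctx.createBuffer(numChannels, totalLength, buffers[0].sampleRate);
    for (var ch = 0; ch < numChannels; ch++) {
        var outData = output.getChannelData(ch);
        var offset = 0;
        for (var i = 0; i < buffers.length; i++) {
            // Copy this buffer's samples right after the previous one's.
            outData.set(buffers[i].getChannelData(ch), offset);
            offset += buffers[i].length;
        }
    }
    return output;
}

// Usage: play the merged result through a single source node.
// var merged = concatBuffers(audioContextApp,
//     [buffer1.buffer, buffer2.buffer, buffer3.buffer]);
// var source = audioContextApp.createBufferSource();
// source.buffer = merged;
// source.connect(audioContextApp.destination);
// source.start(0);
```

An alternative, if you don't need a single combined buffer, is to keep three separate source nodes and schedule them back to back with `source.start(startTime)`, offsetting each start time by the duration of the previous buffer.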