
How do I create a blob for a websocket in node.js?

I am trying to use IBM's websocket implementation of their speech-to-text service. Right now I cannot figure out how to send a .wav file over the connection. I know I need to turn it into a blob, but I don't know how to do that. At the moment I am getting the error:

You must pass a Node Buffer object to WebSocketConnec

- or -

Could not read a WAV header from a stream of 0 bytes

...depending on what I try to pass to the service. It should be noted that I am sending the start message correctly and the service does enter its listening state.
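For reference, here is a minimal sketch of the raw-websocket approach, assuming the websocket npm package and an already-open WebSocketConnection (connection) to the service endpoint; the file name and the exact start/stop messages are illustrative. The key point is that sendBytes() expects a Node Buffer (there is no browser-style blob in node), and fs.readFileSync() returns exactly that:

var fs = require('fs'); 

// read the whole wav file into a Node Buffer 
var wav = fs.readFileSync('audio-to-recognize.wav'); 

// tell the service what is coming, then send the raw audio bytes 
connection.sendUTF(JSON.stringify({ action: 'start', 'content-type': 'audio/wav' })); 
connection.sendBytes(wav); 
connection.sendUTF(JSON.stringify({ action: 'stop' })); 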


Could you add the piece of code that produces the error you mention above? –

Answer


As of v1.0 (still in beta), the watson-developer-cloud npm module supports websockets.

npm install watson-developer-cloud 

Then, to recognize your wav file:

var watson = require('watson-developer-cloud'); 
var fs = require('fs'); 

var speech_to_text = watson.speech_to_text({ 
    username: 'INSERT YOUR USERNAME FOR THE SERVICE HERE', 
    password: 'INSERT YOUR PASSWORD FOR THE SERVICE HERE', 
    version: 'v1', 
}); 


// create the stream 
var recognizeStream = speech_to_text.createRecognizeStream({ content_type: 'audio/wav' }); 

// pipe in some audio 
fs.createReadStream('audio-to-recognize.wav').pipe(recognizeStream); 

// and pipe out the transcription 
recognizeStream.pipe(fs.createWriteStream('transcription.txt')); 


// listen for 'data' events for just the final text 
// listen for 'results' events to get the raw JSON with interim results, timings, etc. 

recognizeStream.setEncoding('utf8'); // to get strings instead of Buffers from `data` events 

['data', 'results', 'error', 'connection-close'].forEach(function(eventName) { 
    recognizeStream.on(eventName, console.log.bind(console, eventName + ' event: ')); 
}); 
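
If you would rather collect the transcript in memory instead of piping it to a file, the same stream can be consumed with ordinary 'data'/'end' events; a small sketch (this works because setEncoding('utf8') above makes the data chunks plain strings):

var transcript = ''; 
recognizeStream.on('data', function(chunk) { transcript += chunk; }); 
recognizeStream.on('end', function() { console.log('final transcript:', transcript); }); 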

See more examples here.
