Start running the SpeechSynthesis API on my Android and Safari devices

2017-07-18

I am trying to build a web app with the SpeechSynthesis API. After I click a Start button, the program should run and begin listening to the user on my Android and iOS devices. The user can say anything to trigger the program; after that, I want to play an audio file every three seconds. Below is my code so far. Is my logic wrong? After clicking and having it hear any sound, I cannot get the program to start.

My other question is about support: the SpeechSynthesis API is supposed to work on Android and iOS devices, but when I look at some events such as the 'soundstart' event, it is not supported on Safari Mobile. What is the relationship between them? I am very confused. The SpeechRecognition API is only supported in Chrome, but don't I need to use some events like soundstart?

Thank you very much for your help. I really appreciate it.

<p id="msg" align="center"></p> 

    <script> 
     var utterance = new SpeechSynthesisUtterance("Hello"); 
     //window.speechSynthesis.speak(utterance); 

     var supportMsg = document.getElementById('msg'); 

     if ('speechSynthesis' in window) 
     { 
      supportMsg.innerHTML = 'Your browser <strong>supports</strong> speech synthesis.'; 
      console.log("Hi"); 

      utterance.onstart = function(event) 
      { 
       console.log('Hhhh') 
      }; 


      var playList = ["1_hello", "2_how_old", "3_what_did_you_make"]; 
      var dir = "sound/"; 
      var extention = ".wav"; 
      audio.src = dir + playList[audioIndex] + extention; 
      audio.load(); 

      var audioIndex = 0; 
      setTimeout(function(){audio.play();}, 1000); 


      window.speechSynthesis.speak(utterance); 

     } 
     else 
     { 



      supportMsg.innerHTML = 'Sorry your browser <strong>does not support</strong> speech synthesis.<br>Try this in <a href="https://www.google.co.uk/intl/en/chrome/browser/canary.html">Chrome Canary</a>.'; 
     } 

     //window.speechSynthesis(utterance); 

    </script> 
    <div class="container"> 
     <button id="runProgram" onclick='utterance.onstart();' 
     class="runProgram-button">Start</button> 
    </div> 
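
To make the intended flow clearer, here is a rough sketch of what I am trying to achieve. The listening part would use the SpeechRecognition API (the prefixed webkitSpeechRecognition constructor, Chrome only) rather than SpeechSynthesis; this is only an illustration of the idea, not tested on my mobile devices, and autoplay restrictions may still block the audio:

// Sketch only: start listening when the button is clicked,
// then play a clip from the playlist every three seconds.
var playList = ["1_hello", "2_how_old", "3_what_did_you_make"];
var audioIndex = 0;
var audio = new Audio();
var loopStarted = false;

function startProgram() {
    if (!('webkitSpeechRecognition' in window)) {
        console.log('SpeechRecognition is not supported in this browser');
        return;
    }
    var recognition = new webkitSpeechRecognition();
    recognition.onresult = function(event) {
        console.log('Heard: ' + event.results[0][0].transcript);
        if (!loopStarted) {            // any speech at all starts the loop, once
            loopStarted = true;
            setInterval(playNextClip, 3000);
        }
    };
    recognition.start();
}

function playNextClip() {
    audio.src = "sound/" + playList[audioIndex % playList.length] + ".wav";
    audioIndex++;
    audio.play();
}

// <button onclick="startProgram()">Start</button>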

Answer


Does this work for you?

function playAudio() { 
    var msg = new SpeechSynthesisUtterance('Help me with this code please?'); 
    msg.pitch = 0; 
    msg.rate = .6; 
    window.speechSynthesis.speak(msg); 

    var msg = new SpeechSynthesisUtterance(); 
    var voices = window.speechSynthesis.getVoices(); 
    msg.voice = voices[10]; // Note: some voices don't support altering params 
    msg.voiceURI = 'native'; 
    msg.volume = 1; // 0 to 1 
    msg.rate = 1.2; // 0.1 to 10 
    msg.pitch = 2; // 0 to 2 
    msg.text = 'Sure. This code plays "Hello World" for you'; 
    msg.lang = 'en-US'; 

    msg.onend = function(e) { 
        var msg1 = new SpeechSynthesisUtterance('Now code plays audios for you'); 
        msg1.voice = speechSynthesis.getVoices().filter(function(voice) { return voice.name == 'Whisper'; })[0]; 
        msg1.pitch = 2; // 0 to 2 
        msg1.rate = 1.2; // 0.1 to 10 
        // start your audio loop. 
        speechSynthesis.speak(msg1); 
        console.log('Finished in ' + e.elapsedTime + ' seconds.'); 
    }; 

    speechSynthesis.speak(msg); 
}
<div class="container"> 
    <button id="runProgram" onclick='playAudio();' class="runProgram-button">Start</button> 
</div>
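
If you also want the three-second audio loop from your question, this is roughly what could go where the "// start your audio loop." comment sits. The file names come from your question; the rest is just a sketch and I have not tested it on mobile:

function startAudioLoop() {
    // Cycle through the clips from the question, one every three seconds.
    var playList = ["1_hello", "2_how_old", "3_what_did_you_make"];
    var audioIndex = 0;
    var audio = new Audio();
    setInterval(function() {
        audio.src = "sound/" + playList[audioIndex % playList.length] + ".wav";
        audioIndex++;
        audio.play();
    }, 3000);
}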


That's what I thought, even if it sounds silly. It's a good hint. Thanks a lot, Alex. I will try to make this API work the way we did for annyang. Do you think I can do onClick='window.speechSynthesis.soundstart()'? Also, what is the relationship between the SpeechSynthesis and SpeechRecognition APIs? Because SpeechSynthesis supports Android and Safari, but SpeechRecognition only supports Chrome. – Johnny


No. 'soundstart' is an event that fires when it starts speaking, just like 'onend' fires when it finishes. They are not methods you call directly. 'speechSynthesis.speak' is what triggers the start event when speaking begins. Both APIs are **experimental** and still in draft status; Google is simply further ahead than the others. –
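
To make that concrete, a minimal illustration (untested, but the handler names are the standard ones):

// Events are handlers the browser fires; you assign them, you never call them yourself.
var u = new SpeechSynthesisUtterance('Hello');
u.onstart = function() { console.log('started speaking'); };           // fired by speak()
u.onend = function(e) { console.log('done after ' + e.elapsedTime); }; // fired when speech ends
window.speechSynthesis.speak(u);  // the only call you make yourself
// 'soundstart' works the same way, but it belongs to SpeechRecognition, not to speechSynthesis.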


Thanks for your reply, Alexander. I get it now. I was wondering whether I could adjust my audio files, such as the pitch or rate, instead of using it to speak. – Johnny
