2016-12-27

How can I detect whether voice dictation was used for a UITextView, i.e. whether the microphone button on the keyboard was tapped?


[ios: How to detect if voice dictation was used for UITextField, or the microphone button was tapped on the keyboard](http://stackoverflow.com/questions/32652775/ios-how-to-detect-if-voice-dictation-was-used-for-uitextfield-or-microphone-bu) – Saavaj


@Saavaj, please read my question carefully: I mentioned UITextView, not UITextField. A UITextView is different from a UITextField. – Nithya
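The approach from the linked UITextField answer should carry over to UITextView, since dictation is reported through the keyboard's input mode rather than through the text control itself. A minimal sketch, assuming a `textView` outlet (the outlet name is hypothetical): observe the current-input-mode change notification and check whether the text view's input mode reports `"dictation"` as its primary language.

```swift
import UIKit

class DictationDetectingViewController: UIViewController {

    @IBOutlet weak var textView: UITextView!  // hypothetical outlet

    override func viewDidLoad() {
        super.viewDidLoad()
        // Fires whenever the keyboard's input mode changes, including
        // when the user taps the microphone key to start dictation.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(inputModeDidChange),
            name: .UITextInputCurrentInputModeDidChange,
            object: nil)
    }

    @objc private func inputModeDidChange(_ notification: Notification) {
        // While dictating, the responder's input mode reports
        // "dictation" as its primary language.
        if textView.textInputMode?.primaryLanguage == "dictation" {
            print("Dictation started in the text view")
        }
    }
}
```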

Answer

When the microphone key on the keyboard is tapped, you can use the Speech framework, which uses Siri's speech recognition. First import the Speech framework, then conform to its delegate. Here is a Swift version; it may help.

    import UIKit
    import AVFoundation
    import Speech

    class ViewController: UIViewController, SFSpeechRecognizerDelegate {

        private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
        private var recognitionTask: SFSpeechRecognitionTask?
        private let audioEngine = AVAudioEngine()

        // SFSpeechRecognizer(locale:) is failable, so this is optional.
        private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))

        override func viewDidLoad() {
            super.viewDidLoad()
            speechRecognizer?.delegate = self
            self.authorizeSpeech()
        }

        private func authorizeSpeech() {
            SFSpeechRecognizer.requestAuthorization { (authStatus) in

                var isButtonEnabled = false

                switch authStatus {
                case .authorized:
                    isButtonEnabled = true

                case .denied:
                    isButtonEnabled = false
                    print("User denied access to speech recognition")

                case .restricted:
                    isButtonEnabled = false
                    print("Speech recognition restricted on this device")

                case .notDetermined:
                    isButtonEnabled = false
                    print("Speech recognition not yet authorized")
                }

                OperationQueue.main.addOperation {
                    print(isButtonEnabled) // whether speech recognition is authorized
                }
            }
        }
    }

Now add these custom usage-description messages to your Info.plist:

    <key>NSMicrophoneUsageDescription</key>
    <string>Your microphone will be used to record your speech when you press the Start Recording button.</string>

    <key>NSSpeechRecognitionUsageDescription</key>
    <string>Speech recognition will be used to determine which words you speak into this device microphone.</string>

Now create a startRecording() function:

    func startRecording() {

        if recognitionTask != nil {
            recognitionTask?.cancel()
            recognitionTask = nil
        }

        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryRecord)
            try audioSession.setMode(AVAudioSessionModeMeasurement)
            try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        guard let inputNode = audioEngine.inputNode else {
            fatalError("Audio engine has no input node")
        }

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        }

        recognitionRequest.shouldReportPartialResults = true

        // speechRecognizer is optional because SFSpeechRecognizer(locale:) is failable.
        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { (result, error) in

            var isFinal = false

            if let result = result {
                // Replace your_text_view with your UITextView outlet.
                self.your_text_view.text = result.bestTranscription.formattedString
                isFinal = result.isFinal
            }

            if error != nil || isFinal {
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        }

        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }
    }

Finally, conform to the delegate; this method calls startRecording() when the recognizer becomes available:

    func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            startRecording()
        } else {
            print("Speech recognition is not available")
        }
    }
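To finish a dictation session, stop the engine and end the audio request so the recognizer delivers its final result. A minimal sketch; `stopRecording()` is a hypothetical helper added to the same view controller:

```swift
func stopRecording() {
    if audioEngine.isRunning {
        audioEngine.stop()
        // Ending the audio signals the recognizer to deliver its
        // final (isFinal == true) result to the task's handler.
        recognitionRequest?.endAudio()
    }
}
```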

Thanks @Umesh Verma. I need the Objective-C version; could you please share it? It would be very useful for me. – Nithya
