Writing an audio file to a WebSocket with the IBM Watson Android SDK

I read an article on the IBM developer forums in which the Android SDK reads bytes from the mic recording and writes them to a WebSocket. I am now trying to read bytes from an audio file in storage and write them to the WebSocket instead. How should I do this? So far I have:
public class AudioCaptureThread extends Thread {
    private static final String TAG = "AudioCaptureThread";
    private boolean mStop = false;
    private boolean mStopped = false;
    private int mSamplingRate = -1;
    private IAudioConsumer mIAudioConsumer = null;

    // the thread receives high priority because it needs to do real time audio capture
    // THREAD_PRIORITY_URGENT_AUDIO = "Standard priority of the most important audio threads"
    public AudioCaptureThread(int iSamplingRate, IAudioConsumer IAudioConsumer) {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        mSamplingRate = iSamplingRate;
        mIAudioConsumer = IAudioConsumer;
    }

    // once the thread is started it runs nonstop until it is stopped from the outside
    @Override
    public void run() {
        File path = Activity.getContext.getExternalFilesDir(null);
        File file = new File(path, "whatstheweatherlike.wav");
        int length = (int) file.length();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] b = new byte[length];
        FileInputStream in = null;
        try {
            in = new FileInputStream(file);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        try {
            for (int readNum; (readNum = in.read(b)) != -1; ) {
                bos.write(b, 0, readNum);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        byte[] bytes = bos.toByteArray();
        mIAudioConsumer.consume(bytes);
    }
}
However, Activity.getContext is not recognized. I can convert the file to bytes in MainActivity instead, but then how do I write those bytes to the WebSocket? Am I on the right track, or is this approach wrong altogether? And if it is workable, how do I fix the problem?
Any help is appreciated!
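For the file-to-bytes step on its own, the manual FileInputStream/ByteArrayOutputStream loop above can be replaced with a single java.nio call. A minimal plain-Java sketch (the class and method names FileToBytes and readAudioFile are illustrative, not part of the SDK; on Android the Path would come from getExternalFilesDir on a real Context):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileToBytes {
    // Read the whole audio file into memory in one call, replacing
    // the manual FileInputStream / ByteArrayOutputStream loop.
    static byte[] readAudioFile(Path path) throws IOException {
        return Files.readAllBytes(path);
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for whatstheweatherlike.wav: a temp file with 4 bytes.
        Path tmp = Files.createTempFile("whatstheweatherlike", ".wav");
        Files.write(tmp, new byte[]{0x52, 0x49, 0x46, 0x46}); // "RIFF"
        byte[] bytes = readAudioFile(tmp);
        System.out.println(bytes.length); // prints 4
    }
}
```

This also sidesteps the partially-handled null stream in the original run(): if the file is missing, readAudioFile simply throws rather than continuing with in == null.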
Thanks so much! Yes, I was using the original AudioCaptureThread class from the Android SDK (https://github.com/watson-developer-cloud/speech-android-sdk/blob/master/speech-android-wrapper/src/main/java/com/ibm/watson/developer_cloud/android/speech_to_text/v1/audio/AudioCaptureThread.java) as a reference. Passing the context as a parameter in the constructor did the job. – skbrhmn
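On the WebSocket side, Watson's streaming speech-to-text interface consumes audio as a sequence of binary frames rather than one large message, so the byte array is usually sent in fixed-size chunks. A plain-Java sketch of just the chunking step (the AudioChunker name and the 2048-byte frame size are assumptions, not the SDK's API; the actual send would go through the SDK's uploader or a WebSocket client, one binary frame per chunk):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AudioChunker {
    // Split an audio buffer into fixed-size chunks; each chunk would then
    // be written to the websocket as one binary frame.
    static List<byte[]> chunk(byte[] audio, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < audio.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, audio.length);
            chunks.add(Arrays.copyOfRange(audio, off, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] audio = new byte[5000]; // stand-in for the WAV bytes
        List<byte[]> frames = chunk(audio, 2048);
        System.out.println(frames.size());        // prints 3
        System.out.println(frames.get(2).length); // prints 904 (the short last frame)
    }
}
```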