
AudioRecord and AudioTrack echo

I am streaming microphone audio between two devices. Everything works, but I get a bad echo.

Here is what I am doing.

The read (playback) thread:

    int sampleFreq = 22050;
    int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
    int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);

    AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
    atrack.play();

    int bytes;
    byte[] buffer = new byte[minBuffer];
    while (true) {
        try {
            // Read from the InputStream
            bytes = mmInStream.read(buffer);
            atrack.write(buffer, 0, buffer.length);
            atrack.flush();
        } catch (IOException e) {
            Log.e(TAG, "disconnected", e);
            break;
        }
    }

And the record thread:

    int sampleRate = 22050;
    int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
    int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
    int buffersize = 2 * AudioTrack.getMinBufferSize(sampleRate, channelMode, audioFormat);

    AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);

    final byte[] buffer = new byte[buffersize];
    arec.startRecording();

    while (true) {
        arec.read(buffer, 0, buffersize);
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    mOutputStream.write(buffer);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }

Am I doing something wrong?

Answers

Answer (score 3)

You need echo-cancellation logic. Here is what I did to remove the echo on my ARMv5 (WM8650) device running Android 2.2:

  1. I wrapped Speex with JNI and called its echo-processing routines before sending PCM frames to the encoder. The echo was not cancelled, no matter which Speex settings I tried.

  2. Since Speex is very sensitive to the delay between the playback frame and the echo frame, I implemented a queue and buffered all packets sent to the AudioTrack. The queue size should be roughly equal to the size of the AudioTrack's internal buffer, so that a packet is handed to echo_playback at roughly the moment the AudioTrack moves it from its internal buffer to the sound card (see the sketch after this list). This removed the delay problem, but the echo was still not cancelled.

  3. I wrapped the WebRtc echo-cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still there, but the library was clearly trying to cancel it.

  4. I combined it with the buffering technique described in point 2, and it finally started working, although the delay has to be tuned for each device. Also note that WebRtc ships both a mobile and a full version of its echo canceller. The full version slows the processor down significantly and should probably only be run on ARMv7; the mobile version works, but with lower quality.

I hope this helps someone.
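To illustrate point 2, here is a minimal sketch of such a delay-matching queue in plain Java. The class name, the frame handling, and the playback() call (standing in for the speex_echo_playback() JNI wrapper) are illustrative assumptions; only the idea of delaying the reference frames by roughly one AudioTrack buffer comes from the answer itself:

    import java.util.LinkedList;

    // Buffers every frame written to the AudioTrack and releases it again once
    // roughly one AudioTrack-buffer's worth of newer audio has been queued, so
    // that the frame handed to the echo canceller lines up with what the sound
    // card is actually playing.
    class PlaybackReferenceQueue {
        private final LinkedList<short[]> queue = new LinkedList<short[]>();
        private final int targetDepth; // AudioTrack internal buffer size, in frames

        PlaybackReferenceQueue(int audioTrackBufferBytes, int frameBytes) {
            this.targetDepth = Math.max(1, audioTrackBufferBytes / frameBytes);
        }

        // Call right after audioTrack.write(frame, ...). Returns the frame that
        // is approximately being played right now, or null while the queue is
        // still filling up to the target depth.
        synchronized short[] onFrameWritten(short[] frame) {
            queue.addLast(frame.clone());
            return queue.size() > targetDepth ? queue.removeFirst() : null;
        }
    }

In the playback loop, each frame written to the track is also pushed through the queue, and whatever falls out is fed to the canceller as the far-end reference:

    short[] reference = refQueue.onFrameWritten(frame);
    if (reference != null) {
        echoCanceller.playback(reference); // hypothetical JNI wrapper method
    }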


Could you elaborate a bit on the second point... – aProgrammer

Answer (score 2)

It could be this:

bytes = mmInStream.read(buffer);      
atrack.write(buffer, 0, buffer.length); 

If the buffer is still full from a previous call and the new read is incomplete (so bytes < buffer.length), you keep replaying the stale tail of the buffer, because you write buffer.length bytes to the track instead of bytes.
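A minimal fix along these lines, reusing the question's variables: write only the bytes actually read in this iteration, and treat -1 as end of stream.

    int bytes = mmInStream.read(buffer);
    if (bytes == -1) {
        break; // end of stream
    }
    // Write only what was read, not the whole (possibly stale) buffer.
    atrack.write(buffer, 0, bytes);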

Answer (score 0)

Here is my code:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketException;
import java.net.UnknownHostException;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.util.Log;

public class AudioCall {

private static final String LOG_TAG = "AudioCall"; 
private static final int SAMPLE_RATE = 8000; // Hertz 
private static final int SAMPLE_INTERVAL = 20; // Milliseconds 
private static final int SAMPLE_SIZE = 2; // Bytes 
private static final int BUF_SIZE = SAMPLE_INTERVAL * SAMPLE_INTERVAL * SAMPLE_SIZE * 2; //Bytes 
private InetAddress address; // Address to call 
private int port = 50000; // Port the packets are addressed to 
private boolean mic = false; // Enable mic? 
private boolean speakers = false; // Enable speakers? 

public AudioCall(InetAddress address) { 

    this.address = address; 
} 

public void startCall() { 

    startMic(); 
    startSpeakers(); 
} 

public void endCall() { 

    Log.i(LOG_TAG, "Ending call!"); 
    muteMic(); 
    muteSpeakers(); 
} 

public void muteMic() { 

    mic = false; 
} 

public void muteSpeakers() { 

    speakers = false; 
} 

public void startMic() { 
    // Creates the thread for capturing and transmitting audio 
    mic = true; 
    Thread thread = new Thread(new Runnable() { 

     @Override 
     public void run() { 
      // Create an instance of the AudioRecord class 
      AudioRecord audioRecorder = new AudioRecord (MediaRecorder.AudioSource.MIC, SAMPLE_RATE, 
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 
        AudioRecord.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)*10); 
      int bytes_read = 0; 
      int bytes_sent = 0; 
      byte[] buf = new byte[BUF_SIZE]; 
      try { 
       // Create a socket and start recording 
       Log.i(LOG_TAG, "Packet destination: " + address.toString()); 
       DatagramSocket socket = new DatagramSocket(); 
       audioRecorder.startRecording(); 
       while(mic) { 
        // Capture audio from the mic and transmit it 
        bytes_read = audioRecorder.read(buf, 0, BUF_SIZE); 
        DatagramPacket packet = new DatagramPacket(buf, bytes_read, address, port); 
        socket.send(packet); 
        bytes_sent += bytes_read; 
        Log.i(LOG_TAG, "Total bytes sent: " + bytes_sent); 
        Thread.sleep(SAMPLE_INTERVAL, 0); 
       } 
       // Stop recording and release resources 
       audioRecorder.stop(); 
       audioRecorder.release(); 
       socket.disconnect(); 
       socket.close(); 
       mic = false; 
       return; 
      } 
      catch(InterruptedException e) { 

       Log.e(LOG_TAG, "InterruptedException: " + e.toString()); 
       mic = false; 
      } 
      catch(SocketException e) { 

       Log.e(LOG_TAG, "SocketException: " + e.toString()); 
       mic = false; 
      } 
      catch(UnknownHostException e) { 

       Log.e(LOG_TAG, "UnknownHostException: " + e.toString()); 
       mic = false; 
      } 
      catch(IOException e) { 

       Log.e(LOG_TAG, "IOException: " + e.toString()); 
       mic = false; 
      } 
     } 
    }); 
    thread.start(); 
} 

public void startSpeakers() { 
    // Creates the thread for receiving and playing back audio 
    if(!speakers) { 

     speakers = true; 
     Thread receiveThread = new Thread(new Runnable() { 

      @Override 
      public void run() { 
       // Create an instance of AudioTrack, used for playing back audio 
       Log.i(LOG_TAG, "Receive thread started. Thread id: " + Thread.currentThread().getId()); 
       AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, 
         AudioFormat.ENCODING_PCM_16BIT, BUF_SIZE, AudioTrack.MODE_STREAM); 
       track.play(); 
       try { 
        // Define a socket to receive the audio 
        DatagramSocket socket = new DatagramSocket(port); 
        byte[] buf = new byte[BUF_SIZE]; 
        while(speakers) { 
         // Play back the audio received from packets 
         DatagramPacket packet = new DatagramPacket(buf, BUF_SIZE); 
         socket.receive(packet); 
         Log.i(LOG_TAG, "Packet received: " + packet.getLength()); 
         track.write(packet.getData(), 0, packet.getLength()); // play only the bytes actually received
        } 
        // Stop playing back and release resources 
        socket.disconnect(); 
        socket.close(); 
        track.stop(); 
        track.flush(); 
        track.release(); 
        speakers = false; 
        return; 
       } 
       catch(SocketException e) { 

        Log.e(LOG_TAG, "SocketException: " + e.toString()); 
        speakers = false; 
       } 
       catch(IOException e) { 

        Log.e(LOG_TAG, "IOException: " + e.toString()); 
        speakers = false; 
       } 
      } 
     }); 
     receiveThread.start(); 
    } 
    } 
} 
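For reference, a hypothetical way to drive this class (the peer address is a placeholder, and getByName() must not run on the main thread):

    try {
        AudioCall call = new AudioCall(InetAddress.getByName("192.168.1.42"));
        call.startCall();
        // ... talk ...
        call.endCall();
    } catch (UnknownHostException e) {
        Log.e("AudioCall", "bad peer address", e);
    }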

I also have the same problem with two Android devices connected to each other. The devices start producing an echo; it would be very helpful if you could share some links that address the problem.
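One avenue that may help on newer hardware: devices running Android 4.1+ (API 16) expose the platform echo canceller, and recording from the VOICE_COMMUNICATION source (API 11+) selects a voice-processing input path. A minimal sketch, assuming the effect is actually available on the device:

    // Record from the voice-communication source so the platform's
    // voice-processing path (which may include AEC) is used.
    int minBuf = AudioRecord.getMinBufferSize(8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_COMMUNICATION,
        8000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBuf * 10);

    // Attach the built-in echo canceller to this recording session, if any
    // (android.media.audiofx.AcousticEchoCanceler, API 16+).
    if (AcousticEchoCanceler.isAvailable()) {
        AcousticEchoCanceler aec =
                AcousticEchoCanceler.create(recorder.getAudioSessionId());
        if (aec != null) {
            aec.setEnabled(true);
        }
    }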


This does not provide an answer to the question. You can [search for similar questions](https://stackoverflow.com/search), or refer to the related and linked questions on the right-hand side of the page to find an answer. If you have a related but different question, [ask a new question](https://stackoverflow.com/questions/ask) and include a link to this one to help provide context. See: [Ask questions, get answers, no distractions](https://stackoverflow.com/tour). – Dwhitz
