
Android AudioRecord to server over UDP: playback issues

I'm trying to make a simple Android app that streams live microphone audio to a server for playback. The resulting playback sounds strange, with large gaps in the audio. Does anyone know what I'm doing wrong?

EDIT: Solved. It turns out my mistake was assuming that every incoming buffer would be completely full.

Here is my Activity:

public class MainActivity extends Activity { 
    private static String TAG = "AudioClient"; 

    // the server information 
    private static final String SERVER = "xx.xx.xx.xx"; 
    private static final int PORT = 50005; 

    // the audio recording options 
    private static final int RECORDING_RATE = 44100; 
    private static final int CHANNEL = AudioFormat.CHANNEL_IN_MONO; 
    private static final int FORMAT = AudioFormat.ENCODING_PCM_16BIT; 

    // the button the user presses to send the audio stream to the server 
    private Button sendAudioButton; 

    // the audio recorder 
    private AudioRecord recorder; 

    // the minimum buffer size needed for audio recording 
    private static int BUFFER_SIZE = AudioRecord.getMinBufferSize(
      RECORDING_RATE, CHANNEL, FORMAT); 

    // are we currently sending audio data 
    private volatile boolean currentlySendingAudio = false; // volatile: written on the UI thread, read from the streaming thread 

    @Override 
    public void onCreate(Bundle savedInstanceState) { 
     super.onCreate(savedInstanceState); 
     setContentView(R.layout.activity_main); 

     Log.i(TAG, "Creating the Audio Client with minimum buffer of " 
       + BUFFER_SIZE + " bytes"); 

     // set up the button 
     sendAudioButton = (Button) findViewById(R.id.btnStart); 
     sendAudioButton.setOnTouchListener(new OnTouchListener() { 

      @Override 
      public boolean onTouch(View v, MotionEvent event) { 

       switch (event.getAction()) { 

       case MotionEvent.ACTION_DOWN: 
        startStreamingAudio(); 
        break; 

       case MotionEvent.ACTION_UP: 
        stopStreamingAudio(); 
        break; 
       } 

       return false; 
      } 
     }); 
    } 

    private void startStreamingAudio() { 

     Log.i(TAG, "Starting the audio stream"); 
     currentlySendingAudio = true; 
     startStreaming(); 
    } 

    private void stopStreamingAudio() { 

     Log.i(TAG, "Stopping the audio stream"); 
     currentlySendingAudio = false; 
     recorder.release(); 
    } 

    private void startStreaming() { 

     Log.i(TAG, "Starting the background thread to stream the audio data"); 

     Thread streamThread = new Thread(new Runnable() { 

      @Override 
      public void run() { 
       try { 

        Log.d(TAG, "Creating the datagram socket"); 
        DatagramSocket socket = new DatagramSocket(); 

        Log.d(TAG, "Creating the buffer of size " + BUFFER_SIZE); 
        byte[] buffer = new byte[BUFFER_SIZE]; 

        Log.d(TAG, "Connecting to " + SERVER + ":" + PORT); 
        final InetAddress serverAddress = InetAddress 
          .getByName(SERVER); 
        Log.d(TAG, "Connected to " + SERVER + ":" + PORT); 

        Log.d(TAG, "Creating the reuseable DatagramPacket"); 
        DatagramPacket packet; 

        Log.d(TAG, "Creating the AudioRecord"); 
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 
          RECORDING_RATE, CHANNEL, FORMAT, BUFFER_SIZE * 10); 

        Log.d(TAG, "AudioRecord recording..."); 
        recorder.startRecording(); 

        while (currentlySendingAudio == true) { 

         // read the data into the buffer 
         int read = recorder.read(buffer, 0, buffer.length); 

         // place contents of buffer into the packet 
         packet = new DatagramPacket(buffer, read, 
          serverAddress, PORT); 

         // send the packet 
         socket.send(packet); 
        } 

        Log.d(TAG, "AudioRecord finished recording"); 

       } catch (Exception e) { 
        Log.e(TAG, "Exception: " + e); 
       } 
      } 
     }); 

     // start the thread 
     streamThread.start(); 
    } 
} 

Here is my server-side code:

class Server { 

    AudioInputStream audioInputStream; 
    static AudioInputStream ais; 
    static AudioFormat format; 
    static boolean status = true; 
    static int port = 50005; 
    static int sampleRate = 11025; 
    static int bufferSize = 9728; 

    static Long lastTime; 
    static long totalBytesReceived = 0L; 

    private static final int audioStreamBufferSize = bufferSize * 20; 
    static byte[] audioStreamBuffer = new byte[audioStreamBufferSize]; 
    private static int audioStreamBufferIndex = 0; 

    public static void main(String args[]) throws Exception { 

     Log("Starting the AudioServer..."); 

     Log("Creating the datagram socket on port " + port + "..."); 
     DatagramSocket serverSocket = new DatagramSocket(null); 
     serverSocket.setReuseAddress(true); 
     serverSocket.bind(new InetSocketAddress(port)); 

     Log("Creating the buffer to hold the received data of size " 
       + bufferSize + "..."); 
     byte[] receiveData = new byte[bufferSize]; 

     Log("Setting the audio rate to " + sampleRate + "hz..."); 
     format = new AudioFormat(sampleRate, 16, 1, true, false); 

     Log("Ready to receive audio data"); 
     while (status == true) { 

      DatagramPacket receivePacket = new DatagramPacket(receiveData, 
        receiveData.length); 
      serverSocket.receive(receivePacket); 
      bufferAudioForPlayback(receivePacket.getData(), 
        receivePacket.getOffset(), receivePacket.getLength()); 
     } 

     serverSocket.close(); 
    } 

    private static void bufferAudioForPlayback(byte[] buffer, int offset, 
      int length) { 

     byte[] actualBytes = new byte[length]; 

     for (int i = 0; i < length; i++) { 
      actualBytes[i] = buffer[i]; 
     } 

     for (byte sample : actualBytes) { 

      int percentage = (int) (((double) audioStreamBufferIndex/(double) audioStreamBuffer.length) * 100.0); 
      Log("buffer is " + percentage + "% full"); 

      audioStreamBuffer[audioStreamBufferIndex] = sample; 
      audioStreamBufferIndex++; 
      Log("Buffer " + audioStreamBufferIndex + "/" 
        + audioStreamBuffer.length + " " + percentage); 

      if (audioStreamBufferIndex == audioStreamBuffer.length - 1) { 
       toSpeaker(audioStreamBuffer); 
       audioStreamBufferIndex = 0; 
       System.exit(0); 
      } 
     } 
    } 

    private static void Log(String log) { 
     System.out.println(log); 
    } 

    public static void toSpeaker(byte soundbytes[]) { 
     try { 

      DataLine.Info dataLineInfo = new DataLine.Info(
        SourceDataLine.class, format); 
      SourceDataLine sourceDataLine = (SourceDataLine) AudioSystem 
        .getLine(dataLineInfo); 

      sourceDataLine.open(format); 

      FloatControl volumeControl = (FloatControl) sourceDataLine 
        .getControl(FloatControl.Type.MASTER_GAIN); 
      volumeControl.setValue(100.0f); 

      sourceDataLine.start(); 
      sourceDataLine.open(format); 
      sourceDataLine.start(); 
      sourceDataLine.write(soundbytes, 0, soundbytes.length); 
      sourceDataLine.drain(); 
      sourceDataLine.close(); 
     } catch (Exception e) { 
      System.out.println("Error with audio playback: " + e); 
      e.printStackTrace(); 
     } 
    } 
} 

Finally, here is the XML layout file for the main activity:

<?xml version="1.0" encoding="utf-8"?> 
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" 
    android:orientation="vertical" 
    android:layout_width="fill_parent" 
    android:layout_height="fill_parent" 
    android:padding="20dip"> 

    <ImageView 
     android:layout_width="fill_parent" 
     android:layout_height="wrap_content" 
     android:src="@drawable/ic_launcher" 
     android:scaleType="fitCenter"/> 

     <TextView 
     android:layout_width="fill_parent" 
     android:layout_height="wrap_content" 
     android:text="@string/app_info" 
     android:layout_weight="1.0" 
     android:textSize="20dip"/> 

    <LinearLayout 
     android:orientation="horizontal" 
     android:layout_width="fill_parent" 
     android:layout_height="wrap_content"> 

     <Button 
       android:layout_width="wrap_content" 
       android:layout_height="wrap_content" 
       android:id="@+id/btnStart" 
       android:text="@string/start_recording" 
       android:layout_weight="1.0"/> 

       <Button 
       android:layout_width="wrap_content" 
       android:layout_height="wrap_content" 
       android:id="@+id/btnStop" 
       android:text="@string/stop_recording" 
       android:layout_weight="1.0"/> 
    </LinearLayout> 
</LinearLayout> 

EDIT: The resulting playback audio suh-suh-suh-suh-o-ou-ou-ou-nds-nds-ds like this.

+0

Using this approach, can I have more than one device streaming to the same port? If so, how do I tell the streams apart on the server side? This was a great help. – 2014-09-04 00:17:47
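
Not from the original thread: a minimal sketch of one way to address the comment above, assuming a standalone Java server. Each sender is identified by the packet's source address and port via packet.getSocketAddress(); the per-client ByteArrayOutputStream map is purely illustrative.

import java.io.ByteArrayOutputStream; 
import java.net.DatagramPacket; 
import java.net.DatagramSocket; 
import java.net.SocketAddress; 
import java.util.HashMap; 
import java.util.Map; 

class MultiClientServer { 

    public static void main(String[] args) throws Exception { 

     DatagramSocket socket = new DatagramSocket(50005); 
     byte[] receiveData = new byte[9728]; 

     // one buffer per client, keyed by the sender's address:port 
     Map<SocketAddress, ByteArrayOutputStream> streams = new HashMap<>(); 

     while (true) { 
      DatagramPacket packet = new DatagramPacket(receiveData, receiveData.length); 
      socket.receive(packet); 

      // getSocketAddress() identifies which device sent this packet 
      SocketAddress sender = packet.getSocketAddress(); 
      streams.computeIfAbsent(sender, k -> new ByteArrayOutputStream()) 
        .write(packet.getData(), packet.getOffset(), packet.getLength()); 
     } 
    } 
} 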

Answers

7

Here's something you could try. Instead of this:

// read the data into the buffer 
recorder.read(buffer, 0, buffer.length); 

// place contents of buffer into the packet 
packet = new DatagramPacket(buffer, buffer.length, serverAddress, PORT); 

don't expect your recorder to fill the read buffer completely; use the number of bytes actually read instead:

// read the data into the buffer 
int read = recorder.read(buffer, 0, buffer.length); 

// place contents of buffer into the packet 
packet = new DatagramPacket(buffer, read, serverAddress, PORT); 

or something along those lines.
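
Not part of the original answer, but in the same spirit: a small sketch that also guards against AudioRecord's error return codes before building the packet. The constants are from the AudioRecord API; the surrounding variables are the ones from the question's loop.

// read the data into the buffer 
int read = recorder.read(buffer, 0, buffer.length); 

if (read == AudioRecord.ERROR_INVALID_OPERATION || read == AudioRecord.ERROR_BAD_VALUE) { 
    // the recorder was not initialized or the arguments were invalid 
    Log.e(TAG, "AudioRecord.read() failed with code " + read); 
} else if (read > 0) { 
    // only send the bytes that were actually captured 
    packet = new DatagramPacket(buffer, read, serverAddress, PORT); 
    socket.send(packet); 
} 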

+0

I made the change you suggested, but the audio output still sounds like a machine gun. You can tell it's the right audio, it just has gaps in it. Thanks for the feedback! – 2013-04-11 18:37:51

+2

Also, you may want to change how you read from the received packet. Something like: bufferAudioForPlayback(receivePacket.getData(), receivePacket.getOffset(), receivePacket.getLength()). – harism 2013-04-11 18:43:15

+1

Thank you! Combined with changing the sample rate to 11025, I was able to get gap-free, clear playback. You're awesome, thanks! – 2013-04-11 18:55:32
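
For reference, a sketch of what that sample-rate change looks like on both ends (my own reconstruction based on the comment above, not code from the thread): the client's recording rate and the server's playback AudioFormat have to agree.

// Android client: record at 11025 Hz instead of 44100 Hz 
private static final int RECORDING_RATE = 11025; 

// server: build the playback format with the same rate 
static int sampleRate = 11025; 
format = new AudioFormat(sampleRate, 16, 1, true, false); // 16-bit, mono, signed, little-endian 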

0

Thanks for the post Joshua... newbie here :)

volumeControl.setValue(volumeControl.getMaximum()); 

This removes the IllegalStateException on the server.
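
A possible way to apply that inside toSpeaker() (my own illustration, not from the answer): check that the line exposes a MASTER_GAIN control and use the maximum the control reports instead of a hard-coded 100.0f.

sourceDataLine.open(format); 

if (sourceDataLine.isControlSupported(FloatControl.Type.MASTER_GAIN)) { 
    FloatControl volumeControl = (FloatControl) sourceDataLine 
      .getControl(FloatControl.Type.MASTER_GAIN); 
    // a value outside [getMinimum(), getMaximum()] throws, so use the reported maximum 
    volumeControl.setValue(volumeControl.getMaximum()); 
} 

sourceDataLine.start(); 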

And the Android client needs the record-audio permission:

<uses-permission android:name="android.permission.RECORD_AUDIO"/> 
+2

How do these code snippets relate to the stuttering playback problem? – MarsAtomic 2016-04-24 21:10:56