
I am developing a simple beatbox application. At first I wrote everything in plain Java, then I found the great TarsosDSP framework. But now I have run into a problem I cannot solve: I get scrambled noise when I try to fill an audioBuffer from the TarsosDSP AudioEvent buffer. Can you help me?

I set up a SilenceDetector, which works great. Then I want to fill a byte[] buffer with the data from the AudioEvent inside the process method, and that is where I fail. The variable audioBuffer is of type ByteArrayOutputStream and is reused at runtime. See the relevant code snippet:

private void setNewMixer(Mixer mixer) throws LineUnavailableException,
        UnsupportedAudioFileException {

    if (dispatcher != null) {
        dispatcher.stop();
    }
    currentMixer = mixer;

    //final AudioFormat format = new AudioFormat(sampleRate, frameRate, channel, true, true);
    final DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, audioFormat);
    final TargetDataLine line = (TargetDataLine) mixer.getLine(dataLineInfo);
    final int numberOfSamples = bufferSize;
    line.open(audioFormat, numberOfSamples);
    line.start();
    final AudioInputStream stream = new AudioInputStream(line);

    JVMAudioInputStream audioStream = new JVMAudioInputStream(stream);
    // create a new dispatcher
    dispatcher = new AudioDispatcher(audioStream, bufferSize, overlap);

    // add the processors, handle percussion events.
    silenceDetector = new SilenceDetector(threshold, false);

    dispatcher.addAudioProcessor(bufferFiller);
    dispatcher.addAudioProcessor(silenceDetector);
    dispatcher.addAudioProcessor(this);

    // run the dispatcher (on a new thread).
    new Thread(dispatcher, "GunNoiseDetector Thread").start();
}

final AudioProcessor bufferFiller = new AudioProcessor() {

    @Override
    public boolean process(AudioEvent audioEvent) {

        if (isAdjusting) {
            // while a percussion event is active, collect the raw audio bytes
            byte[] bb = audioEvent.getByteBuffer().clone();

            try {
                audioBuffer.write(bb);
            } catch (IOException e) {
                e.printStackTrace();
            }

            System.out.println("current buffer.size():: " + audioBuffer.size());

        } else {
            // the event is over: store the collected bytes and start a fresh buffer
            if (audioBuffer.size() > 0) {
                try {
                    byte[] ba = audioBuffer.toByteArray();
                    samples.add(ba);
                    System.out.println("stored: " + ba.length);
                    audioBuffer.flush();
                    audioBuffer.close();
                    audioBuffer = new ByteArrayOutputStream();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        return true;
    }

    @Override
    public void processingFinished() {
    }

};

@Override
public boolean process(AudioEvent audioEvent) {
    // flag whether we are currently above the silence threshold
    if (silenceDetector.currentSPL() > threshold) {
        isAdjusting = true;
        lastAction = System.currentTimeMillis();
    } else {
        isAdjusting = false;
    }

    return true;
}

Any suggestions?

Answer


I found out why it was not working! As mentioned in What is the meaning of frame rate in AudioFormat?:

For PCM, A-law and μ-law data, a frame is all the data that belongs to one sampling interval. This means that the frame rate is the same as the sample rate.

So my AudioFormat was wrong!
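
For reference, here is a minimal sketch of a PCM AudioFormat whose frame rate equals its sample rate. The concrete values (44.1 kHz, 16-bit, signed, mono, big-endian) are assumptions for illustration, not taken from the question:

    // A minimal sketch, assuming 44.1 kHz, 16-bit, signed, mono, big-endian PCM.
    // For PCM the frame rate must equal the sample rate, and the frame size is
    // channels * (sampleSizeInBits / 8).
    float sampleRate = 44100.0f;
    int   sampleSizeInBits = 16;
    int   channels = 1;
    int   frameSize = channels * (sampleSizeInBits / 8);
    float frameRate = sampleRate; // same as the sample rate for PCM

    AudioFormat audioFormat = new AudioFormat(
            AudioFormat.Encoding.PCM_SIGNED,
            sampleRate, sampleSizeInBits, channels,
            frameSize, frameRate, true /* bigEndian */);

    // Equivalent shorthand: the five-argument constructor derives frame size and
    // frame rate itself, so they cannot be set inconsistently:
    // AudioFormat audioFormat = new AudioFormat(sampleRate, sampleSizeInBits, channels, true, true);

With a consistent format like this, the bytes written into audioBuffer should line up with whole frames, and the scrambled noise goes away.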
