2016-02-09

I am trying to create a very simple WebM (VP8/Opus) encoder with libwebm, but I cannot get the audio to work.

ffprobe does detect the file format and duration:

Stream #1:0(eng): Audio: opus, 48000 Hz, mono, fltp (default)

VLC Media information dialog

The video plays fine in both VLC and Chrome, but there is no audio, and for some reason VLC's media information dialog always reports the audio input bitrate as 0.

Most of the audio encoding code is adapted from https://github.com/fnordware/AdobeWebM/blob/master/src/premiere/WebM_Premiere_Export.cpp

Here is the relevant code:

static const long long kTimeScale = 1000000000LL; 

MkvWriter writer; 
writer.Open("video.webm"); 

Segment mux_seg; 
mux_seg.Init(&writer); 

// VPX encoding... 

int16_t pcm[SAMPLES];     // interleaved PCM input, SAMPLES per channel 
uint8_t out[SAMPLES * 8]; // output buffer for one encoded Opus packet 
uint64_t audio_track_id = mux_seg.AddAudioTrack(SAMPLE_RATE, 1, 0); 
mkvmuxer::AudioTrack *audioTrack = (mkvmuxer::AudioTrack*)mux_seg.GetTrackByNumber(audio_track_id); 
audioTrack->set_codec_id(mkvmuxer::Tracks::kOpusCodecId); 
audioTrack->set_seek_pre_roll(80000000); 
OpusEncoder *encoder = opus_encoder_create(SAMPLE_RATE, 1, OPUS_APPLICATION_AUDIO, NULL); 
opus_encoder_ctl(encoder, OPUS_SET_BITRATE(64000)); 
opus_int32 skip = 0; 
opus_encoder_ctl(encoder, OPUS_GET_LOOKAHEAD(&skip)); 
audioTrack->set_codec_delay(skip * kTimeScale/SAMPLE_RATE); 
mux_seg.CuesTrack(audio_track_id); 
uint64_t currentAudioSample = 0; 
uint64_t opus_ts = 0; 
while(has_frame) { 
    int bytes = opus_encode(encoder, pcm, SAMPLES, out, SAMPLES * 8); // SAMPLES must be a valid Opus frame size, e.g. 960 (20 ms at 48 kHz) 
    opus_ts = currentAudioSample * kTimeScale/SAMPLE_RATE; 
    mux_seg.AddFrame(out, bytes, audio_track_id, opus_ts, true); 
    currentAudioSample += SAMPLES; 
} 

opus_encoder_destroy(encoder); 
mux_seg.Finalize(); 
writer.Close(); 

Update #1: It seems the problem is that WebM needs the audio and video tracks to be interleaved. However, I cannot figure out how to synchronize the audio. Should I calculate the video frame duration and then encode the equivalent number of audio samples?
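To make the synchronization question concrete, here is a small sketch of the timestamp arithmetic (my own illustration, not from the original post): both tracks share WebM's nanosecond timescale, so a video pts follows from the frame index and FPS, and an audio pts from the number of samples encoded so far.

```cpp
#include <cstdint>

// Both tracks share WebM's nanosecond timescale, so interleaving
// reduces to comparing these two values.
constexpr int64_t kNsPerSecond = 1000000000LL;
constexpr int64_t kOpusSampleRate = 48000;  // Opus always runs at 48 kHz internally

// pts of the n-th video frame at a given frame rate
int64_t video_pts_ns(int64_t frame_index, int64_t fps) {
    return frame_index * kNsPerSecond / fps;
}

// pts of the next audio frame after `samples` samples have been encoded
int64_t audio_pts_ns(int64_t samples) {
    return samples * kNsPerSecond / kOpusSampleRate;
}
```

With 20 ms Opus frames (960 samples), you would keep emitting audio frames while `audio_pts_ns(samples)` is still below the pts of the next video frame, which is exactly the interleaving the answer below arrives at.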

Answer


The problem was that I was missing the Opus header data (which WebM stores in the track's CodecPrivate), and that my audio frame timestamps were inaccurate.
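For reference, the header in question is the 19-byte OpusHead structure defined in RFC 7845, which WebM expects verbatim as the audio track's CodecPrivate. A minimal sketch of building it (my own helper, with the field layout taken from the RFC; all multi-byte fields are little-endian):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build the 19-byte OpusHead structure (RFC 7845, section 5.1) that
// WebM stores as the audio track's CodecPrivate.
std::vector<uint8_t> make_opus_head(uint8_t channels,
                                    uint16_t pre_skip,     // in 48 kHz samples
                                    uint32_t input_rate) { // original input sample rate
    std::vector<uint8_t> h(19, 0);
    std::memcpy(h.data(), "OpusHead", 8); // magic signature
    h[8] = 1;                             // version
    h[9] = channels;                      // channel count
    h[10] = pre_skip & 0xFF;              // pre-skip, little-endian
    h[11] = (pre_skip >> 8) & 0xFF;
    h[12] = input_rate & 0xFF;            // input sample rate, little-endian
    h[13] = (input_rate >> 8) & 0xFF;
    h[14] = (input_rate >> 16) & 0xFF;
    h[15] = (input_rate >> 24) & 0xFF;
    // h[16..17]: output gain (0), h[18]: channel mapping family (0)
    return h;
}
```

The `pre_skip` value here would come from `OPUS_GET_LOOKAHEAD`, the same value used for `set_codec_delay`; you would then pass the buffer with `audioTrack->SetCodecPrivate(head.data(), head.size())`.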

To complete the answer, here is the encoder as pseudocode:

const uint64_t kTicksPerSecond = 1000000000; // webm timescale (nanoseconds) 
const uint64_t kVideoFrameDuration = kTicksPerSecond/FPS; 

init_opus_encoder(); 
audioTrack->set_seek_pre_roll(80000000);   // 80 ms pre-roll, in ns 
audioTrack->set_codec_delay(opus_preskip); // encoder lookahead, in ns 
audioTrack->SetCodecPrivate(opus_header_data, opus_header_size); // OpusHead 

uint64_t audio_pts = 0; 
uint64_t curr_audio_samples = 0; 

while(has_video_frame) { 
    encode_vpx_frame(); 
    video_pts = frame_index * kVideoFrameDuration; 
    muxer_segment.AddFrame(frame_packet_data, packet_length, video_track_id, video_pts, packet_flags); 
    // fill the gap until the next video frame with Opus audio frames 
    while(audio_pts < video_pts + kVideoFrameDuration) { 
        encode_opus_frame(); 
        muxer_segment.AddFrame(opus_frame_data, opus_frame_data_length, audio_track_id, audio_pts, true /* keyframe */); 
        curr_audio_samples += 960; // one 20 ms Opus frame at 48 kHz 
        audio_pts = curr_audio_samples * kTicksPerSecond/48000; 
    } 
}
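The `opus_preskip` value above is the encoder lookahead reported by `OPUS_GET_LOOKAHEAD`, which is in 48 kHz samples and must be converted to nanoseconds before being passed to `set_codec_delay`. A sketch of that conversion (my own helper):

```cpp
#include <cstdint>

// Convert the Opus encoder lookahead (in 48 kHz samples, as reported
// by OPUS_GET_LOOKAHEAD) into the nanosecond CodecDelay value that
// the WebM audio track expects.
uint64_t codec_delay_ns(int32_t lookahead_samples) {
    return static_cast<uint64_t>(lookahead_samples) * 1000000000ULL / 48000;
}
// e.g. a lookahead of 312 samples -> 6,500,000 ns (6.5 ms)
```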