I'm streaming H.264 NALs from a server, packaging them as FLV tags, and passing them into a NetStream with appendBytes (data generation mode). The video plays back fine, but the stream lags by about a second. Is there any way to prevent buffering when using NetStream.appendBytes with H.264 NALs?
I've tried setting bufferTime and bufferTimeMax, but had no luck stopping the buffering.
I've also tried various combinations of NetStream.seek() and NetStream.appendBytesAction() with RESET_SEEK and END_SEQUENCE, again to no avail.
Is there a trick I'm missing here? Is there any way to prevent this delay?
Interestingly, I don't see the delay on the audio (PCMU) I'm passing in, so I end up with a lip-sync problem.
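For context, the FLV tag framing that appendBytes consumes can be modeled as follows. This is an illustrative Python sketch of a standard FLV AVC video tag (the ActionScript below delegates this to FLVTagVideo.write()); the payload here is a placeholder, not real H.264 data:

```python
import struct

def flv_video_tag(payload: bytes, timestamp_ms: int,
                  keyframe: bool, avc_packet_type: int) -> bytes:
    """Frame an AVC payload (avcC record or length-prefixed NALUs)
    as one FLV video tag plus its trailing PreviousTagSize field."""
    frame_type = 1 if keyframe else 2          # 1 = keyframe, 2 = inter frame
    # VIDEODATA: FrameType/CodecID nibble pair (CodecID 7 = AVC), AVCPacketType
    # (0 = sequence header, 1 = NALU), 24-bit composition time offset (0 here).
    body = bytes([(frame_type << 4) | 7, avc_packet_type, 0, 0, 0]) + payload
    tag = bytes([9])                                      # TagType 9 = video
    tag += len(body).to_bytes(3, "big")                   # DataSize
    tag += (timestamp_ms & 0xFFFFFF).to_bytes(3, "big")   # Timestamp, low 24 bits
    tag += bytes([(timestamp_ms >> 24) & 0xFF])           # TimestampExtended
    tag += b"\x00\x00\x00"                                # StreamID, always 0
    tag += body
    tag += struct.pack(">I", 11 + len(body))              # PreviousTagSize
    return tag

# placeholder NALU: 4-byte length prefix + 5 dummy bytes
tag = flv_video_tag(b"\x00\x00\x00\x05" + b"\x65abcd", 63, True, 1)
```

The 11-byte tag header, VIDEODATA header, and trailing PreviousTagSize match the FLV specification; the exact bytes FLVTagVideo emits were not shown in the question, so treat this as a model, not the library's implementation.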
Update: still stuck, so posting the code I'm using:
var timestamp : uint = networkPayload.readUnsignedInt();
if (videoTimestampBase == 0) {
    videoTimestampBase = timestamp;
}
timestamp = timestamp - videoTimestampBase;
timestamp = timestamp / 90.0; // RTP 90 kHz clock -> milliseconds

// skip the 4-byte timestamp plus 3-byte bitstream marker (7 bytes total)
networkPayload.position = 7;
var nalType : int = networkPayload.readByte();
nalType &= 0x1F;
networkPayload.position = 7;

// reformat Annex B bitstream encoding to MP4 style:
// drop the timestamp and bitstream marker, prepend a 4-byte length
var mp4Payload : ByteArray = new ByteArray();
var mp4PayloadLength : int = networkPayload.bytesAvailable;
mp4Payload.writeUnsignedInt(mp4PayloadLength);
mp4Payload.writeBytes(networkPayload, 7, mp4PayloadLength);
mp4Payload.position = 0;

if (nalType == 8) {
    // PPS -- special case for PPS/SPS: don't length-encode, store raw
    ppsNAL = new ByteArray();
    ppsLength = mp4Payload.bytesAvailable - 4;
    ppsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (spsNAL == null) {
        return;
    }
} else if (nalType == 7) {
    // SPS -- special case for PPS/SPS: don't length-encode, store raw
    spsNAL = new ByteArray();
    spsLength = mp4Payload.bytesAvailable - 4;
    spsNAL.writeBytes(mp4Payload, 4, mp4Payload.bytesAvailable - 4);
    if (ppsNAL == null) {
        return;
    }
}

if ((spsNAL != null) && (ppsNAL != null)) {
    Log.debug(TAG, "Writing sequence header: " + spsLength + "," + ppsLength + "," + timestamp);
    var sequenceHeaderTag : FLVTagVideo = new FLVTagVideo();
    sequenceHeaderTag.codecID = FLVTagVideo.CODEC_ID_AVC;
    sequenceHeaderTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
    sequenceHeaderTag.timestamp = timestamp;
    sequenceHeaderTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_SEQUENCE_HEADER;

    spsNAL.position = 1;
    var profile : int = spsNAL.readByte();
    var compatibility : int = spsNAL.readByte();
    var level : int = spsNAL.readByte();
    Log.debug(TAG, profile + "," + compatibility + "," + level + "," + spsLength);

    var avcc : ByteArray = new ByteArray();
    avcc.writeByte(0x01);           // avcC version 1
    avcc.writeByte(profile);        // profile
    avcc.writeByte(compatibility);  // profile compatibility
    avcc.writeByte(0x20);           // level (hard-coded to 0x20 rather than the parsed value)
    avcc.writeByte(0xff);           // 111111 + 2-bit (NAL length size - 1)
    avcc.writeByte(0xe1);           // number of SPS
    avcc.writeByte(spsLength >> 8); // 16-bit SPS byte count
    avcc.writeByte(spsLength);
    avcc.writeBytes(spsNAL, 0, spsLength); // the SPS
    avcc.writeByte(0x01);           // number of PPS
    avcc.writeByte(ppsLength >> 8); // 16-bit PPS byte count
    avcc.writeByte(ppsLength);
    avcc.writeBytes(ppsNAL, 0, ppsLength); // the PPS
    sequenceHeaderTag.data = avcc;

    var bytes : ByteArray = new ByteArray();
    sequenceHeaderTag.write(bytes);
    stream.appendBytes(bytes);

    // clear the PPS/SPS until the next pair arrives
    ppsNAL = null;
    spsNAL = null;
} else {
    if ((timestamp != currentTimestamp) || (currentVideoTag == null)) {
        if (currentVideoTag != null) {
            // flush the previous frame's accumulated NALUs as one video tag
            currentVideoTag.data = currentSegment;
            var tagData : ByteArray = new ByteArray();
            currentVideoTag.write(tagData);
            stream.appendBytes(tagData);
        }
        currentVideoTag = new FLVTagVideo();
        currentVideoTag.codecID = FLVTagVideo.CODEC_ID_AVC;
        currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_INTER;
        if (nalType == 5) { // IDR slice
            currentVideoTag.frameType = FLVTagVideo.FRAME_TYPE_KEYFRAME;
        }
        lastNalType = nalType;
        currentVideoTag.avcPacketType = FLVTagVideo.AVC_PACKET_TYPE_NALU;
        currentVideoTag.timestamp = timestamp;
        currentVideoTag.avcCompositionTimeOffset = 0;
        currentSegment = new ByteArray();
        currentTimestamp = timestamp;
    }
    mp4Payload.position = 0;
    currentSegment.writeBytes(mp4Payload);
}
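For reference, the avcC record (AVCDecoderConfigurationRecord) that the sequence-header branch assembles can be sketched in Python. This is illustrative only: sps and pps here are raw NAL bytes without start codes or length prefixes, and unlike the ActionScript above it uses the parsed level byte rather than hard-coding 0x20:

```python
def build_avcc(sps: bytes, pps: bytes) -> bytes:
    """Build an AVCDecoderConfigurationRecord from one raw SPS and one
    raw PPS NAL unit, mirroring the ActionScript logic above."""
    avcc = bytearray()
    avcc.append(0x01)       # configurationVersion
    avcc.append(sps[1])     # AVCProfileIndication (SPS byte 1)
    avcc.append(sps[2])     # profile_compatibility (SPS byte 2)
    avcc.append(sps[3])     # AVCLevelIndication (the AS3 hard-codes 0x20 here)
    avcc.append(0xFF)       # reserved 111111 + lengthSizeMinusOne = 3 (4-byte lengths)
    avcc.append(0xE1)       # reserved 111 + numOfSequenceParameterSets = 1
    avcc += len(sps).to_bytes(2, "big")  # 16-bit SPS byte count
    avcc += sps
    avcc.append(0x01)       # numOfPictureParameterSets
    avcc += len(pps).to_bytes(2, "big")  # 16-bit PPS byte count
    avcc += pps
    return bytes(avcc)

# dummy parameter sets (contents are placeholders; lengths match the log below)
sps = b"\x67\x42\xe0\x14" + bytes(15)   # 19 bytes, NAL type 7, profile 66
pps = b"\x68\xce\x38\x80"               # 4 bytes, NAL type 8
record = build_avcc(sps, pps)
```

The low 5 bits of the first NAL byte give the type (7 = SPS, 8 = PPS), which is the same `nalType &= 0x1F` extraction the ActionScript performs.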
Update, some more detail. Here are the timestamps being passed in:
DEBUG: StreamPlayback: 66,-32,20,19
DEBUG: StreamPlayback: Timestamp: 0
DEBUG: StreamPlayback: Timestamp: 63
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Buffer.Full
DEBUG: StreamPlayback: Timestamp: 137
DEBUG: StreamPlayback: Timestamp: 200
DEBUG: StreamPlayback: Timestamp: 264
DEBUG: StreamPlayback: Timestamp: 328
DEBUG: StreamPlayback: Timestamp: 403
DEBUG: StreamPlayback: Timestamp: 467
DEBUG: StreamPlayback: Timestamp: 531
DEBUG: StreamPlayback: Timestamp: 595
DEBUG: StreamPlayback: Timestamp: 659
DEBUG: StreamPlayback: Timestamp: 723
DEBUG: StreamPlayback: Timestamp: 830
DEBUG: StreamPlayback: Timestamp: 894
DEBUG: StreamPlayback: Timestamp: 958
DEBUG: StreamPlayback: Timestamp: 1021
DEBUG: StreamPlayback: Timestamp: 1086
DEBUG: StreamPlayback: Timestamp: 1161
DEBUG: StreamPlayback: Timestamp: 1225
DEBUG: StreamPlayback: Timestamp: 1289
DEBUG: StreamPlayback: Timestamp: 1353
DEBUG: StreamPlayback: Timestamp: 1418
DEBUG: StreamPlayback: Timestamp: 1491
DEBUG: StreamPlayback: Timestamp: 1556
DEBUG: StreamPlayback: Timestamp: 1633
DEBUG: StreamPlayback: Timestamp: 1684
DEBUG: StreamPlayback: Timestamp: 1747
DEBUG: StreamPlayback: stream status update: netStatus NetStream.Video.DimensionChange
DEBUG: StreamPlayback: Timestamp: 1811
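As a quick sanity check on the log above: the rebased timestamps advance monotonically at roughly 51 to 107 ms per frame, about 15 fps on average, so the clock conversion itself looks plausible:

```python
# timestamp values copied from the debug log above (milliseconds)
ts = [0, 63, 137, 200, 264, 328, 403, 467, 531, 595, 659, 723,
      830, 894, 958, 1021, 1086, 1161, 1225, 1289, 1353, 1418,
      1491, 1556, 1633, 1684, 1747, 1811]
deltas = [b - a for a, b in zip(ts, ts[1:])]  # inter-frame intervals
avg_ms = sum(deltas) / len(deltas)            # average frame interval
fps = 1000 / avg_ms                           # implied frame rate, ~15 fps
```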
Cheers,
Kev
Could it be a timestamp issue? Two audio tags come in first (appended back to back), then one video tag (frame) is appended, in that order... 'bufferTime' etc. only governs any "ahead-of-time" decoding, so the content is ready when the playhead reaches it. With **H.264** it cannot be stopped, because the decoder needs a set of "pictures" (for reference) before it can display the current frame's image. –
I've actually turned audio off atm, so it's just the video stream. The timestamps are generated from the RTP timestamp divided by 90 to bring it to milliseconds. I'll take a look and see if something is getting mixed up in there. Net result though, you're saying I shouldn't need to flush the stream, and it should play immediately anyway. –
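The conversion described here (rebase against the first RTP timestamp seen, then divide the 90 kHz ticks by 90) amounts to the following sketch; note it ignores 32-bit RTP timestamp wraparound, which a long-running stream would need to handle:

```python
RTP_VIDEO_CLOCK_HZ = 90_000   # standard RTP clock rate for video payloads

def rtp_to_ms(rtp_ts: int, base: int) -> int:
    """Rebase an RTP timestamp against the first one seen and convert
    90 kHz ticks to milliseconds (90 ticks per millisecond)."""
    return (rtp_ts - base) // (RTP_VIDEO_CLOCK_HZ // 1000)

base = 123_450_000                    # hypothetical first timestamp on the stream
ms = rtp_to_ms(base + 5_670, base)    # 5,670 ticks / 90 = 63 ms
```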
Yes, don't flush on every append. Just keep appending and the Flash decoder takes care of things. If you use 'RESET_SEEK', the decoder then expects a **keyframe** video tag. All audio tags are audio keyframes. –