2014-11-22

AVAssetExportSession - the video could not be composed

I want to do some basic video composition in Xamarin/MonoTouch, and I have had some success, but I am stuck on what seems to be a fairly simple task.

I record video with the camera, so I use an AVAssetExportSession to rotate it. I have created a layer instruction to rotate the video, and it works well: I can successfully export the video in the correct orientation.

The problem:

When I add the audio track to the export, it always fails with this error:

Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo=0x1912c320 {NSLocalizedDescription=Operation Stopped, NSLocalizedFailureReason=The video could not be composed.}

If I do not set the videoComposition property on the exportSession, the audio and video export perfectly, just with the wrong orientation. If anyone can offer any suggestions it would be greatly appreciated. Here is my code:

var composition = new AVMutableComposition();
var compositionTrackAudio = composition.AddMutableTrack(AVMediaType.Audio, 0);
var compositionTrackVideo = composition.AddMutableTrack(AVMediaType.Video, 0);
var videoCompositionInstructions = new AVVideoCompositionInstruction[files.Count];
var index = 0;
var renderSize = new SizeF(480, 480);
var _startTime = CMTime.Zero;
//AVUrlAsset asset;

var asset = new AVUrlAsset(new NSUrl(file, false), new AVUrlAssetOptions());
//var asset = AVAsset.FromUrl(new NSUrl(file, false));

//create an AVAssetTrack with our asset
var videoTrack = asset.TracksWithMediaType(AVMediaType.Video)[0];
var audioTrack = asset.TracksWithMediaType(AVMediaType.Audio)[0];

//create a video composition and preset some settings

NSError error;

var assetTimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration };

compositionTrackAudio.InsertTimeRange(new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration,
}, audioTrack, _startTime, out error);

if (error != null)
{
    Debug.WriteLine(error.Description);
}

compositionTrackVideo.InsertTimeRange(assetTimeRange, videoTrack, _startTime, out error);

//create a video instruction

var transformer = new AVMutableVideoCompositionLayerInstruction
{
    TrackID = videoTrack.TrackID,
};

var audioMix = new AVMutableAudioMix();
var mixParameters = new AVMutableAudioMixInputParameters
{
    TrackID = audioTrack.TrackID
};

mixParameters.SetVolumeRamp(1.0f, 1.0f, new CMTimeRange
{
    Start = CMTime.Zero,
    Duration = asset.Duration
});

audioMix.InputParameters = new[] { mixParameters };

var t1 = CGAffineTransform.MakeTranslation(videoTrack.NaturalSize.Height, 0);
//Make sure the square is portrait
var t2 = CGAffineTransform.Rotate(t1, (float)(Math.PI / 2f));
var finalTransform = t2;

transformer.SetTransform(finalTransform, CMTime.Zero);

//add the transformer layer instructions, then add to video composition
var instruction = new AVMutableVideoCompositionInstruction
{
    TimeRange = assetTimeRange,
    LayerInstructions = new[] { transformer }
};
videoCompositionInstructions[index] = instruction;
index++;
_startTime = CMTime.Add(_startTime, asset.Duration);

var videoComposition = new AVMutableVideoComposition();
videoComposition.FrameDuration = new CMTime(1, (int)videoTrack.NominalFrameRate);
videoComposition.RenderScale = 1;
videoComposition.Instructions = videoCompositionInstructions;
videoComposition.RenderSize = renderSize;

var exportSession = new AVAssetExportSession(composition, AVAssetExportSession.PresetHighestQuality);

var filePath = _fileSystemManager.TempDirectory + DateTime.UtcNow.Ticks + ".mp4";

var outputLocation = new NSUrl(filePath, false);

exportSession.OutputUrl = outputLocation;
exportSession.OutputFileType = AVFileType.Mpeg4;
exportSession.VideoComposition = videoComposition;
exportSession.AudioMix = audioMix;
exportSession.ShouldOptimizeForNetworkUse = true;
exportSession.ExportAsynchronously(() =>
{
    Debug.WriteLine(exportSession.Status);

    switch (exportSession.Status)
    {
        case AVAssetExportSessionStatus.Failed:
            Debug.WriteLine(exportSession.Error.Description);
            Debug.WriteLine(exportSession.Error.DebugDescription);
            break;
        case AVAssetExportSessionStatus.Completed:
            if (File.Exists(filePath))
            {
                _uploadService.AddVideoToVideoByteList(File.ReadAllBytes(filePath), ".mp4");
                Task.Run(async () =>
                {
                    await _uploadService.UploadVideo(_videoData);
                });
            }
            break;
        case AVAssetExportSessionStatus.Unknown:
        case AVAssetExportSessionStatus.Exporting:
        case AVAssetExportSessionStatus.Cancelled:
            break;
    }
});

Answers


So this was a really silly mistake: it was caused by adding the audio track to the composition before the video track, so the instruction must have been trying to apply the transform to the audio track rather than my video track.
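The fix, then, is simply to insert the video track into the composition before the audio track, so that the layer instruction targets the video track. A minimal sketch against the question's code (same variable names, only the order changed):

```csharp
// Add and fill the VIDEO track first, then the audio track,
// so the first composition track is the one the transform applies to.
var compositionTrackVideo = composition.AddMutableTrack(AVMediaType.Video, 0);
var compositionTrackAudio = composition.AddMutableTrack(AVMediaType.Audio, 0);

compositionTrackVideo.InsertTimeRange(assetTimeRange, videoTrack, _startTime, out error);
compositionTrackAudio.InsertTimeRange(assetTimeRange, audioTrack, _startTime, out error);
```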


Hi, I am getting exactly the same error. May I know what you changed to solve this? – 2016-08-27 10:27:03


Hi @洪州, it has been a while, but as stated above, the order in which I added my assets was wrong. My code no longer looks like the version above, either. – beedubsss 2016-08-31 05:28:57


My problem was that I forgot to set the timeRange; it should look like this:

let instruction = AVMutableVideoCompositionInstruction() 
instruction.layerInstructions = [layer] 
instruction.timeRange = CMTimeRange(start: kCMTimeZero, duration: videoDuration) 

Note that the end time of AVMutableVideoCompositionInstruction.timeRange must be valid (finite). This is different from AVAssetExportSession.timeRange, whose documentation says:

The time range to be exported from the source. The default time range of an export session is kCMTimeZero to kCMTimePositiveInfinity, meaning that (modulo a possible limit on file length) the full duration of the asset will be exported. You can observe this property using Key-value observing.
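In the Xamarin/C# code from the question, the equivalent of this fix is making sure the instruction's TimeRange is set and ends at a finite time. A sketch using the question's variable names:

```csharp
// The instruction's time range must cover the composed footage and end
// at a valid, finite time; leaving it unset (or ending at infinity)
// produces the "video could not be composed" error.
var instruction = new AVMutableVideoCompositionInstruction
{
    TimeRange = new CMTimeRange { Start = CMTime.Zero, Duration = asset.Duration },
    LayerInstructions = new[] { transformer }
};
```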
