
AVMutableComposition() only playing first track (new edit below)

I have already looked at

AVMutableComposition - Only Playing First Track (Swift)

but it does not provide the answer I am looking for. I have an AVMutableComposition(). I am trying to apply multiple AVCompositionTracks, all of the single type AVMediaTypeVideo, to this one composition. This is because I am using two different AVMediaTypeVideo sources, coming from AVAssets with different CGSizes and preferredTransforms.

So, the only way to apply their specified preferredTransforms is to provide them in two different tracks. But, for whatever reason, only the first track actually provides any video, almost as if the second track were never there.

So far I have tried:

1) Using AVMutableVideoCompositionLayerInstructions applied along with an AVVideoComposition and an AVAssetExportSession. This works okay, and I am still working on the transforms, but that is doable. The problem is that processing the video takes well over a minute, which is unusable in my situation.

2) Using multiple tracks without an AVAssetExportSession, in which case the second track of the same type never appears. Now, I could put everything on one track, but then all the videos take on the size and preferredTransform of the first video, which is absolutely not what I want, since it stretches them in every direction.

So my question is, is it possible to

1) Apply instructions to just one track WITHOUT using an AVAssetExportSession? //Preferred way BY FAR.

2) Decrease the export time? (I have tried using PresetPassthrough, but you cannot use that if you have an exporter.videoComposition holding my instructions, and that is the only place I know of to put the instructions; not sure if they can go somewhere else.) Here is some of my code (without the exporter, because I don't need to export anything anywhere; I just want to do stuff after the AVMutableComposition combines the items).

func merge() { 
    if let firstAsset = controller.firstAsset, secondAsset = self.asset { 

     let mixComposition = AVMutableComposition() 

     let firstTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 
     do { 
      //Not needed right now, since the first 14 seconds cannot be edited.

      if(CMTimeGetSeconds(startTime) == 0) { 
       self.startTime = CMTime(seconds: 1/600, preferredTimescale: Int32(600)) 
      } 
      try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)), 
              ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0], 
              atTime: kCMTimeZero) 
     } catch _ { 
      print("Failed to load first track") 
     } 


     //This secondTrack never appears, doesn't matter what is inside of here, like it is blank space in the video from startTime to endTime (rangeTime of secondTrack) 
     let secondTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 
//   secondTrack.preferredTransform = self.asset.preferredTransform 
     do { 
      try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration), 
              ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0], 
              atTime: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)) 
     } catch _ { 
      print("Failed to load second track") 
     } 

     //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
     do { 
      try firstTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600), firstAsset.duration-endTime), 
              ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0] , 
              atTime: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600)) 
     } catch _ { 
      print("failed") 
     } 
     if let loadedAudioAsset = controller.audioAsset { 
      let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0) 
      do { 
       try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration), 
               ofTrack: loadedAudioAsset.tracksWithMediaType(AVMediaTypeAudio)[0] , 
               atTime: kCMTimeZero) 
      } catch _ { 
       print("Failed to load Audio track") 
      } 
     } 
    } 
} 

EDIT

Apple states: "Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction's end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated)."

As I understand it, this just says that if you decide to use any instructions at all, then the entire composition must be laid out inside instructions. Why is that? Is there no way to apply instructions to just track 2 without changing tracks 1 or 3 at all, something like:

Track 1 from 0 - 10 sec, Track 2 from 10 - 20 sec, Track 3 from 20 - 30 sec?

Any explanation of that would probably answer my question (if it is doable).
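
For reference, here is a minimal sketch (mine, not part of the original question) of what a set of contiguous instructions for that 0 - 10 / 10 - 20 / 20 - 30 layout could look like. The tracks parameter, the render size, and the fixed 10-second slices are assumptions purely for illustration:

import AVFoundation
import CoreGraphics

//Minimal sketch of the contiguity rule quoted above: one instruction per track,
//each timeRange starting exactly where the previous one ended, together covering
//the whole timeline. The tracks parameter and the 10-second slices are hypothetical.
func makeVideoComposition(for tracks: [AVMutableCompositionTrack],
                          renderSize: CGSize) -> AVMutableVideoComposition {
    let videoComposition = AVMutableVideoComposition()
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderSize = renderSize

    var instructions = [AVMutableVideoCompositionInstruction]()
    for (index, track) in tracks.enumerated() {
        let instruction = AVMutableVideoCompositionInstruction()
        //timeRange.start must equal the previous instruction's end time (Apple's rule above).
        instruction.timeRange = CMTimeRangeMake(CMTime(seconds: Double(index) * 10, preferredTimescale: 600),
                                                CMTime(seconds: 10, preferredTimescale: 600))
        //Only the track meant to be visible during this slice gets a layer instruction here.
        instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track)]
        instructions.append(instruction)
    }
    videoComposition.instructions = instructions
    return videoComposition
}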


When you say _the second track is never there_, do you mean that you see the background of the composition, or that playback just stops right after the first track? –


I mean the first track plays, then it is blank, and when the second track finishes it goes back to the first track – impression7vx


What transform do you have on the second track? Maybe it just lies outside of the videoComposition's frame. –

Answers

1

Okay, so for my exact problem, I had to apply a specific CGAffineTransform in Swift to get the specific result we wanted. The one I am posting here works with any picture taken/obtained as well as with video:

//This method gets the orientation of the current transform. This method is used below to determine the orientation 
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) { 
    var assetOrientation = UIImageOrientation.up 
    var isPortrait = false 
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 { 
     assetOrientation = .right 
     isPortrait = true 
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 { 
     assetOrientation = .left 
     isPortrait = true 
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 { 
     assetOrientation = .up 
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 { 
     assetOrientation = .down 
    } 

    //Returns the orientation as a variable 
    return (assetOrientation, isPortrait) 
} 

//Method that lays out the instructions for each track I am editing and does the transformation on each individual track to get it lined up properly 
func videoCompositionInstructionForTrack(_ track: AVCompositionTrack, _ asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction { 

    //This method returns the layer instruction for the given track

    //Create initial instruction
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track) 

    //This is whatever asset you are about to apply instructions to. 
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0] 

    //Get the original transform of the asset 
    var transform = assetTrack.preferredTransform 

    //Get the orientation of the asset and determine whether it is portrait or landscape. Note that assets (whether taken with the camera or pulled from the camera roll, I don't recall which) are ALWAYS reported as landscape at first; this method accounts for that.
    let assetInfo = orientationFromTransform(transform) 

    //You need a little background to understand this part. 
    /* MyAsset is my original video. I need to combine a lot of other segments, according to the user, into this original video. So I have to make all the other videos fit this size. 
     This is the width and height ratios from the original video divided by the new asset 
    */ 
    let width = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width/assetTrack.naturalSize.width 
    var height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height 

    //If it is in portrait 
    if assetInfo.isPortrait { 

     //We actually change the height variable to divide by the width of the old asset instead of the height. This is because of the flip since we determined it is portrait and not landscape. 
     height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.width 

     //We apply the transform and scale the image appropriately. 
     transform = transform.scaledBy(x: height, y: height) 

     //We also have to move the image or video appropriately. Since we scaled it, it could be wayy off on the side, outside the bounds of the viewing. 
     let movement = ((1/height)*assetTrack.naturalSize.height)-assetTrack.naturalSize.height 

     //This lines it up dead center on the left side of the screen perfectly. Now we want to center it. 
     transform = transform.translatedBy(x: 0, y: movement) 

     //This calculates how much black there is. Cut it in half and there you go! 
     let totalBlackDistance = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-transform.tx 
     transform = transform.translatedBy(x: 0, y: -(totalBlackDistance/2)*(1/height)) 

    } else { 

     //Landscape! We don't need to change the variables, it is all defaulted that way (iOS prefers landscape items), so we scale it appropriately. 
     transform = transform.scaledBy(x: width, y: height) 

     //This is a little complicated haha. So because it is in landscape, the asset fits the height correctly, for me anyway; It was just extra long. Think of this as a ratio. I forgot exactly how I thought this through, but the end product looked like: Answer = ((Original height/current asset height)*(current asset width))/(Original width) 
     let scale:CGFloat = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width))/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width 
     transform = transform.scaledBy(x: scale, y: 1) 

     //The asset can be way off the screen again, so we have to move it back. This time we can have it dead center in the middle, because it wasn't backwards because it wasn't flipped because it was landscape. Again, another long complicated algorithm I derived. 
     let movement = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width)))/2)*(1/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height) 
     transform = transform.translatedBy(x: movement, y: 0) 
    } 

    //This creates the instruction and returns it so we can apply it to each individual track. 
    instruction.setTransform(transform, at: kCMTimeZero) 
    return instruction 
} 

Now that we have these methods, we can apply the correct and appropriate transforms to our assets and get everything fitting in nicely and cleanly.

func merge() { 
if let firstAsset = MyAsset, let newAsset = newAsset { 

     //This creates our overall composition, our new video framework 
     let mixComposition = AVMutableComposition() 

     //One by one you create tracks (could use loop, but I just had 3 cases) 
     let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                    preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     //You have to use a try, so need a do 
     do { 

      //Inserting a time range into a track. I already calculated my time, I call it startTime. This is where you would put your time. The preferredTimescale doesn't have to be 600000 haha, I was just playing with those numbers; it only controls precision. The 'at' parameter is not where it begins within this individual track, but where it starts in the composition as a whole. As you notice below, my 'at' times are different. You also need to give it which track to pull from.
      try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)), 
              of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0], 
              at: kCMTimeZero) 
     } catch _ { 
      print("Failed to load first track") 
     } 

     //Create the 2nd track 
     let secondTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     do { 

      //Apply the 2nd timeRange you have. Also apply the correct track you want 
      try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.endTime-self.startTime), 
              of: newAsset.tracks(withMediaType: AVMediaTypeVideo)[0], 
              at: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)) 
      secondTrack.preferredTransform = newAsset.preferredTransform 
     } catch _ { 
      print("Failed to load second track") 
     } 

     //We are not sure we are going to use the third track in my case, because they can edit to the end of the original video, causing us not to use a third track. But if we do, it is the same as the others! 
     var thirdTrack:AVMutableCompositionTrack! 
     if(self.endTime != controller.realDuration) { 
      thirdTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, 
                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) 

     //This part appears again, at endTime, which is right after the 2nd track is supposed to end.
      do { 
       try thirdTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000), self.controller.realDuration-endTime), 
              of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0] , 
              at: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000)) 
      } catch _ { 
       print("failed") 
      } 
     } 

     //Same thing with audio! 
     if let loadedAudioAsset = controller.audioAsset { 
      let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: 0) 
      do { 
       try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.controller.realDuration), 
               of: loadedAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0] , 
               at: kCMTimeZero) 
      } catch _ { 
       print("Failed to load Audio track") 
      } 
     } 

     //So, now that we have all of these tracks we need to apply those instructions! If we don't, then they could be different sizes. Say my newAsset is 720x1080 and MyAsset is 1440x900 (These are just examples haha), then it would look a tad funky and possibly not show our new asset at all. 
     let mainInstruction = AVMutableVideoCompositionInstruction() 

     //Make sure the overall time range matches that of the individual tracks, if not, it could cause errors. 
     mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.controller.realDuration) 

     //For each track we made, we need an instruction. Could set loop or do individually as such. 
     let firstInstruction = videoCompositionInstructionForTrack(firstTrack, firstAsset) 
     //You know, not 100% why this is here. This is 1 thing I did not look into well enough or understand enough to describe to you. 
     firstInstruction.setOpacity(0.0, at: startTime) 

     //Next Instruction 
     let secondInstruction = videoCompositionInstructionForTrack(secondTrack, self.asset) 

     //Again, not sure we need 3rd one, but if we do. 
     var thirdInstruction:AVMutableVideoCompositionLayerInstruction! 
     if(self.endTime != self.controller.realDuration) { 
      secondInstruction.setOpacity(0.0, at: endTime) 
      thirdInstruction = videoCompositionInstructionForTrack(thirdTrack, firstAsset) 
     } 

     //Okay, now that we have all these instructions, we tie them into the main instruction we created above. 
     mainInstruction.layerInstructions = [firstInstruction, secondInstruction] 
     if(self.endTime != self.controller.realDuration) { 
      mainInstruction.layerInstructions += [thirdInstruction] 
     } 

     //We create a video framework now, slightly different than the one above. 
     let mainComposition = AVMutableVideoComposition() 

     //We apply these instructions to the framework 
     mainComposition.instructions = [mainInstruction] 

     //How long are our frames, you can change this as necessary 
     mainComposition.frameDuration = CMTimeMake(1, 30) 

     //This is your render size of the video. 720p, 1080p etc. You set it! 
     mainComposition.renderSize = firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize 

     //We create an export session (you can't use PresetPassthrough because we are manipulating the transforms of the videos and the quality, so I just set it to highest) 
     guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return } 

     //Provide type of file, provide the url location you want exported to (I don't have mine posted in this example). 
     exporter.outputFileType = AVFileTypeMPEG4 
     exporter.outputURL = url 

     //Then we tell the exporter to export the video according to our video framework, and it does the work! 
     exporter.videoComposition = mainComposition 

     //Asynchronous methods FTW! 
     exporter.exportAsynchronously(completionHandler: { 
      //Do whatever when it finishes! 
     }) 
    } 
} 
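
Not part of the original answer, but as a usage note: inside that empty completion handler you would typically check the exporter's status before touching the output file. A small sketch under that assumption:

exporter.exportAsynchronously(completionHandler: {
    DispatchQueue.main.async {
        switch exporter.status {
        case .completed:
            //The merged, transformed video now lives at exporter.outputURL; play it or save it.
            print("Export finished: \(String(describing: exporter.outputURL))")
        case .failed, .cancelled:
            //The error explains what went wrong (overlapping time ranges, an existing file at outputURL, etc.).
            print("Export failed: \(String(describing: exporter.error))")
        default:
            break
        }
    }
})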

There is a lot going on here, but it has to be done, for my example anyway! Sorry it took so long to post, and let me know if you have questions.

1

Yes, you can absolutely apply an individual transform to each layer of an AVMutableComposition.

Here's an overview of the process. I've personally done this in Objective-C though, so I can't give you the exact Swift code, but I know these same functions work just the same in Swift.

  1. Create an AVMutableComposition.
  2. Create an AVMutableVideoComposition.
  3. Set the render size and frame duration of the video composition.
  4. Now for each AVAsset:
    • Create an AVAssetTrack and an AVAudioTrack.
    • Create an AVMutableCompositionTrack for each (one for video, one for audio) by adding each to the mutableComposition.

This is where it gets more complicated.. (sorry, AVFoundation is not easy!)

  • Create an AVMutableVideoCompositionLayerInstruction that refers to each video's AVAssetTrack. For each AVMutableVideoCompositionLayerInstruction you can set its transform. You can also do things like set a crop rectangle.
  • Add each AVMutableVideoCompositionLayerInstruction to an array of layerInstructions. When all of the AVMutableVideoCompositionLayerInstructions have been created, the array gets set on the AVMutableVideoComposition.
  • Finally..

  • Lastly, you will have an AVPlayerItem which you will use to play this back (on an AVPlayer). You create the AVPlayerItem using the AVMutableComposition, and then you set the AVMutableVideoComposition on the AVPlayerItem itself (setVideoComposition..); see the Swift sketch right after this list.
  • Easy eh?
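
Since the question specifically asked about skipping AVAssetExportSession, here is a rough Swift sketch of that last playback step. It is my own hedged translation of the outline, assuming a mixComposition (AVMutableComposition) and a videoComposition (AVMutableVideoComposition) built as described above:

import AVFoundation

//Hedged sketch of the final step in the outline above: play the composition with its
//video composition applied at playback time, no AVAssetExportSession involved.
//Assumes mixComposition and videoComposition were built as described in the steps above.
func makePlayerItem(mixComposition: AVMutableComposition,
                    videoComposition: AVMutableVideoComposition) -> AVPlayerItem {
    let item = AVPlayerItem(asset: mixComposition)
    //Swift counterpart of Objective-C's setVideoComposition: mentioned above; the layer
    //instructions (transforms, crop rectangles, opacity) are applied while playing.
    item.videoComposition = videoComposition
    return item
}

//Usage: hand the item to an AVPlayer/AVPlayerLayer as usual, e.g.
//let player = AVPlayer(playerItem: makePlayerItem(mixComposition: mixComposition, videoComposition: videoComposition))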

It took me a few weeks to get this stuff working well. It is totally unforgiving and, as you mention, if you do something wrong it doesn't tell you what you did wrong; it simply doesn't appear.

But when you do crack it, it works quickly and well.

Finally, everything I have outlined is available in the AVFoundation docs. It is a lengthy tome, but you need to know it to achieve what you are trying to do.

Good luck!


I appreciate the help, and I have since found the answer. I just haven't posted it yet. Thank you! – impression7vx


@impression7vx Any progress on this? Anything to help the community? I've hit a roadblock with this and haven't found a good answer. Thank you! – simplexity


Yeah man. I had surgery yesterday, so I have some time at home and will post some code today or tomorrow. Cool? – impression7vx
