Saving an edited video in Swift 3

What function should I use to save an edited video in Swift 3? To capture the whole screen as an image, for example, we use the following:

UIGraphicsBeginImageContext(self.view.frame.size)
if let ctx = UIGraphicsGetCurrentContext() {
    self.view.layer.render(in: ctx)
    //renderedImage now holds the screenshot
    let renderedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}

But how can we save a video that has some drawing on it?

Thanks! A GIF example is attached.

Answer


This is a fairly complex task.

I don't want this answer to get any longer than it already is, so I'll assume a couple of big things:

  • You already have the video file, without the drawing
  • You already have the drawing as an image (one way to produce it is sketched just below)
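
If the drawing lives in its own transparent view stacked above the video, you can snapshot just that view, much as the question snapshots the whole screen. This is a minimal sketch, not from the original answer, assuming a hypothetical drawingView with a clear background:

import UIKit

//render only the drawing view into a UIImage, keeping transparency
//so the video stays visible underneath (drawingView is a placeholder name)
func renderOverlayImage(from drawingView: UIView) -> UIImage? {
    //opaque: false preserves the alpha channel; scale 0 uses the screen scale
    UIGraphicsBeginImageContextWithOptions(drawingView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    drawingView.layer.render(in: ctx)
    return UIGraphicsGetImageFromCurrentImageContext()
}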

Video editing on iOS uses the AVFoundation framework, so you need to import it in your class. The editing function could look like this:

import AVFoundation
import UIKit

//Input is the video (AVAsset) and the overlay image you already have
func addOverlayTo(asset: AVAsset, overlayImage: UIImage?) {
    //this object will be our new video; it describes what will be in it
    let mixComposition = AVMutableComposition()
    //we tell our composition that there will be a video track in it
    let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    //we add our video file to that track (force-try kept short for the example)
    try! videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                    of: asset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                    at: kCMTimeZero)
    //this object tells how to display our video
    let mainCompositionInst = AVMutableVideoComposition()
    //on iOS, video frames are always stored in landscape-right orientation,
    //so to orient and size everything properly we have to look at the asset's transform property
    let size = determineRenderSize(for: asset)
    //these steps are necessary only if our video has an overlay layer
    if let overlayImage = overlayImage {
        //create all necessary layers
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(origin: .zero, size: size)
        let parentLayer = CALayer()
        parentLayer.frame = CGRect(origin: .zero, size: size)
        parentLayer.addSublayer(videoLayer)
        let overlayLayer = CALayer()
        overlayLayer.contents = overlayImage.cgImage
        overlayLayer.frame = CGRect(origin: .zero, size: size)
        parentLayer.addSublayer(overlayLayer)
        //lay the layers out properly: the video at the bottom, the drawing on top
        mainCompositionInst.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    }
    let mainInstruction = AVMutableVideoCompositionInstruction()
    //this object will rotate our video to the proper orientation
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    layerInstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
    mainInstruction.layerInstructions = [layerInstruction]
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    mainCompositionInst.instructions = [mainInstruction]
    //now we have to fill in the remaining properties of our composition;
    //their names are quite informative, so I won't comment much
    mainCompositionInst.renderSize = size
    mainCompositionInst.renderScale = 1.0
    //assumed standard 30 fps; written as 20/600 because videos
    //from the built-in camera have a default time scale of 600
    mainCompositionInst.frameDuration = CMTimeMake(20, 600)
    //now we need to save our new video to the phone's storage;
    //this object will do it
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    //create a path where our video will be saved
    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputPath = documentDirectory + "/your_file_name.mp4"
    //if a file already exists at this path, the export will fail
    if FileManager.default.fileExists(atPath: outputPath) {
        try! FileManager.default.removeItem(atPath: outputPath)
    }
    exporter.outputURL = URL(fileURLWithPath: outputPath)
    //again a bunch of parameters that have to be filled; these are pretty standard though
    //(AVFileTypeMPEG4 here, to match the .mp4 extension)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = mainCompositionInst
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    exporter.exportAsynchronously { () -> Void in
        if exporter.status == .completed {
            print("SAVED!")
        }
        else {
            print("Export failed: \(String(describing: exporter.error))")
        }
    }
}
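
To show how this might be called end to end, here is a small usage sketch; the file name video.mp4, the drawingView parameter, and renderOverlayImage (from the sketch above) are my assumptions, not part of the original answer. The Photos step additionally assumes NSPhotoLibraryUsageDescription is set in Info.plist:

import AVFoundation
import Photos
import UIKit

//hypothetical call site: load a bundled video, snapshot the drawing, export
func exportDrawnVideo(drawingView: UIView) {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4") else { return }
    let asset = AVAsset(url: url)
    let overlay = renderOverlayImage(from: drawingView)
    addOverlayTo(asset: asset, overlayImage: overlay)
}

//the exporter writes into the Documents directory; to surface the result
//in the Photos app, something like this can run after the "SAVED!" print
func copyToPhotoLibrary(fileURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }) { success, error in
        print(success ? "Added to photo library" : "Failed: \(String(describing: error))")
    }
}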

And the function that determines the render size from the video's orientation:

func determineRenderSize(for asset: AVAsset) -> CGSize {
    let videoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    let size = videoTrack.naturalSize
    let txf = videoTrack.preferredTransform
    print("transform is ", txf)
    if size.height == txf.tx && txf.ty == 0 {
        return CGSize(width: size.height, height: size.width) //portrait
    }
    else if txf.tx == size.width && txf.ty == size.height {
        return size //landscape left
    }
    else if txf.tx == 0 && txf.ty == size.width {
        return CGSize(width: size.height, height: size.width) //portrait upside down
    }
    else {
        return size //landscape right
    }
}
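
The tx/ty comparisons above are a common heuristic. If they misfire on some assets, an alternative (my assumption, not from the original answer; renderSizeByAngle is a hypothetical name) is to read the rotation angle directly out of the preferred transform:

import AVFoundation
import UIKit

//derive orientation from the transform's rotation angle instead of tx/ty
func renderSizeByAngle(for asset: AVAsset) -> CGSize {
    let track = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    let t = track.preferredTransform
    //atan2 of the transform's b and a components gives the rotation in radians
    let angle = atan2(Double(t.b), Double(t.a))
    //±90 degrees means the frames need a portrait render size
    let isPortrait = abs(abs(angle) - Double.pi / 2) < 0.01
    let size = track.naturalSize
    return isPortrait ? CGSize(width: size.height, height: size.width) : size
}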

There are lots of different parameters here, but explaining them all would take too much space, so to learn more about them I recommend reading some tutorials on iOS video editing. Two good ones are https://www.raywenderlich.com/13418/how-to-play-record-edit-videos-in-ios and https://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos


Kitmap, thank you for the reply, I'll give it a try! –


You're welcome! Hope it helps. – Kitmap