Building a Video Editing Project on iOS with AVFoundation
I recently built a small video editing project. I stumbled into a few pitfalls along the way, but got the features working without major incident.
Apple does ship UIVideoEditorController for video editing, but it is hard to extend or customize, so instead we'll build custom video processing on top of Apple's AVFoundation framework.
I also found little systematic material on this topic online, so I wrote this article in the hope that it helps other newcomers to video processing (like me).
Project demo (screenshots)
The project's features are roughly: undo, split, and delete on the video track, plus dragging video clips to extend or trim the video.
Implementing the features
1. Selecting and playing a video
We use UIImagePickerController to pick a video and then present a custom editor view controller.
There isn't much to say about this part.
Example:
// Select a video
// Note: kUTTypeMovie requires `import MobileCoreServices`
@objc func selectVideo() {
    if UIImagePickerController.isSourceTypeAvailable(.photoLibrary) {
        // Initialize the picker
        let imagePicker = UIImagePickerController()
        // Set the delegate
        imagePicker.delegate = self
        // Specify the source type
        imagePicker.sourceType = .photoLibrary
        // Show only video files
        imagePicker.mediaTypes = [kUTTypeMovie as String]
        // Present the picker
        self.present(imagePicker, animated: true, completion: nil)
    } else {
        print("Failed to read the photo library")
    }
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    // Get the video URL (after selection the video is automatically copied into the app's temporary directory)
    guard let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL else { return }
    let pathString = videoURL.relativePath
    print("Video path: \(pathString)")
    // Dismiss the picker, then present the editor
    self.dismiss(animated: true, completion: {
        let editorVC = EditorVideoViewController.init(with: videoURL)
        editorVC.modalPresentationStyle = UIModalPresentationStyle.fullScreen
        self.present(editorVC, animated: true) {
        }
    })
}
2. Generating thumbnails frame by frame to initialize the video track
CMTime
Before getting to the implementation, a word about CMTime. CMTime describes time precisely. Suppose we want to represent a moment in a video, say 1:01. Most of the time NSTimeInterval t = 61.0 is perfectly fine, but floating point has a serious weakness: values like 10⁻⁶ cannot be represented exactly. Add one million 0.000001s together and the result may come out as 1.0000000000079181. Video streaming involves a huge amount of time arithmetic, so these errors accumulate. That's why we need another way to express time: CMTime.
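You can see the drift for yourself (a standalone sketch, not part of the project):

var t: Double = 0
for _ in 0..<1_000_000 {
    t += 0.000001 // each addition rounds slightly, and the error accumulates
}
print(t) // prints roughly 1.0000000000079181 instead of exactly 1.0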
CMTime is a C struct with four members.
typedef struct {
    CMTimeValue value;      // the current value
    CMTimeScale timescale;  // the reference scale for the value (e.g. 1000)
    CMTimeFlags flags;
    CMTimeEpoch epoch;
} CMTime;
For example, to represent 1 second with timescale = 1000, the value would be 1000 × 1 = 1000.
CMTimeScale timescale: the reference scale for the current value; it says how many parts one second is divided into. It deserves special attention because it controls the precision of the whole CMTime. For example, when timescale is 1, CMTime cannot represent anything finer than whole seconds, nor increments within a second. Likewise, when timescale is 1000, each second is divided into 1000 parts and value counts milliseconds.
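For instance, the 1:01 moment from above could be expressed like this (a minimal sketch; the values are just for illustration):

import CoreMedia

// 61 seconds with millisecond precision: 61_000 / 1_000 = 61 s
let oneMinuteOne = CMTime(value: 61_000, timescale: 1_000)
print(CMTimeGetSeconds(oneMinuteOne)) // 61.0

// With timescale 1, nothing finer than whole seconds can be expressed
let coarse = CMTime(value: 61, timescale: 1)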
Implementation
Call generateCGImagesAsynchronously(forTimes:completionHandler:):
/**
 @method        generateCGImagesAsynchronouslyForTimes:completionHandler:
 @abstract      Returns a series of CGImageRefs for an asset at or near the specified times.
 @param         requestedTimes
                An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
 @param         handler
                A block that will be called when an image request is complete.
 @discussion    Employs an efficient "batch mode" for getting images in time order. The client will receive exactly one handler callback for each requested time in requestedTimes. Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests. The generated image is not retained. Clients should retain the image if they wish it to persist after the completion handler returns.
*/
open func generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)
From the official doc comment we can see that two parameters are required:
requestedTimes: [NSValue]: an array of requested times (as NSValue); each element wraps a CMTime specifying an asset time at which an image is requested.
completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler: a block called when an image request completes. Since the method runs asynchronously, you must return to the main thread to update the UI.
Example:
func splitVideoFileUrlFps(splitFileUrl: URL, fps: Float, splitCompleteClosure: @escaping (Bool, [UIImage]) -> Void) {
    var splitImages = [UIImage]()

    // Initialize the asset
    let optDict = NSDictionary(object: NSNumber(value: false),
                               forKey: AVURLAssetPreferPreciseDurationAndTimingKey as NSCopying)
    let urlAsset = AVURLAsset(url: splitFileUrl, options: optDict as? [String: Any])

    let cmTime = urlAsset.duration
    let durationSeconds: Float64 = CMTimeGetSeconds(cmTime)

    var times = [NSValue]()
    let totalFrames: Float64 = durationSeconds * Float64(fps)
    var timeFrame: CMTime

    // Build the CMTimes, i.e. the times at which thumbnails are requested
    for i in 0...Int(totalFrames) {
        timeFrame = CMTimeMake(value: Int64(i), timescale: Int32(fps))
        let timeValue = NSValue(time: timeFrame)
        times.append(timeValue)
    }

    let imageGenerator = AVAssetImageGenerator(asset: urlAsset)
    imageGenerator.requestedTimeToleranceBefore = CMTime.zero
    imageGenerator.requestedTimeToleranceAfter = CMTime.zero

    let timesCount = times.count

    // Request the thumbnails
    imageGenerator.generateCGImagesAsynchronously(forTimes: times) { (requestedTime, image, actualTime, result, error) in
        var isSuccess = false
        switch result {
        case AVAssetImageGenerator.Result.cancelled:
            print("cancelled------")
        case AVAssetImageGenerator.Result.failed:
            print("failed++++++")
        case AVAssetImageGenerator.Result.succeeded:
            let framImg = UIImage(cgImage: image!)
            // flipImage(image:orientaion:) is a custom helper that corrects the thumbnail's orientation
            splitImages.append(self.flipImage(image: framImg, orientaion: 1))
            // On the last frame, report success through the closure
            if (Int(requestedTime.value) == (timesCount - 1)) {
                isSuccess = true
                splitCompleteClosure(isSuccess, splitImages)
                print("completed")
            }
        }
    }
}

// At the call site, update the UI from the callback
self.splitVideoFileUrlFps(splitFileUrl: url, fps: 1) { [weak self] (isSuccess, splitImgs) in
    if isSuccess {
        // The method is asynchronous, so UI updates must return to the main thread
        DispatchQueue.main.async {
        }
        print("Total image count: \(String(describing: self?.imageArr.count))")
    }
}
3. Seeking to a specified time
/**
 @method        seekToTime:toleranceBefore:toleranceAfter:
 @abstract      Moves the playback cursor within a specified time bound.
 @param         time
 @param         toleranceBefore
 @param         toleranceAfter
 @discussion    Use this method to seek to a specified time for the current player item.
                The time seeked to will be within the range [time-toleranceBefore, time+toleranceAfter] and may differ from the specified time for efficiency.
                Pass kCMTimeZero for both toleranceBefore and toleranceAfter to request sample accurate seeking which may incur additional decoding delay.
                Messaging this method with beforeTolerance:kCMTimePositiveInfinity and afterTolerance:kCMTimePositiveInfinity is the same as messaging seekToTime: directly.
*/
open func seek(to time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime)
The three parameters are time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime. The time parameter is straightforward: the time you want to seek to. According to the official comment, the other two are, roughly speaking, the "tolerance for error": the seek will land within the interval [time-toleranceBefore, time+toleranceAfter]. If you pass kCMTimeZero for both (renamed to CMTime.zero in the SDK version I'm using), you get sample-accurate seeking, at the cost of additional decoding delay.
Example:
guard let item = self.avPlayer.currentItem else { return }
let totalTime = CMTimeGetSeconds(item.duration)
let scale = item.duration.timescale
// width: the track length at the target position; videoWidth: the track's total length
let process = width / videoWidth
// Seek with zero tolerance for precise positioning
self.avPlayer.seek(to: CMTimeMake(value: Int64(totalTime * Double(process) * Double(scale)), timescale: scale),
                   toleranceBefore: CMTime.zero, toleranceAfter: CMTime.zero)
4. Observing the player
By observing the player we can drive the track view's movement, keeping the player and the video track in sync.
/**
 @method        addPeriodicTimeObserverForInterval:queue:usingBlock:
 @abstract      Requests invocation of a block during playback to report changing time.
 @param         interval
                The interval of invocation of the block during normal playback, according to progress of the current time of the player.
 @param         queue
                The serial queue onto which block should be enqueued. If you pass NULL, the main queue (obtained using dispatch_get_main_queue()) will be used. Passing a concurrent queue to this method will result in undefined behavior.
 @param         block
                The block to be invoked periodically.
 @result        An object conforming to the NSObject protocol. You must retain this returned value as long as you want the time observer to be invoked by the player. Pass this object to -removeTimeObserver: to cancel time observation.
 @discussion    The block is invoked periodically at the interval specified, interpreted according to the timeline of the current item. The block is also invoked whenever time jumps and whenever playback starts or stops. If the interval corresponds to a very short interval in real time, the player may invoke the block less frequently than requested. Even so, the player will invoke the block sufficiently often for the client to update indications of the current time appropriately in its end-user interface. Each call to -addPeriodicTimeObserverForInterval:queue:usingBlock: should be paired with a corresponding call to -removeTimeObserver:. Releasing the observer object without a call to -removeTimeObserver: will result in undefined behavior.
*/
open func addPeriodicTimeObserver(forInterval interval: CMTime, queue: DispatchQueue?, using block: @escaping (CMTime) -> Void) -> Any
The most important parameter is interval: CMTime. It determines how often the callback fires; if you update the track view's frame inside this callback, it also determines how smoothly the track view moves.
Example:
// Observe the player
self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                      queue: DispatchQueue.main) { [weak self] (time) in
    // Sync the track view with playback here
}
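As the doc comment above notes, the returned token must be retained and later passed to removeTimeObserver(_:). A sketch, where timeObserverToken is a hypothetical Any? property on the controller:

// Retain the token so the observer keeps firing (timeObserverToken is a hypothetical property)
self.timeObserverToken = self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                                               queue: DispatchQueue.main) { [weak self] time in
    // Sync the track view with playback here
}

// When the observer is no longer needed (e.g. in deinit):
if let token = self.timeObserverToken {
    self.avPlayer.removeTimeObserver(token)
    self.timeObserverToken = nil
}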
Conflict with the seek method
This observer and the seek method from section 3 interact badly: when you drag the video track and trigger a seek, the seek also fires this callback, producing a cycle: drag the track (change its frame) -> seek -> callback fires -> change the frame again, an endless loop. You therefore need a guard condition so the callback is skipped in that case.
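One possible guard (a sketch, assuming a hypothetical isDraggingTrack flag that the drag gesture handler sets):

var isDraggingTrack = false // hypothetical flag, true while the user is dragging the track view

self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                      queue: DispatchQueue.main) { [weak self] time in
    // Skip the callback when the time change originated from a drag-driven seek
    guard let self = self, !self.isDraggingTrack else { return }
    // Safe to move the track view here
}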
Timing issues between seeking and the observer
Playback is asynchronous, and a seek needs time to decode, so a time lag creeps into the interplay between the two. When you believe the seek has finished and go to reposition the video track, the decoding delay means the callback still delivers a few stale times, making the track jitter back and forth. The approach taken in this project is to validate, inside the callback, whether the frame about to be applied is legal (not too large, not too small).
PS: if you know better solutions to these two problems, I'd be happy to discuss them!
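The validation might look like this (a sketch with hypothetical bounds minTrackOriginX/maxTrackOriginX derived from the track's total width):

// Reject out-of-range frames before applying them in the observer callback
func isValidTrackOrigin(_ x: CGFloat) -> Bool {
    // minTrackOriginX / maxTrackOriginX are hypothetical bounds for the track view's origin
    return x >= minTrackOriginX && x <= maxTrackOriginX
}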
5. Exporting the video
/**
 @method        insertTimeRange:ofTrack:atTime:error:
 @abstract      Inserts a timeRange of a source track into a track of a composition.
 @param         timeRange
                Specifies the timeRange of the track to be inserted.
 @param         track
                Specifies the source track to be inserted. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in MacOS X 10.10 and iOS 8.0).
 @param         startTime
                Specifies the time at which the inserted track is to be presented by the composition track. You may pass kCMTimeInvalid for startTime to indicate that the timeRange should be appended to the end of the track.
 @param         error
                Describes failures that may be reported to the user, e.g. the asset that was selected for insertion in the composition is restricted by copy-protection.
 @result        A BOOL value indicating the success of the insertion.
 @discussion    You provide a reference to an AVAssetTrack and the timeRange within it that you want to insert. You specify the start time in the target composition track at which the timeRange should be inserted. Note that the inserted track timeRange will be presented at its natural duration and rate. It can be scaled to a different duration (and presented at a different rate) via -scaleTimeRange:toDuration:.
*/
open func insertTimeRange(_ timeRange: CMTimeRange, of track: AVAssetTrack, at startTime: CMTime) throws
The three parameters:
timeRange: CMTimeRange: specifies the time range of the source track to insert.
track: AVAssetTrack: specifies the source track to insert. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting with Mac OS X 10.10 and iOS 8.0).
startTime: CMTime: specifies the time in the composition track at which the inserted segment is presented. Pass kCMTimeInvalid to append the time range to the end of the track.
Example:
let composition = AVMutableComposition()
// Add video and audio tracks to the composition
let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video,
                                             preferredTrackID: CMPersistentTrackID())
let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio,
                                             preferredTrackID: CMPersistentTrackID())

let asset = AVAsset.init(url: self.url)
var insertTime: CMTime = CMTime.zero
let timeScale = self.avPlayer.currentItem?.duration.timescale

// Walk through the info of every clip
for clipsInfo in self.clipsInfoArr {
    // Total duration of this clip
    let clipsDuration = Double(Float(clipsInfo.width) / self.videoWidth) * self.totalTime
    // Start time of this clip
    let startDuration = -Float(clipsInfo.offset) / self.perSecondLength
    do {
        try videoTrack?.insertTimeRange(
            CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                            duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
            of: asset.tracks(withMediaType: AVMediaType.video)[0],
            at: insertTime)
    } catch _ {}
    do {
        try audioTrack?.insertTimeRange(
            CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                            duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
            of: asset.tracks(withMediaType: AVMediaType.audio)[0],
            at: insertTime)
    } catch _ {}
    // Advance the insertion point by this clip's duration
    insertTime = CMTimeAdd(insertTime, CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!))
}

videoTrack?.preferredTransform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

// Build the output path for the merged video
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let destinationPath = documentsPath + "/mergeVideo-\(arc4random() % 1000).mov"
print("Merged video path: \(destinationPath)")
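The snippet above stops after building the composition and the output path; the actual export would typically go through AVAssetExportSession, roughly like this (a sketch under that assumption, reusing the composition and destinationPath from above):

let exportSession = AVAssetExportSession(asset: composition,
                                         presetName: AVAssetExportPresetHighestQuality)
exportSession?.outputURL = URL(fileURLWithPath: destinationPath)
exportSession?.outputFileType = .mov
exportSession?.exportAsynchronously {
    DispatchQueue.main.async {
        switch exportSession?.status {
        case .completed:
            print("Export finished: \(destinationPath)")
        case .failed, .cancelled:
            print("Export failed: \(String(describing: exportSession?.error))")
        default:
            break
        }
    }
}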
That wraps it up: with these APIs plus the interaction logic you can build a complete editing feature! If anything in this article falls short, feel free to point it out!