I have a parent view containing two views with AVPlayerLayers and a UIImageView. I want to combine all of these into a new video that captures everything in the parent view.

I looked into ReplayKit, but it doesn't capture the contents of an AVPlayer; it doesn't give me access to the video; and it captures the whole screen rather than a specific view or frame.
My general approach is to iterate through the videos frame by frame, capture an image of each frame, set those images in an imageView that I overlay on the playerLayer, then capture an image of the parent view with UIGraphicsGetImageFromCurrentImageContext, and finally make a video out of all of those images.

I've tried a few AVFoundation options, but overall their performance isn't great. Here are some of the options I've tried, always attempting the pattern above.
Simply setting the video frame with `videoPlayer.seek(to: frame)`. But this approach is very slow: stepping through every frame this way took roughly 42 seconds for each 15 seconds of video.
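For reference, the seek-per-frame pattern looks roughly like this. This is a simplified sketch, not the original code: `player`, `fps`, and the `snapshot` closure (the parent-view capture step) are all placeholders.

```swift
import AVFoundation

// Hypothetical sketch of the seek-per-frame approach.
// `snapshot` stands in for the parent-view capture step
// (e.g. UIGraphicsGetImageFromCurrentImageContext).
func captureBySeeking(player: AVPlayer,
                      duration: CMTime,
                      fps: Int32,
                      snapshot: @escaping () -> Void) {
    let frameDuration = CMTime(value: 1, timescale: fps)
    var current = CMTime.zero

    func step() {
        guard current < duration else { return }
        // Zero tolerance forces an exact seek to each frame,
        // which is what makes this approach so slow.
        player.seek(to: current,
                    toleranceBefore: .zero,
                    toleranceAfter: .zero) { _ in
            snapshot()
            current = current + frameDuration
            step()
        }
    }
    step()
}
```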
Fetching all the video frames asynchronously with `AVAssetImageGenerator.generateCGImagesAsynchronously`, then iterating over those in the pattern above. This is very memory-intensive, since I'm holding an image for every frame of two videos. I could chunk the work to avoid out-of-memory crashes, but overall this approach is still quite slow, and with that batching complexity it isn't much better than the first option.
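A sketch of that pre-generation step, assuming a constant frame rate (the `fps` parameter and the `handler` callback are illustrative, not from the original code):

```swift
import AVFoundation

// Sketch: generate a CGImage for every frame of the asset up front.
// Memory use grows with video length, since the caller ends up
// holding every decoded frame.
func generateAllFrames(of asset: AVAsset,
                       fps: Int32,
                       handler: @escaping (CGImage) -> Void) {
    let generator = AVAssetImageGenerator(asset: asset)
    // Exact frame times, at the cost of speed.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let frameCount = Int(CMTimeGetSeconds(asset.duration) * Double(fps))
    let times = (0..<frameCount).map {
        NSValue(time: CMTime(value: CMTimeValue($0), timescale: fps))
    }

    generator.generateCGImagesAsynchronously(forTimes: times) { _, image, _, result, _ in
        if result == .succeeded, let image = image {
            handler(image)
        }
    }
}
```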
Fetching each frame concurrently with `AVAssetImageGenerator.copyCGImage(at: frame, actualTime: nil)`, but this is no faster than the first option.
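The concurrent variant might be sketched like this; the use of `DispatchQueue.concurrentPerform` and the per-iteration generator are my assumptions, not the original code:

```swift
import AVFoundation

// Sketch: decode frames in parallel with copyCGImage(at:actualTime:).
// A fresh generator per iteration avoids sharing one instance
// across threads; `handler` may be called from multiple threads.
func copyFramesConcurrently(asset: AVAsset,
                            times: [CMTime],
                            handler: @escaping (Int, CGImage) -> Void) {
    DispatchQueue.concurrentPerform(iterations: times.count) { i in
        let generator = AVAssetImageGenerator(asset: asset)
        generator.requestedTimeToleranceBefore = .zero
        generator.requestedTimeToleranceAfter = .zero
        // Each call synchronously decodes one frame.
        if let image = try? generator.copyCGImage(at: times[i], actualTime: nil) {
            handler(i, image)
        }
    }
}
```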
Using an `AVAssetReader` and stepping through each frame with `copyNextSampleBuffer`; no real improvement over any of the options above.
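The reader pattern, sketched under the assumption of a single video track and BGRA output (both illustrative choices):

```swift
import AVFoundation

// Sketch: sequentially decode every frame with AVAssetReader.
// copyNextSampleBuffer() returns nil once the track is exhausted.
func readFrames(from asset: AVAsset,
                handler: (CVPixelBuffer) -> Void) throws {
    let reader = try AVAssetReader(asset: asset)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    guard reader.startReading() else { return }

    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            handler(pixelBuffer)
        }
    }
}
```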
There are probably things I could do to optimize the processing, but I don't think they would solve the fundamental problems above. For example, I could lower the video quality, trim the videos (since parts of some videos aren't visible within their frames), or reduce the frame rate, but I'd rather avoid those if possible.

At this point I'm thinking I may have to use Metal. Any suggestions?
I went a different route that seems to do the trick. You can check out a working version in this repo, but most of the code is below. The code isn't production-ready or very clean, just a proof of concept, hence the use of `try!`, long functions, repetition, etc.
```swift
func overlapVideos() {
    let composition = AVMutableComposition()

    // make main video instruction
    let mainInstruction = AVMutableVideoCompositionInstruction()

    guard let pathUrl = Bundle.main.url(forResource: "IMG_7165", withExtension: "MOV") else {
        assertionFailure()
        return
    }

    // make first video track and add to composition
    let firstAsset = AVAsset(url: pathUrl)

    // timeframe will match first video for this example
    mainInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: firstAsset.duration)

    guard let firstTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
        assertionFailure()
        return
    }

    try! firstTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: firstAsset.duration), of: firstAsset.tracks(withMediaType: .video)[0], at: .zero)

    // add layer instruction for first video
    let firstVideoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
    let firstMove = CGAffineTransform(translationX: 500, y: 400)
    let firstScale = CGAffineTransform(scaleX: 0.1, y: 0.1)
    firstVideoLayerInstruction.setTransform(firstMove.concatenating(firstScale), at: .zero)
    mainInstruction.layerInstructions.append(firstVideoLayerInstruction)

    // make second video track and add to composition
    let secondAsset = AVAsset(url: pathUrl)

    guard let secondTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
        assertionFailure()
        return
    }

    try! secondTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: secondAsset.duration), of: secondAsset.tracks(withMediaType: .video)[0], at: .zero)

    // add layer instruction for second video
    let secondVideoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
    let secondMove = CGAffineTransform(translationX: -100, y: -100)
    let secondScale = CGAffineTransform(scaleX: 0.1, y: 0.1)
    secondVideoLayerInstruction.setTransform(secondMove.concatenating(secondScale), at: .zero)
    mainInstruction.layerInstructions.append(secondVideoLayerInstruction)

    // make third video track and add to composition
    let thirdAsset = AVAsset(url: pathUrl)

    guard let thirdTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
        assertionFailure()
        return
    }

    try! thirdTrack.insertTimeRange(CMTimeRangeMake(start: .zero, duration: thirdAsset.duration), of: thirdAsset.tracks(withMediaType: .video)[0], at: .zero)

    // add layer instruction for third video
    let thirdVideoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: thirdTrack)
    let thirdMove = CGAffineTransform(translationX: 0, y: 1000)
    let thirdScale = CGAffineTransform(scaleX: 0.1, y: 0.1)
    thirdVideoLayerInstruction.setTransform(thirdMove.concatenating(thirdScale), at: .zero)
    mainInstruction.layerInstructions.append(thirdVideoLayerInstruction)

    // make video composition
    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [mainInstruction]
    videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    videoComposition.renderSize = CGSize(width: 640, height: 480)

    // export
    let searchPaths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
    let documentDirectory = searchPaths[0]
    // note the "/" separator; String.appending does plain concatenation
    let filePath = documentDirectory.appending("/output.mov")
    let outputUrl = URL(fileURLWithPath: filePath)

    let fileManager = FileManager.default
    if fileManager.fileExists(atPath: filePath) {
        try! fileManager.removeItem(at: outputUrl)
    }

    guard let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
        assertionFailure()
        return
    }
    exporter.videoComposition = videoComposition
    exporter.outputFileType = .mov
    exporter.outputURL = outputUrl

    exporter.exportAsynchronously {
        DispatchQueue.main.async { [weak self] in
            // play video, etc.
        }
    }
}
```
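For a quick in-app preview without exporting, the same composition can also be played directly; this is a hedged sketch, assuming a view controller with a `view` to attach the layer to:

```swift
import AVFoundation
import UIKit

// Sketch: preview the composition in-app instead of (or before) exporting.
// `composition` and `videoComposition` are the objects built in overlapVideos().
func preview(composition: AVComposition,
             videoComposition: AVVideoComposition,
             in view: UIView) {
    let item = AVPlayerItem(asset: composition)
    // The same layer instructions apply during playback.
    item.videoComposition = videoComposition

    let player = AVPlayer(playerItem: item)
    let playerLayer = AVPlayerLayer(player: player)
    playerLayer.frame = view.bounds
    view.layer.addSublayer(playerLayer)
    player.play()
}
```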