ios - Simulate AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness


Based on an SO post, the code below rotates, centers, and crops a video captured live by the user.

The capture session uses the AVCaptureSessionPresetHigh preset value, and the preview layer uses AVLayerVideoGravityResizeAspectFill video gravity. The preview is extremely sharp.
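For reference, a minimal sketch of that setup (assuming a configured `captureSession` and the view controller's view already exist):

    // Sketch of the setup described above: high-quality preset + aspect-fill preview.
    captureSession.sessionPreset = AVCaptureSessionPresetHigh
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    previewLayer.frame = view.bounds
    view.layer.addSublayer(previewLayer)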

The exported video, however, is not sharp, ostensibly because scaling from the camera's 1920x1080 resolution on the 5S down to 320x568 (the target size of the exported video) introduces fuzziness by throwing away pixels?

Assuming there is no way to scale 1920x1080 down to 320x568 without fuzziness, the question becomes: how do I mimic the sharpness of the preview layer?

Somehow Apple is using an algorithm to convert the 1920x1080 video into a crisp-looking preview frame of 320x568.

Is there a way to mimic this with either AVAssetWriter or AVAssetExportSession?

    func cropVideo() {
        // Set start time
        let startTime = NSDate().timeIntervalSince1970

        // Create main composition & its tracks
        let mainComposition = AVMutableComposition()
        let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
        let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

        // Get source video & audio tracks
        let videoPath = getFilePath(curSlice!.getCaptureURL())
        let videoURL = NSURL(fileURLWithPath: videoPath)
        let videoAsset = AVURLAsset(URL: videoURL, options: nil)
        let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
        let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
        let videoSize = sourceVideoTrack.naturalSize

        // Get rounded duration for the video
        let roundedDur = floor(curSlice!.getDur() * 100) / 100
        let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)

        // Add source tracks to the composition
        do {
            try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
            try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
        } catch {
            print("Error with insertTimeRange while exporting video: \(error)")
        }

        // Create video composition
        // -- Set video frame
        let outputSize = view.bounds.size
        let videoComposition = AVMutableVideoComposition()
        print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

        // -- Set parent layer
        let parentLayer = CALayer()
        parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
        parentLayer.contentsGravity = kCAGravityResizeAspectFill

        // -- Set composition props
        videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
        videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

        // -- Create video composition instruction
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)

        // -- Use a layer instruction to match the video to the output size, mimicking AVLayerVideoGravityResizeAspectFill
        let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
        let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
        videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)

        // -- Add layer instruction
        instruction.layerInstructions = [videoLayerInstruction]
        videoComposition.instructions = [instruction]

        // -- Create video layer
        let videoLayer = CALayer()
        videoLayer.frame = parentLayer.frame

        // -- Add sublayers to parent layer
        parentLayer.addSublayer(videoLayer)

        // -- Set animation tool
        videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        // Create exporter
        let outputURL = getFilePath(getUniqueFilename(gMP4File))
        let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
        exporter.outputURL = NSURL(fileURLWithPath: outputURL)
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.videoComposition = videoComposition
        exporter.shouldOptimizeForNetworkUse = true
        exporter.canPerformMultiplePassesOverSourceMediaData = true

        // Export video
        exporter.exportAsynchronouslyWithCompletionHandler({
            // Log status
            let asset = AVAsset(URL: exporter.outputURL!)
            print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \(NSDate().timeIntervalSince1970 - startTime).")

            // Tell delegate
            //delegate.didEndExport(exporter)
            self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
            gUser.save()
        })
    }

    // Returns a transform that mimics AVLayerVideoGravityResizeAspectFill, converting a video of <inputSize> to one of <outputSize>
    private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
        // Compute the ratios between the video & output sizes
        let widthRatio = outputSize.width / videoSize.width
        let heightRatio = outputSize.height / videoSize.height

        // Set scale to the larger of the two ratios since the goal is to fill the output bounds
        let scale = widthRatio >= heightRatio ? widthRatio : heightRatio

        // Compute the video size after scaling
        let newWidth = videoSize.width * scale
        let newHeight = videoSize.height * scale

        // Compute the translation required to center the image after scaling
        // -- Assumes the CoreAnimationTool places the video frame at (0, 0). Because the scale transform is applied first,
        //    we must adjust each translation point by the scale factor.
        let translateX = (outputSize.width - newWidth) / 2 / scale
        let translateY = (outputSize.height - newHeight) / 2 / scale

        // Set transform to resize the video while retaining its aspect ratio
        let resizeTransform = CGAffineTransformMakeScale(scale, scale)

        // Apply translation & create the final transform
        let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)

        return finalTransform
    }
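For intuition, here is the transform math above evaluated by hand for the question's sizes (1920x1080 source, 320x568 target). This snippet is an illustration only, not part of the original code:

    // Worked example: aspect-fill math for a 1920x1080 source and a 320x568 output.
    let videoSize = CGSize(width: 1920, height: 1080)
    let outputSize = CGSize(width: 320, height: 568)
    let widthRatio = outputSize.width / videoSize.width     // ~0.167
    let heightRatio = outputSize.height / videoSize.height  // ~0.526
    let scale = max(widthRatio, heightRatio)                // 0.526: "fill" takes the larger ratio
    let scaledSize = CGSize(width: videoSize.width * scale, height: videoSize.height * scale)
    // scaledSize is ~1010x568: the height fills the output exactly, while
    // ~345 points of width hang off each side - that excess is what gets cropped.
    let translateX = (outputSize.width - scaledSize.width) / 2 / scale  // ~-656, in pre-scale units
    print("scale: \(scale), scaled size: \(scaledSize), translateX: \(translateX)")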

320x568 video taken with Tim's code:


640x1136 video taken with Tim's code:

Try this. Start a new Single View project in Swift, replace the ViewController code with this, and you should be good to go!

I've set up the previewLayer at a different size from the output; change it at the top of the file.

I added basic orientation support. It outputs different sizes for landscape vs. portrait. You can specify whatever video size dimensions you like in here and it should work fine.

Check out the videoSettings dictionary (line 278ish) for the codec and sizes of the output file. You can also add other settings in here to deal with keyFrameIntervals etc. to tweak the output size.

I added a recording image that shows when it's recording (tap starts, tap ends), so you'll need to add an asset to Assets.xcassets called "recording" (or comment out line 106 that tries to load it).

That's pretty much it. Good luck!

Oh, and it's dumping the video into the project directory, so you'll need to go to Window / Devices and download the container to see the video easily. In the TODO there's a section where you can hook in and copy the file to the photo library (it makes testing way easier).
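If you want to fill in that TODO, a minimal sketch of the copy might look like this (my addition, not part of the original answer; it assumes photo library permission has been granted and uses the same Swift 2 era APIs as the rest of the code):

    import Photos

    // Sketch: copy the finished recording into the photo library.
    // Assumes `url` is the videoOutputURL that finishWritingWithCompletionHandler just closed.
    func copyToPhotoLibrary(url: NSURL) {
        PHPhotoLibrary.sharedPhotoLibrary().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(url)
        }, completionHandler: { success, error in
            print("Copied to photo library: \(success), error: \(error)")
        })
    }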

    import UIKit
    import AVFoundation

    class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

        let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
        let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

        var recordingImage: UIImageView = UIImageView()

        var previewLayer: AVCaptureVideoPreviewLayer?

        var audioQueue: dispatch_queue_t?
        var videoQueue: dispatch_queue_t?

        let captureSession = AVCaptureSession()
        var assetWriter: AVAssetWriter?
        var assetWriterInputCamera: AVAssetWriterInput?
        var assetWriterInputAudio: AVAssetWriterInput?
        var outputConnection: AVCaptureConnection?

        var captureDeviceBack: AVCaptureDevice?
        var captureDeviceFront: AVCaptureDevice?
        var captureDeviceMic: AVCaptureDevice?
        var sessionSetupDone: Bool = false

        var isRecordingStarted = false
        //var recordingStartedTime = kCMTimeZero
        var videoOutputURL: NSURL?

        var captureSize: CGSize = CGSizeMake(1280, 720)
        var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

        var captureDeviceTrigger = true
        var captureDevice: AVCaptureDevice? {
            get {
                return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
            }
        }

        override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
            return UIInterfaceOrientationMask.AllButUpsideDown
        }

        override func shouldAutorotate() -> Bool {
            if isRecordingStarted {
                return false
            }

            if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
                return false
            }

            if let cameraPreview = self.previewLayer {
                if let connection = cameraPreview.connection {
                    if connection.supportsVideoOrientation {
                        switch UIDevice.currentDevice().orientation {
                        case .LandscapeLeft:
                            connection.videoOrientation = .LandscapeRight
                        case .LandscapeRight:
                            connection.videoOrientation = .LandscapeLeft
                        case .Portrait:
                            connection.videoOrientation = .Portrait
                        case .FaceUp:
                            return false
                        case .FaceDown:
                            return false
                        default:
                            break
                        }
                    }
                }
            }

            return true
        }

        override func viewDidLoad() {
            super.viewDidLoad()

            setupViewControls()

            //self.recordingStartedTime = kCMTimeZero

            // Setup capture session related logic
            videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
            audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

            setupCaptureDevices()
            pre_start()
        }

        //MARK: UI methods
        func setupViewControls() {
            // TODO: I have an image (a red circle) in Assets.xcassets. Replace the following with your own image
            recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
            recordingImage.image = UIImage(named: "recording")
            recordingImage.hidden = true
            self.view.addSubview(recordingImage)

            // Setup tap to record and stop
            let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
            tapGesture.numberOfTapsRequired = 1
            self.view.addGestureRecognizer(tapGesture)
        }

        func didGetTapped(selector: UITapGestureRecognizer) {
            if self.isRecordingStarted {
                self.view.gestureRecognizers![0].enabled = false
                recordingImage.hidden = true

                self.stopRecording()
            } else {
                recordingImage.hidden = false
                self.startRecording()
            }

            self.isRecordingStarted = !self.isRecordingStarted
        }

        func switchCamera(selector: UIButton) {
            self.captureDeviceTrigger = !self.captureDeviceTrigger

            pre_start()
        }

        //MARK: Video logic
        func setupCaptureDevices() {
            let devices = AVCaptureDevice.devices()

            for device in devices {
                if device.hasMediaType(AVMediaTypeVideo) {
                    if device.position == AVCaptureDevicePosition.Front {
                        captureDeviceFront = device as? AVCaptureDevice
                        NSLog("Video Controller: Setup. Front camera is found")
                    }
                    if device.position == AVCaptureDevicePosition.Back {
                        captureDeviceBack = device as? AVCaptureDevice
                        NSLog("Video Controller: Setup. Back camera is found")
                    }
                }

                if device.hasMediaType(AVMediaTypeAudio) {
                    captureDeviceMic = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Audio device is found")
                }
            }
        }

        func alertPermission() {
            let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to the camera and microphone", preferredStyle: UIAlertControllerStyle.Alert)
            permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
                print("Video Controller: Permission for camera/mic denied. Going to settings")
                UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
                print(UIApplicationOpenSettingsURLString)
            }))
            presentViewController(permissionAlert, animated: true, completion: nil)
        }

        func pre_start() {
            NSLog("Video Controller: pre_start")
            let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
            let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)
            if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
                self.alertPermission()
                pre_start()
                return
            }

            if (videoPermission == AVAuthorizationStatus.Authorized) {
                self.start()
                return
            }

            AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted: Bool) -> Void in
                self.pre_start()
            })
        }

        func start() {
            NSLog("Video Controller: start")
            if captureSession.running {
                captureSession.beginConfiguration()

                if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
                    captureSession.removeInput(currentInput)
                }

                do {
                    try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
                } catch {
                    print("Video Controller: begin session. Error adding video input device")
                }

                captureSession.commitConfiguration()
                return
            }

            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
            } catch {
                print("Video Controller: start. Error adding device: \(error)")
            }

            if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
                self.previewLayer = layer
                layer.videoGravity = AVLayerVideoGravityResizeAspect

                if let layerConnection = layer.connection {
                    if UIDevice.currentDevice().orientation == .LandscapeRight {
                        layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                    } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                        layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                    } else if UIDevice.currentDevice().orientation == .Portrait {
                        layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
                    }
                }

                // TODO: Set the output size of the preview layer here
                layer.frame = previewFrame
                self.view.layer.insertSublayer(layer, atIndex: 0)
            }

            let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
            let videoOutput = AVCaptureVideoDataOutput()
            videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
            captureSession.addOutput(videoOutput)
            if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
                self.outputConnection = connection
            }

            let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
            let audioOutput = AVCaptureAudioDataOutput()
            audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
            captureSession.addOutput(audioOutput)

            captureSession.startRunning()
        }

        func getAssetWriter() -> AVAssetWriter? {
            NSLog("Video Controller: getAssetWriter")
            let fileManager = NSFileManager.defaultManager()
            let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
            guard let documentDirectory: NSURL = urls.first else {
                print("Video Controller: getAssetWriter: documentDir Error")
                return nil
            }

            let local_video_name = NSUUID().UUIDString + ".mp4"
            self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

            guard let url = self.videoOutputURL else {
                return nil
            }

            self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)

            guard let writer = self.assetWriter else {
                return nil
            }

            let videoSettings: [String : AnyObject] = [
                AVVideoCodecKey  : AVVideoCodecH264,
                AVVideoWidthKey  : captureSize.width,
                AVVideoHeightKey : captureSize.height,
            ]

            assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
            assetWriterInputCamera?.expectsMediaDataInRealTime = true
            writer.addInput(assetWriterInputCamera!)

            let audioSettings: [String : AnyObject] = [
                AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
                AVNumberOfChannelsKey : 2,
                AVSampleRateKey : NSNumber(double: 44100.0)
            ]

            assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
            assetWriterInputAudio?.expectsMediaDataInRealTime = true
            writer.addInput(assetWriterInputAudio!)

            return writer
        }

        func configurePreset() {
            NSLog("Video Controller: configurePreset")
            if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
                captureSession.sessionPreset = AVCaptureSessionPreset1280x720
            } else {
                captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
            }
        }

        func startRecording() {
            NSLog("Video Controller: Start recording")

            captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

            if let connection = self.outputConnection {
                if connection.supportsVideoOrientation {
                    if UIDevice.currentDevice().orientation == .LandscapeRight {
                        connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                        NSLog("orientation: right")
                    } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                        connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                        NSLog("orientation: left")
                    } else {
                        connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                        NSLog("orientation: portrait")
                    }
                }
            }

            if let writer = getAssetWriter() {
                self.assetWriter = writer

                let recordingClock = self.captureSession.masterClock
                writer.startWriting()
                writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
            }
        }

        func stopRecording() {
            NSLog("Video Controller: Stop recording")

            if let writer = self.assetWriter {
                writer.finishWritingWithCompletionHandler {
                    print("Recording finished")
                    // TODO: Handle the video file, copy it from the temp directory, etc.
                }
            }
        }

        //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
        func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
            if !self.isRecordingStarted {
                return
            }

            if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
                dispatch_async(audioQueue!) {
                    audio.appendSampleBuffer(sampleBuffer)
                }
                return
            }

            if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
                dispatch_async(videoQueue!) {
                    camera.appendSampleBuffer(sampleBuffer)
                }
            }
        }
    }

Additional edit info

It seems from our additional conversations in the comments that you want to reduce the physical size of the output video while keeping the dimensions as high as you can (to retain quality). Remember, the size you position a layer at on screen is in points, not pixels. You're writing an output file in pixels - it's not a 1:1 comparison with the iPhone screen's reference units.
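To make the points-vs-pixels distinction concrete, a small sketch (the numbers assume an iPhone 5S: 320x568 points at a 2x scale):

    // Points vs. pixels on an assumed iPhone 5S: 320x568 points at a 2x scale.
    let pointSize = CGSize(width: 320, height: 568)   // layer/view coordinates
    let screenScale = UIScreen.mainScreen().scale     // 2.0 on a 5S
    let pixelSize = CGSize(width: pointSize.width * screenScale,
                           height: pointSize.height * screenScale)
    // pixelSize is 640x1136, so a 320x568-pixel output file carries only half
    // the linear resolution of what the preview layer actually displays.
    print("preview covers \(pixelSize) pixels, not \(pointSize)")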

To reduce the size of the output file, you have two easy options:

  1. Reduce the resolution - but if you go too small, you'll lose quality when playing back, especially if it gets scaled up again during playback. Try 640x360 or 720x480 for the output pixels.
  2. Adjust the compression settings. The iPhone has default settings that typically produce higher quality (larger output file size) video.

Replace your video settings with these options and see how you go:

    let videoSettings: [String : AnyObject] = [
        AVVideoCodecKey  : AVVideoCodecH264,
        AVVideoWidthKey  : captureSize.width,
        AVVideoHeightKey : captureSize.height,
        AVVideoCompressionPropertiesKey : [
            AVVideoAverageBitRateKey : 2000000,
            AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
            AVVideoMaxKeyFrameIntervalKey : 90,
        ]
    ]

The compression properties tell AVFoundation how to compress the video. The lower the bit rate, the higher the compression (and therefore the better it streams and the less disk space it uses, but it will have lower quality). The max keyframe interval is how often it writes out an uncompressed frame; setting it higher (in our ~30 frames per second video, 90 is roughly once every 3 seconds) reduces quality but decreases the size too. You'll find the constants referenced here: https://developer.apple.com/library/prerelease/ios/documentation/avfoundation/reference/avfoundation_constants/index.html#//apple_ref/doc/constant_group/video_settings
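As a rough back-of-the-envelope check (my own illustration, not from the original answer), the average bit rate maps to file size like this:

    // Size estimate for the video track only: bits/second * seconds / 8 = bytes
    // (ignores the audio track and container overhead).
    let averageBitRate = 2_000_000.0   // 2 Mbps, matching the settings above
    let durationSeconds = 60.0         // hypothetical one-minute clip
    let estimatedBytes = averageBitRate * durationSeconds / 8
    print("~\(estimatedBytes / 1_048_576) MB of video data")  // ~14.3 MB per minute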

