Use `AVFoundation`. I would suggest grabbing frames with `AVCaptureVideoDataOutput`, then overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using `AVAssetWriter` (minimal sketches of each step follow at the end of this answer).

Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of these things I have mentioned. I haven't seen any that give code examples for exactly the effect you would like, but you should be able to mix and match pretty easily.

EDIT:

Take a look at these links:

[iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput)](https://stackoverflow.com/questions/3331644/iphone-avcapturesession-capture-output-crashing-avcapturevideodataoutput) - this post might be helpful just by nature of containing relevant code.

`AVCaptureVideoDataOutput` will return frames as `CMSampleBufferRef`s. Convert them to `CGImageRef`s using this code:

```objc
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Create a CGImageRef from sample buffer data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);   // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */  // do not call this!

    return newImage;
}
```

From there you would convert to a `UIImage`:

```objc
UIImage *img = [UIImage imageWithCGImage:yourCGImage];
```

Then use

```objc
[img drawInRect:CGRectMake(x, y, width, height)];
```

to draw the frame to a context, draw a PNG of the watermark over it, and then add the processed images to your output video using `AVAssetWriter` (see the compositing and writer sketches below). I would suggest adding them in real time so you're not filling up memory with tons of `UIImage`s.

[How do I export UIImage array as a movie?](https://stackoverflow.com/questions/3741323/how-do-i-export-uiimage-array-as-a-movie) - this post shows how to add the UIImages you have processed to a video for a given duration.

This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30 fps is a great way to crash the app.
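**Capture sketch.** Here's a rough sketch of the capture side, assuming ARC, that `self` adopts `AVCaptureVideoDataOutputSampleBufferDelegate`, and that the `imageFromSampleBuffer:` method above lives on the same class. The preset, the `frameQueue` label, and the `session` property are placeholders, not anything required by the API:

```objc
#import <AVFoundation/AVFoundation.h>

- (void)startCapture
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Camera input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [session addInput:input];

    // Frame output: ask for BGRA so it matches the CGBitmapContextCreate settings above
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    dispatch_queue_t frameQueue = dispatch_queue_create("frameQueue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:frameQueue];
    [session addOutput:output];

    [session startRunning];
    self.session = session; // assumed strong property, so the session stays alive
}

// Called once per captured frame on frameQueue.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CGImageRef frame = [self imageFromSampleBuffer:sampleBuffer];
    // ... composite the watermark and append the result to the asset writer here ...
    CGImageRelease(frame);
}
```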
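**Compositing sketch.** A rough sketch of the overlay step, assuming the watermark ships in the bundle as `watermark.png` and that drawing it near the top-left corner is acceptable; the method name and offsets are placeholders:

```objc
// Draw the captured frame, then the watermark PNG on top, and return the result.
- (UIImage *)watermarkedImageFromFrame:(CGImageRef)frame
{
    CGFloat width  = CGImageGetWidth(frame);
    CGFloat height = CGImageGetHeight(frame);

    UIGraphicsBeginImageContext(CGSizeMake(width, height));

    [[UIImage imageWithCGImage:frame] drawInRect:CGRectMake(0, 0, width, height)];

    UIImage *watermark = [UIImage imageNamed:@"watermark.png"]; // assumed asset name
    [watermark drawInRect:CGRectMake(20, 20, watermark.size.width, watermark.size.height)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```

This creates a new image context per frame for brevity; reusing a single `CGBitmapContext` would be cheaper if performance becomes an issue.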
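**Writer sketch.** Finally, a sketch of the `AVAssetWriter` side. The 640x480 size, H.264 codec, `outputURL`, and the `writer`/`writerInput`/`adaptor` properties are all assumptions you would adapt to your project; the second link above shows how to turn a `CGImage`/`UIImage` back into the `CVPixelBufferRef` the adaptor expects:

```objc
- (void)setUpWriterWithURL:(NSURL *)outputURL
{
    NSError *error = nil;
    self.writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @640,
                                AVVideoHeightKey : @480 };
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:settings];
    self.writerInput.expectsMediaDataInRealTime = YES; // frames arrive live from the camera

    self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput
                                   sourcePixelBufferAttributes:nil];

    [self.writer addInput:self.writerInput];
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:kCMTimeZero]; // or the first frame's timestamp if you reuse the capture clock
}

// Append one watermarked frame at its presentation timestamp
// (e.g. CMSampleBufferGetPresentationTimeStamp() of the original sample buffer).
- (void)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer atTime:(CMTime)time
{
    if (self.writerInput.readyForMoreMediaData) {
        [self.adaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
    }
}
```

When you stop capturing, call `markAsFinished` on the input and then `finishWritingWithCompletionHandler:` on the writer to close out the file.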