How do I get an exact-size frame from the image produced by AVCaptureSession?
Problem description:
I am working on an app that captures a specific view inside an AVCaptureSession, as shown in the image above. I use AVCaptureStillImageOutput to capture a still image from the AVCaptureSession. The problem is that the image I get back has a fixed size of {2448, 3264}. My approach is to resize that image to the same frame as my background view, so that both share the same coordinates and frame.
Using imageWithImage with the same frame I use for the capture view, this part works: resizedImage comes out at {768, 1024}, which matches the size of the AVCaptureSession preview. From there, based on those coordinates, I use CGImageCreateWithImageInRect with the frame of the green captureView to crop the image.
The output image I get is off. My question is: is there a better way than CGImageCreateWithImageInRect to capture the exact view I need from the image I get from the AVCaptureSession? Is there a better way to do what I am trying to achieve? Any help would be appreciated. Thanks in advance!
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            // Handle orientation for video: both connections expose an
            // AVCaptureVideoOrientation, so the preview layer's current
            // orientation can be copied over directly (comparing against
            // UIInterfaceOrientation constants would mix two different enums).
            if (videoConnection.supportsVideoOrientation) {
                videoConnection.videoOrientation = captureVideoPreviewLayer.connection.videoOrientation;
            }
            break;
        }
    }
    if (videoConnection) { break; }
}
NSLog(@"about to request a capture from: %@", stillImageOutput);
__weak typeof(self) weakSelf = self;
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    UIImage *resizedImage = [weakSelf imageWithImage:image scaledToSize:outputImageView.frame.size];

    // Image view to test the screenshot of the AVCaptureSession
    outputImageView.image = resizedImage;

    // Screenshot of the captureView frame (the green view)
    CGRect captureFrame = captureView.frame;
    CGImageRef cropRef = CGImageCreateWithImageInRect(resizedImage.CGImage, captureFrame);
    UIImage *cropImage = [UIImage imageWithCGImage:cropRef];
    CGImageRelease(cropRef); // CGImageCreateWithImageInRect follows the Create rule; release to avoid a leak

    // Image view to test the cropped image
    sampleImageView.image = cropImage;

    // Hide the activity indicator
    [weakSelf hideActivityView];
}];
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Pass 0.0 as the scale to use the current device's pixel scaling factor
    // (and thus account for Retina resolution). Pass 1.0 to force exact pixel size.
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The method used to capture the image.
Answer
I solved my problem by changing the pixel scale factor to 1.0, as described in the comment. With a scale of 0.0, UIGraphicsBeginImageContextWithOptions uses the device's screen scale, so on a Retina device the resulting bitmap is twice the point size and the point-based crop rect selects the wrong region; forcing 1.0 makes points and pixels line up.
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    // Pass 1.0 to force exact pixel size, so the bitmap matches newSize
    // point-for-pixel (0.0 would use the device's Retina scale factor).
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Have you tried changing the videoGravity of the AVCaptureVideoPreviewLayer? – MikeAlter
@MikeAlter Yes, it is the videoGravity, currently set to AVLayerVideoGravityResizeAspect. With AVCaptureStillImageOutput, changing the video gravity does not change the size of the image I get from the AVCaptureSession. I would like to be able to set the frame of the AVCaptureStillImageOutput to a specific frame. Setting the AVCaptureVideoPreviewLayer with the line captureVideoPreviewLayer.frame = cameraView.frame; did not help. – Mochi