Has anyone tried using the Vision API (VNHomographicImageRegistrationRequest) in iOS 11?
Problem description:
I am working on currency recognition with the Vision framework in iOS 11. I am having trouble with VNHomographicImageRegistrationRequest, which determines the perspective warp matrix needed to align the content of two images. I cannot find how to pass the two image parameters to this API. Can anyone help?
Answer
Apple's Vision framework flow is always the same: request -> handler -> observation.
Example:
```
// referenceAsset & asset2 can be:
// CGImage - CIImage - URL - Data - CVPixelBuffer
// Check initializers for more info
let request = VNHomographicImageRegistrationRequest(targetedCGImage: asset2, options: [:])
let handler = VNSequenceRequestHandler()
try! handler.perform([request], on: referenceAsset)

if let results = request.results as? [VNImageHomographicAlignmentObservation] {
    print("Perspective warp found: \(results.count)")
    results.forEach { observation in
        // A 3x3 perspective transform matrix (matrix_float3x3).
        print(observation.warpTransform)
    }
}
```
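Once you have an observation, warpTransform is the 3x3 homography that maps points of the target image into the reference image's coordinate space: lift a point to homogeneous coordinates, multiply, then divide by the resulting w component. A minimal plain-Swift sketch (the `Mat3` typealias and `warp` helper below are illustrative, not part of Vision):

```
// Row-major 3x3 homography, applied to a 2D point.
typealias Mat3 = [[Double]]

func warp(_ x: Double, _ y: Double, by h: Mat3) -> (Double, Double) {
    let p = [x, y, 1.0]                 // homogeneous coordinates
    var q = [0.0, 0.0, 0.0]
    for r in 0..<3 {
        for c in 0..<3 { q[r] += h[r][c] * p[c] }
    }
    return (q[0] / q[2], q[1] / q[2])   // perspective divide
}

// A pure translation by (+5, -3) is one simple homography:
let translate: Mat3 = [[1, 0, 5],
                       [0, 1, -3],
                       [0, 0, 1]]
print(warp(10, 20, by: translate))  // (15.0, 17.0)
```

The real warpTransform is a column-major simd matrix_float3x3, so when using it directly, multiply with simd (`H * SIMD3<Float>(x, y, 1)`) rather than transcribing it row by row.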
Answer
Yes, definitely. Here is the Objective-C version:
```
- (matrix_float3x3)predictWithVisionFromImage:(UIImage *)imageTarget toReferenceImage:(UIImage *)imageRefer {
    // scaleToSize: and pixelBufferFromCGImage: are custom UIImage helper
    // categories (not part of UIKit).
    UIImage *scaledImageTarget = [imageTarget scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef bufferTarget = [imageTarget pixelBufferFromCGImage:scaledImageTarget];
    UIImage *scaledImageRefer = [imageRefer scaleToSize:CGSizeMake(224, 224)];
    CVPixelBufferRef bufferRefer = [imageRefer pixelBufferFromCGImage:scaledImageRefer];

    VNHomographicImageRegistrationRequest *request =
        [[VNHomographicImageRegistrationRequest alloc] initWithTargetedCVPixelBuffer:bufferTarget
                                                                   completionHandler:nil];
    VNImageRequestHandler *handler =
        [[VNImageRequestHandler alloc] initWithCVPixelBuffer:bufferRefer options:@{}];

    NSError *error = nil;
    [handler performRequests:@[request] error:&error];

    VNImageHomographicAlignmentObservation *firstObservation = [request.results firstObject];
    return firstObservation.warpTransform;
}
```