iOS – Vision CoreML Object Detection Full Screen Landscape
How can I get my VNCoreMLRequest to detect objects appearing anywhere within the full-screen view?
I'm currently using the Apple sample project for object recognition in breakfast foods: BreakfastFinder. The model and recognition work well, and generally give the correct (visual) bounding box for the objects being detected / found.
The issue arises when changing the orientation of this detection.
In portrait mode, the default orientation for this project, the model identifies objects well across the full bounds of the view. Naturally, given the properties of the SDK objects, rotating the camera causes poor performance and poor visual identification.
In landscape mode, the model behaves strangely. The window / area in which the model detects objects is not the full view. Instead, it is (what seems to be) the same aspect ratio as the phone itself, but centered and in portrait orientation. I have a screenshot below showing roughly where the model stops detecting objects when in landscape:
The blue box with the red outline is roughly where detection stops. It behaves strangely, but consistently fails to find any objects outside this approximate area / near the left or right edges. However, the top and bottom edges near the center detect without any issue.
regionOfInterest
I've adjusted this to the maximum: x: 0, y: 0, width: 1, height: 1. This made no difference.
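Concretely (regionOfInterest uses normalized coordinates with the origin at the lower left, so the unit rect should already mean "use the whole processed image"):

// Full normalized rect; made no visible difference to the detection area.
objectRecognition.regionOfInterest = CGRect(x: 0, y: 0, width: 1, height: 1)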
imageCropAndScaleOption
This is the only setting that allows detection across the full screen; however, performance became noticeably worse, and that is probably not an acceptable trade-off.
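For reference, these are the crop-and-scale variants I experimented with on the request built in setupVision() below (a sketch; as far as I know, .centerCrop is the Vision default):

// .centerCrop only feeds a centered square of the frame to the model; the two
// scale options feed the whole frame, which is what gave me full-width
// detection, but noticeably slower.
objectRecognition.imageCropAndScaleOption = .centerCrop   // centered square crop
// objectRecognition.imageCropAndScaleOption = .scaleFit  // letterbox the full frame
// objectRecognition.imageCropAndScaleOption = .scaleFill // stretch the full frame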
Is there a scale / size setting somewhere in this process that I have not set properly? Or perhaps a mode I'm not using? Any help would be most appreciated. Below is my detection controller:
ViewController.swift
// All unchanged from the download in Apple's folder
" "
session.sessionPreset = .hd1920x1080 // Model image size is smaller.
...
previewLayer.connection?.videoOrientation = .landscapeRight
" "
VisionObjectRecognitionViewController
@discardableResult
func setupVision() -> NSError? {
    // Setup Vision parts
    let error: NSError! = nil

    guard let modelURL = Bundle.main.url(forResource: "ObjectDetector", withExtension: "mlmodelc") else {
        return NSError(domain: "VisionObjectRecognitionViewController", code: -1, userInfo: [NSLocalizedDescriptionKey: "Model file is missing"])
    }
    do {
        let visionModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))
        let objectRecognition = VNCoreMLRequest(model: visionModel, completionHandler: { (request, error) in
            DispatchQueue.main.async(execute: {
                // perform all the UI updates on the main queue
                if let results = request.results {
                    self.drawVisionRequestResults(results)
                }
            })
        })

        // These are the only properties that affect the detection area
        objectRecognition.regionOfInterest = CGRect(x: 0, y: 0, width: 1, height: 1)
        objectRecognition.imageCropAndScaleOption = VNImageCropAndScaleOption.scaleFit

        self.requests = [objectRecognition]
    } catch let error as NSError {
        print("Model loading went wrong: \(error)")
    }

    return error
}
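For completeness, frames reach Vision through captureOutput, which I have left as it ships in the sample (reproduced roughly from memory here; exifOrientationFromDeviceOrientation() is the sample's own helper). The orientation passed to the handler is therefore still derived from the current device orientation:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // EXIF orientation derived from the device orientation (sample helper).
    let exifOrientation = exifOrientationFromDeviceOrientation()
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                    orientation: exifOrientation,
                                                    options: [:])
    do {
        // Runs the VNCoreMLRequest built in setupVision() on every frame.
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}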