I’m trying to do image detection using OpenCV and my ZED Mini to place game objects in front of specific images, as in the first example (“How can the homography transformation be useful?”) of the OpenCV tutorial “Basic concepts of the homography explained with code”.
I used feature detection with the data from the ZEDToOpenCVRetriever script to find my image which works.
But I’m having a hard time finding a way to get the world pose of the image so I can place my object.
I think using the homography (with perspectiveTransform()) is the way to go, but I don’t quite understand how to use the homography matrix to find my image’s position.
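For reference, my understanding is that perspectiveTransform() just pushes 2D points through the 3×3 homography and applies the perspective divide. A minimal NumPy sketch of that mapping (the translation-only homography here is made up purely for illustration):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography (what cv2.perspectiveTransform does)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                               # projective transform
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide

# Example: a pure-translation homography shifts every point by (10, 20)
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(apply_homography(H, corners))
```

So mapping the four corners of my template image through the homography returned by findHomography() should give me the corners of the detected image in the camera frame; it’s the step from there to a 3D pose that I’m stuck on.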
This looks a lot like the Simple Marker Placement scene under
Assets/ZED/Examples/OpenCV ArUco Detection/Scenes/. There you’ll find an example applied to ArUco markers, but you might be able to adapt it to your particular case.
Don’t hesitate if you have more questions.
Yeah, it’s the first thing I looked into, but the ArUco example uses aruco_Aruco_estimatePoseSingleMarkers_12() to find the pose of the marker in the real world, and I can’t see how it works.
Sorry, I can’t really help with this.
You may find more information directly in the OpenCV source code, and by examining the parameters used in the estimatePoseSingleMarkers call.
I hope that helps a bit, do not hesitate if you have more specific questions.
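One more pointer: since you already have a homography, the decomposition shown in the OpenCV homography tutorial can recover the plane’s rotation and translation, given the camera intrinsics. A rough NumPy sketch of that standard decomposition (this assumes H maps planar coordinates on your printed image, in metres, to pixel coordinates; the names are mine, not from the ZED samples):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover the rotation R and translation t of a plane from its homography.

    Assumes H maps points on the plane (metric units) to pixels,
    and K is the 3x3 camera intrinsics matrix.
    """
    A = np.linalg.inv(K) @ H
    # H is only defined up to scale; normalise using the first two columns,
    # which should both be unit rotation vectors
    s = 2.0 / (np.linalg.norm(A[:, 0]) + np.linalg.norm(A[:, 1]))
    r1 = s * A[:, 0]
    r2 = s * A[:, 1]
    t = s * A[:, 2]
    r3 = np.cross(r1, r2)                 # complete the rotation basis
    R = np.column_stack([r1, r2, r3])
    # re-orthogonalise R (numerical noise makes it only approximately a rotation)
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

From R and t you can then build a Unity pose relative to the left camera, much like the ArUco sample does with the marker pose it gets back from estimatePoseSingleMarkers.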