I am currently using NVIDIA's robotics simulation tool Isaac Sim to conduct some tests. I have replicated a real scenario within Isaac Sim, i.e. I modeled my scene, robot, and sensors (in this case a stereo camera). In the real scenario I used a ZED 2 stereo camera to capture raw RGB and depth images. The challenge now is to perform this exact procedure inside my digital twin in Isaac Sim. My question is therefore whether it is possible to use the ZED ROS 2 wrapper node as a "black box", i.e. to publish the images from my virtual stereo camera to it and receive the corresponding depth images back, or whether there is some other way to connect my virtual stereo camera to the wrapper node.
The idea is to create synthetic data with Isaac Sim and check how well I can replicate my real-world scenario.
I hope an answer can be found for this.
Many thanks in advance
Hey @Myzhar, thank you for the welcome and for the quick reply.
It's a shame that there is no solution for this yet. I'll have to look for other ways of simulating the depth perception of my ZED 2 stereo camera, then.
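For context, one stand-in I am considering is computing disparity directly from the rendered left/right images instead of going through the wrapper. This is only a rough sketch with naive SAD block matching, not the ZED SDK's actual (proprietary) depth pipeline, and the `block_match_disparity` helper and the synthetic stereo pair are purely illustrative:

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Naive SAD block matching: for each left-image pixel, find the
    horizontal shift of the right-image window that minimizes the sum
    of absolute differences. Disparity d means x_right = x_left - d."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Stand-in for a rendered stereo pair: random texture, with the right
# image equal to the left image shifted by a known disparity of 4 px.
rng = np.random.default_rng(0)
true_disp = 4
left = rng.integers(0, 256, size=(30, 50), dtype=np.uint8)
right = np.zeros_like(left)
right[:, :-true_disp] = left[:, true_disp:]

disp = block_match_disparity(left, right)
print(int(np.median(disp[2:-2, 18:-2])))  # recovers the known shift: 4
```

With real rendered images one would of course use a production matcher (e.g. semi-global matching) and then convert disparity to depth via the virtual camera's focal length and baseline, but this shows the basic idea of replacing the wrapper with an explicit stereo step.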
Regarding the upcoming big news: how soon can I expect that, if you are allowed to share? I'm currently writing my thesis about this project, which is why I'm asking.