Coordinate system orientation and gravity

Hi all,

I know there are some posts on this, but I am still quite confused.
I have a ZED 2 on the TCP of a robot, and I’m using the depth information to build an environmental model. Apparently, however, the coordinate system does not change in the same way the TCP’s orientation does. Given that there are posts on aligning the y-axis with gravity, my question is the following:

How can I ensure that the coordinate system is always aligned with the camera’s own orientation, whatever the actual direction of gravity may be? E.g., if I have chosen the “IMAGE” coordinate system, that “z” always points out of the lens, with “y” perpendicular to the top side of the camera?

I have used the following initialization (Python) so far:

import pyzed.sl as sl
from zed_camera.camera import *  # custom wrapper module providing Zed_Camera

init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.coordinate_units = sl.UNIT.METER
init_params.coordinate_system = sl.COORDINATE_SYSTEM.IMAGE  # x right, y down, z forward
init_params.depth_stabilization = False
camera = Zed_Camera(init_params)

That does not really work; see the following image showing the raw point clouds without any correction on my side. Each point cloud is recorded for a different robot orientation. The camera is rotated (imagine holding the camera in your hand and rotating the wrist) by angles of blue = 0°, red = 65°, yellow = -48°. As visible, the first two point clouds are apparently aligned internally, i.e. the coordinate system changes with the camera. The third, and all subsequent ones, are not.

There might obviously be an error on my side; any help in unraveling this would be highly appreciated!


Hi @bblank,

Welcome to the Stereolabs forums! :slight_smile:

By default, the ZED SDK corrects the orientation of the camera using the IMU and aligns it with gravity.
This behavior can be disabled, and this is probably what you are looking for.

You can set this parameter in the PositionalTrackingParameters: set set_gravity_as_origin to false.
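Something like this should do it (a quick sketch using a plain sl.Camera; adapt it to your own wrapper):

import pyzed.sl as sl

zed = sl.Camera()
if zed.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Start positional tracking without aligning the origin with gravity:
# the camera then starts with an identity orientation.
tracking_params = sl.PositionalTrackingParameters()
tracking_params.set_gravity_as_origin = False
zed.enable_positional_tracking(tracking_params)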

Hi @mattrouss,

Thank you for the quick response!
Just to be sure, I have some additional questions:

Will disabling depth_stabilization also disable positional_tracking? (The documentation indicates this)
Will disabling positional tracking itself not disable the orientation correction?
If I set set_gravity_as_origin to false, will I actually get the depth coordinates w.r.t. the optical centre/camera orientation, or will there be some additional translational correction given that I move the camera through space?
Would this then be the correct initialization code snippet?
tracking_parameters = sl.PositionalTrackingParameters()
tracking_parameters.set_gravity_as_origin = False
err = zed.enable_positional_tracking(tracking_parameters)

Hi @bblank,

Disabling depth stabilization will not disable positional tracking.
Positional tracking is required to have orientation and location estimation of the camera.

I think I may have misunderstood your issue initially. If I understand correctly, you would like to save your point clouds with coordinates in a global “WORLD” reference frame and have the point clouds be rotated with the real movements of the camera.

To perform this, you were correct to enable the positional tracking module, which enables global positioning estimation of the camera.

What you are missing is to retrieve the point cloud data in the correct reference frame, which can be changed in the RuntimeParameters: set measure3D_reference_frame to WORLD. This will apply the movement of the camera to the point cloud.
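For instance, along these lines (a minimal sketch using a plain sl.Camera rather than your wrapper):

import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Positional tracking must be running for the WORLD reference frame to be available.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

runtime_params = sl.RuntimeParameters()
# Express retrieved measures in the WORLD frame: the camera's estimated
# motion is applied to the point cloud coordinates.
runtime_params.measure3D_reference_frame = sl.REFERENCE_FRAME.WORLD

point_cloud = sl.Mat()
if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
    data = point_cloud.get_data()  # H x W x 4 numpy array (X, Y, Z, RGBA)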

The set_gravity_as_origin parameter only fixes two orientation axes of the camera to orient it correctly in the world at startup. If it is set to false, the camera will have an identity orientation when you start the program.

Hi @mattrouss,

Thanks for the clarification!

Not quite, I am sorry for not being clear on this. What I want are the depth coordinates w.r.t. the current position/orientation of the camera. I don’t want any alignment with gravity or previous positions, or any consideration of the camera’s movement; that is all handled on my side.

In this case, you should not enable positional tracking at all, and you should have the correct behavior.

If the point clouds in the screenshot were generated with positional tracking turned off, can you please reformulate why the results given by your screenshot are not what you are expecting?

Let me rephrase the problem:

Setup:
I have a ZED 2 connected to a Jetson Xavier with Ubuntu 18 and ZED SDK 3.5. The ZED is mounted on my robot, taking pictures of the environment at various poses. From the axis encoders, I know the camera’s pose in my world coordinates with high certainty.

My Problem:
With the camera’s pose, I transform the x-y-z point clouds of each image into world coordinates. The image above shows three such transformed point clouds, which here mainly represent the floor around my robot. If everything were correct, all point clouds would more or less overlap, but in the picture above the yellow one intersects the other two, showing that there is an issue. I have other examples of this.
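For reference, the transform I apply is essentially of this form (a simplified numpy sketch; the variable names are only illustrative):

import numpy as np

def to_world(points_cam: np.ndarray, R_wc: np.ndarray, t_wc: np.ndarray) -> np.ndarray:
    """Map Nx3 camera-frame points into world coordinates.

    R_wc (3x3) and t_wc (3,) describe the camera pose in world coordinates,
    known here from the robot's axis encoders / forward kinematics.
    """
    return points_cam @ R_wc.T + t_wc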

My question:
Given that I am certain of my transformation (it has been used in other applications), I conclude that the x-y-z data is not always returned in the local camera coordinates. Sometimes the data fits my transformation and matches the ground truth, sometimes it does not, and I can’t see why. How can I ensure that I always get the coordinates relative to the camera in the same frame (e.g. right-handed, y up)? Could this be an issue with the old SDK version?

What I can suggest is to take a look at the depth-sensing sample from our examples repo: zed-sdk/depth sensing/depth sensing/cpp at master · stereolabs/zed-sdk · GitHub (for version 3.5 of the ZED SDK it will be available under /usr/local/zed/samples/depth-sensing/cpp)

This sample displays the live point cloud in an OpenGL window and does not use positional tracking at all. With it you can validate that the retrieveMeasure method used to retrieve the point cloud produces the correct behavior.
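If it is easier to test in Python, roughly the same check could look like this (a minimal sketch without your Zed_Camera wrapper; since positional tracking is never enabled, the measures stay in the local camera frame):

import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720
init_params.coordinate_units = sl.UNIT.METER
init_params.coordinate_system = sl.COORDINATE_SYSTEM.IMAGE
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# No positional tracking enabled: the default measure3D_reference_frame
# is REFERENCE_FRAME.CAMERA, so coordinates are relative to the camera.
point_cloud = sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
    xyz = point_cloud.get_data()[:, :, :3]  # per-pixel X, Y, Z in meters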