Hi!
I’m photographing an object on the floor, but the point cloud data I get is relative to the camera coordinate system. How do I express the point cloud relative to a coordinate system that has the floor as the XY plane and the Z axis pointing up? Or, put another way: how do I convert point cloud data from the camera coordinate system to the floor coordinate system?
To clarify: do you want the camera pose expressed in the floor coordinate system, or the floor plane equation expressed in the camera coordinate system?
Hi @zore017
you can use the function findFloorPlane to retrieve the Transform of the plane with respect to the camera.
You invert it, then use the inverted transform as the initial position of the camera in enablePositionalTracking. Finally, you can retrieve the point cloud in the WORLD frame instead of the camera frame by setting the runtime parameter measure3D_reference_frame to sl::REFERENCE_FRAME::WORLD instead of the default sl::REFERENCE_FRAME::CAMERA.
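Since findFloorPlane returns the floor's pose in the camera frame, the inversion mentioned above is just the standard rigid-transform inverse. Here is a minimal numpy sketch of that math (illustrative only: the transform and point values are hypothetical, not actual SDK output):

```python
import numpy as np

# Hypothetical 4x4 rigid transform, standing in for the result of
# findFloorPlane: the floor frame expressed in camera coordinates.
# Here the floor is 1.5 m below a level camera.
T_floor_in_cam = np.eye(4)
T_floor_in_cam[:3, 3] = [0.0, -1.5, 0.0]

# Inverting a rigid transform: R -> R^T, t -> -R^T t
R = T_floor_in_cam[:3, :3]
t = T_floor_in_cam[:3, 3]
T_cam_in_floor = np.eye(4)
T_cam_in_floor[:3, :3] = R.T
T_cam_in_floor[:3, 3] = -R.T @ t

# A point 1.5 m below the camera (i.e. on the floor), in camera
# coordinates, lands at height 0 in the floor frame after the mapping.
p_cam = np.array([0.0, -1.5, 0.0, 1.0])
p_floor = T_cam_in_floor @ p_cam
```

The SDK performs this inversion and point mapping internally once tracking is aligned; the sketch only shows why the inverse is the right transform to hand to the tracker.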
Thanks for your reply!
I was wondering how to modify the depth_sensing sample code to set the world coordinate system when the camera moves relative to the ground.
Probably it’s because the wall is a homogeneously colored white surface with no visual features to match, so the stereo vision cannot correctly reconstruct the surface.
You can try to use sl.DEPTH_MODE.NEURAL or sl.DEPTH_MODE.QUALITY to obtain better results with plane extraction.
Or, in other words, can it be set like this, as in the linked docs:
initial_position = sl.Transform()
# Set the initial position of the Camera Frame at 1m80 above the World Frame
initial_translation = sl.Translation()
initial_translation.init_vector(0, 180, 0)
initial_position.set_translation(initial_translation)
tracking_parameters.set_initial_world_transform(initial_position)
Is the translation set here a conversion from the camera’s coordinate system to the world’s coordinate system? Or is it a conversion from the world coordinate system to the camera coordinate system?
Thanks for your reply!
For the following piece of code
initial_position = sl.Transform()
# Set the initial position of the Camera Frame at 1m80 above the World Frame
initial_translation = sl.Translation()
initial_translation.init_vector(0, 180, 0)
initial_position.set_translation(initial_translation)
tracking_parameters.set_initial_world_transform(initial_position)
Is the translation set here a conversion from the camera’s coordinate system to the world’s coordinate system? Or is it a conversion from the world coordinate system to the camera coordinate system?
This is the position of the camera with respect to a “WORLD” reference frame placed 180 units along the Y axis.
It is important to know whether you are using millimeters, centimeters, or meters as measurement units, and which COORDINATE_SYSTEM you are using, to know where the Y axis points.
Thanks for replying!
Can you help me see if the following code has successfully obtained the point cloud data with the floor as the world coordinate system?
import sys
import pyzed.sl as sl

zed = sl.Camera()

# Set configuration parameters
init = sl.InitParameters()
init.coordinate_units = sl.UNIT.METER
init.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP

# If applicable, use the SVO given as parameter
# Otherwise use ZED live stream
if len(sys.argv) == 2:
    filepath = sys.argv[1]
    print("Reading SVO file: {0}".format(filepath))
    init.set_from_svo_file(filepath)

# Open the camera
status = zed.open(init)
if status != sl.ERROR_CODE.SUCCESS:
    print(repr(status))
    exit(1)

pose = sl.Pose()    # positional tracking data
plane = sl.Plane()  # detected plane

# Enable positional tracking before starting spatial mapping
zed.enable_positional_tracking()

# Look for transformations between the ground plane frame and the camera frame
# Resets the tracking and aligns it with the ground plane frame
resetTrackingFloorFrame = sl.Transform()
find_plane_status = zed.find_floor_plane(plane, resetTrackingFloorFrame)
if find_plane_status == sl.ERROR_CODE.SUCCESS:
    # Reset positional tracking to align it with the floor plane frame
    zed.reset_positional_tracking(resetTrackingFloorFrame)
    print("success!")

runtime_parameters = sl.RuntimeParameters()
runtime_parameters.measure3D_reference_frame = sl.REFERENCE_FRAME.WORLD

cloud = sl.Mat()
while zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(cloud, sl.MEASURE.XYZRGBA)
    cloud.write('Pointcloud.ply')
resetTrackingFloorFrame contains the pose of the floor with respect to the camera.
You must invert it to get the correct camera pose with respect to the floor.
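Once the inversion is in place, one way to sanity-check the result is that floor points retrieved in REFERENCE_FRAME.WORLD should sit near Y = 0 with RIGHT_HANDED_Y_UP. A plain numpy simulation of that round trip (hypothetical camera pose and points, no SDK involved):

```python
import numpy as np

# Hypothetical camera pose in the floor frame: pitched 20 degrees,
# mounted 1.2 m above the floor.
pitch = np.deg2rad(20.0)
R_cam = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(pitch), -np.sin(pitch)],
                  [0.0, np.sin(pitch),  np.cos(pitch)]])
t_cam = np.array([0.0, 1.2, 0.0])

# Floor points (y = 0 in the floor frame), mapped into camera
# coordinates with the inverse rigid transform: p_cam = R^T (p - t).
pts_floor = np.array([[0.5, 0.0, 2.0],
                      [-0.3, 0.0, 1.5]])
pts_cam = (pts_floor - t_cam) @ R_cam

# Mapping back with the camera pose recovers y = 0 for every floor point,
# which is what a correctly floor-aligned WORLD point cloud should show.
pts_world = pts_cam @ R_cam.T + t_cam
```

If the inversion is skipped, the recovered heights are wrong by roughly twice the camera height and tilt, which is an easy symptom to spot in the exported PLY.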
Traceback (most recent call last):
  File "D:/code/WAVE/plane_detection/plane_detection_oldversion.py", line 150, in <module>
    main()
  File "D:/code/WAVE/plane_detection/plane_detection_oldversion.py", line 129, in main
    resetTrackingFloorFrame_inverse.init_matrix(reset_tracking_floor_frame.inverse_mat())
TypeError: inverse_mat() takes exactly one argument (0 given)
Sorry, the code did not run successfully and the above error message appears.
And I also have a question, why is it difficult to detect the floor using the function find_plane_status = zed.find_floor_plane(plane, reset_tracking_floor_frame)?
I apologize, there seems to be an issue with the inverse_mat method in the Python API: it should be a static method, but currently it is not. In the meantime, you can achieve what you are looking for with:
resetTrackingFloorFrame.inverse() # Performs the inverse in place
resetTrackingFloorFrame_inverse = resetTrackingFloorFrame
What do you mean by it is “difficult”? To debug more easily, you can use the official plane_detection sample to check if the plane is indeed detected in your configuration. Pressing ‘space’ calls the zed.find_floor_plane method.