Help with Generating Point Cloud Data with Real-World Coordinates Using ZED 2i and Advanced Navigation GNSS

I am new to the ZED SDK and currently working on a project to generate point cloud data with a real-world coordinate system. I am using a ZED 2i camera with ZED SDK 4.2.5 to capture data in SVO2 format using the ZED Explorer tool.

For GNSS data, I am using an Advanced Navigation Spatial GPS configured in GPIO NMEA format. I have installed the gpsd service on Ubuntu, and while the GNSS data is displayed correctly, I am encountering an issue where the GNSS JSON file is not being generated.

Additionally, I noticed that the “epv” (estimated vertical position error) value is not present in the GPIO NMEA format.

Could you please suggest the correct approach to generate the GNSS JSON file and guide me on how to handle the missing “epv” value in this format?

Any advice or recommendations would be greatly appreciated!

Thank you!

@Myzhar @mattrouss Please Suggest

Hi @nitingarg,

Welcome to the Stereolabs forums :wave:

If you are using the latest GNSS recording sample from our public SDK samples here: zed-sdk/global localization/recording at master · stereolabs/zed-sdk · GitHub

This sample does not write the GNSS data explicitly as a separate JSON file; instead, it stores the data inside the SVO2 using our custom data feature, which lets you keep both the ZED data and the external data in the same file, synchronized.

Here is the method that is being used:

You can then retrieve the GNSS data with the API as well, for later use in another application, with this method:


Thanks for the reply @mattrouss

So far, here’s what we’ve achieved:

  1. Ingested GNSS data into the SVO2 file using the ZED camera and GNSS receiver.
  2. Verified the data using the playback module; everything looks correct.
  3. We are able to generate KML files (raw_gnsss.kml & fused_position.kml) from the SVO2 file, and the geolocation data correctly creates paths from the points, visible on the map, which indicates the ingestion is correct.

Additionally, we tried creating a PointCloud using the ZEDfu tool, and while we are getting a PointCloud, it is not georeferenced.

Next, our goal is:

  • To create a georeferenced PointCloud from the SVO file using the GNSS data.

My question is:
How can we generate a georeferenced PointCloud from the SVO file? We would greatly appreciate any resources or steps that can guide us through this!

@mattrouss , kindly assist

@TanguyHardelin, Please suggest

Hi @nitingarg,
Based on my understanding of your issue, you want to use the ZED SDK to generate a geo-referenced point cloud. Is that correct?

Unfortunately, the ZEDfu tool does not currently support this feature. However, you can achieve this by writing some additional code. The process should be manageable. Here’s how you can do it:

  1. Retrieve the Point Cloud: Use the ZED SDK functions to obtain the point cloud in the camera coordinate frame.
  2. Convert to Geo-Referenced Coordinates: Utilize the GNSS-VIO calibration from the Fusion module to transform the point cloud into a geo-referenced coordinate system.

Here are some useful links to help you get started:

  • Point cloud sample: sample
  • Function for converting camera point to geo: Camera2Geo
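To make step 2 concrete, here is a minimal plain-Python sketch (no SDK required) of applying a single 4×4 camera-to-geo transform to every point. The `cam_to_geo` matrix below is a made-up example; in practice it would come from the Fusion module's GNSS-VIO calibration, not hard-coded values:

```python
# Transform a camera-frame point cloud into a geo-referenced frame by
# applying one 4x4 homogeneous transform to every point.
# NOTE: cam_to_geo is a hypothetical example matrix; with the ZED SDK it
# would come from the Fusion module's GNSS-VIO calibration.

def apply_transform(points, T):
    """Apply a 4x4 row-major homogeneous transform T to a list of (x, y, z)."""
    out = []
    for x, y, z in points:
        gx = T[0][0]*x + T[0][1]*y + T[0][2]*z + T[0][3]
        gy = T[1][0]*x + T[1][1]*y + T[1][2]*z + T[1][3]
        gz = T[2][0]*x + T[2][1]*y + T[2][2]*z + T[2][3]
        out.append((gx, gy, gz))
    return out

# Hypothetical calibration: 90-degree yaw about Z plus a translation.
cam_to_geo = [
    [0.0, -1.0, 0.0, 10.0],
    [1.0,  0.0, 0.0, 20.0],
    [0.0,  0.0, 1.0,  1.5],
    [0.0,  0.0, 0.0,  1.0],
]

camera_points = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
geo_points = apply_transform(camera_points, cam_to_geo)
print(geo_points)  # -> [(10.0, 21.0, 1.5), (8.0, 20.0, 1.5)]
```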

Regards,
Tanguy

Hi @TanguyHardelin

Thanks for your reply.
modified_sample.py (12.8 KB)

You’re absolutely right! My goal is to generate a geo-referenced point cloud with real-world coordinates in UTM or ECEF.

Here’s my current workflow:

  • I have an SVO2 file that includes latitude, longitude, and altitude (EPSG:4326).
  • I can successfully generate a point cloud (PLY) in camera coordinates using the spatial mapping module or ZEDfu.
  • From the SVO2 file, I can extract a geo-referenced KML file, which opens correctly.
  • I have also applied the Fusion module, and it works as expected.

However, I’m still unsure how to write the transformed (geo-referenced) point cloud back to a PLY file in UTM or ECEF coordinates.

I’m attaching my Python script for review. Could you please guide me on the necessary steps to properly convert and save the geo-referenced point cloud?

Hi @nitingarg,

If I understand correctly, you’re asking how to write a geo-referenced point cloud into a PLY file. While this isn’t my area of expertise, I can offer some guidance.

Writing a geo-referenced point cloud to a PLY file shouldn’t differ much from writing a regular point cloud. You can write each point in x-y-z coordinates. However, due to the absolute coordinate system, the viewer used to read the PLY file might struggle with the large coordinate values, as it may expect smaller numeric values.
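For reference, writing an ASCII PLY is only a few lines of plain Python; this sketch (the file name and the points are illustrative) writes one `x y z` line per point. Note that `property float` declares 32-bit values, so for absolute ECEF-scale coordinates you would declare `property double` instead:

```python
def write_ply(path, points):
    """Write (x, y, z) tuples to a minimal ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        # Use "property double" instead if you must store ECEF-scale values.
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Illustrative file name and data.
write_ply("cloud_enu.ply", [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
```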

If this is the case, I suggest switching from ECEF (Earth-Centered, Earth-Fixed) coordinates to ENU (East, North, Up) coordinates, which will preserve lower numeric values.
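The numeric issue is easy to demonstrate: a PLY `float` property is 32-bit, and at ECEF magnitudes (millions of metres) a 32-bit float can no longer represent sub-decimetre detail, whereas ENU-scale values keep their precision:

```python
import struct

def as_float32(v):
    """Round-trip a value through a 32-bit float, as a PLY 'float' property stores it."""
    return struct.unpack("<f", struct.pack("<f", v))[0]

# An ENU-scale coordinate survives; an ECEF-scale one loses the fraction
# (at 4,000,000 the spacing between adjacent float32 values is 0.25 m).
print(as_float32(12.123))       # ~12.123
print(as_float32(4000000.123))  # -> 4000000.0
```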

Regarding your question about converting your point cloud from a ZED camera to a geo-coordinate system, you can use the Camera2Geo function. This will provide you with latitude and longitude coordinates. Then, you can use our converter to obtain ECEF coordinates.
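For completeness, here is the standard WGS84 geodetic-to-ECEF formula in plain Python (this is textbook geodesy, not the SDK converter itself, though the results should be equivalent):

```python
import math

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert WGS84 geodetic coordinates (degrees, metres) to ECEF (metres)."""
    a = 6378137.0          # WGS84 semi-major axis
    e2 = 6.69437999014e-3  # WGS84 first eccentricity squared
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + alt_m) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))  # -> (6378137.0, 0.0, 0.0)
```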

Regards,
Tanguy

Dear @TanguyHardelin,
Thank you for your reply; the suggestion to focus on ENU was really helpful.
I understand the functionality you have described for Camera2Geo & the Converter. Let me explain our setup in depth first:
Steps:

  1. We have an SVO2 file captured with the global_localization/recording/recording.py module, with GNSS data ingested into it under the key “gnss_json”.
  2. Instead of opening the camera directly, we initialize the camera from the SVO2 file using the snippet below:
    file_gnss_position = sl.GeoPose()
    py_translation = sl.Translation()
    pose_data = sl.Transform()
    text_translation = ""
    text_rotation = ""   

    init_params = sl.InitParameters(depth_mode=sl.DEPTH_MODE.ULTRA,
                                    coordinate_units=sl.UNIT.METER,
                                    coordinate_system=sl.COORDINATE_SYSTEM.IMAGE)

    init_params.set_from_svo_file(input_svo_file)

    # create the camera that will input the position from its odometry
    zed = sl.Camera()
    status = zed.open(init_params)
    if status != sl.ERROR_CODE.SUCCESS:
        print("Camera Open: " + repr(status) + ". Exit program.")
        exit()
    
    # Enable positional tracking:
    positional_init = zed.enable_positional_tracking()
    if positional_init != sl.ERROR_CODE.SUCCESS:
        print("[ZED][ERROR] Can't start tracking of camera : " + repr(status) + ". Exit program.")
        exit()

    # set up communication parameters and start publishing
    communication_parameters = sl.CommunicationParameters()
    communication_parameters.set_for_shared_memory()
    zed.start_publishing(communication_parameters)

Then we have started the Positional Tracking module by the following code:

    tracking_params = sl.PositionalTrackingParameters()  # was missing from the snippet
    tracking_params.enable_imu_fusion = True
    tracking_params.set_gravity_as_origin = True
    err = zed.enable_positional_tracking(tracking_params)
    if err != sl.ERROR_CODE.SUCCESS:
        print("Camera positional tracking: " + repr(err) + ". Exit program.")
        exit()
    camera_info = zed.get_camera_information()

    # step 2
    # init the fusion module that will input both the camera and the GPS
    fusion = sl.Fusion()
    init_fusion_parameters = sl.InitFusionParameters()
    init_fusion_parameters.coordinate_system = sl.COORDINATE_SYSTEM.IMAGE
    init_fusion_parameters.coordinate_units = sl.UNIT.METER
    init_fusion_parameters.output_performance_metrics = True
    init_fusion_parameters.verbose = True
    init_fusion_parameters.timeout_period_number = 66
    fusion.init(init_fusion_parameters)

    # Enable positional tracking for Fusion object
    positional_tracking_fusion_parameters = sl.PositionalTrackingFusionParameters()
    positional_tracking_fusion_parameters.enable_GNSS_fusion = True
    gnss_calibration_parameters = sl.GNSSCalibrationParameters()
    gnss_calibration_parameters.target_yaw_uncertainty = 1e-2
    gnss_calibration_parameters.enable_translation_uncertainty_target = True
    gnss_calibration_parameters.target_translation_uncertainty = 15e-2
    gnss_calibration_parameters.enable_reinitialization = True
    gnss_calibration_parameters.gnss_vio_reinit_threshold = 5
    gnss_calibration_parameters.gnss_antenna_position = np.asarray([0.01,.01,0.01])
    positional_tracking_fusion_parameters.gnss_calibration_parameters = gnss_calibration_parameters
    fusion.enable_positionnal_tracking(positional_tracking_fusion_parameters)
    
    uuid = sl.CameraIdentifier(camera_info.serial_number)
    #Subscribe fusion to camera
    print(f"Subscribing to CAMERA Serial : {uuid.serial_number} with {communication_parameters.comm_type}") 
    status = fusion.subscribe(uuid, communication_parameters, sl.Transform(0,0,0)) #Initial Ref to 0
    if status != sl.FUSION_ERROR_CODE.SUCCESS:
        print("Failed to subscribe to", uuid.serial_number, status)
        exit(1)

    print(f"GNSS Calibration Status : {fusion.get_geo_tracking_calibration()}")



Along with this we have set up the fusion object. The question now is: how can I get FusedPointCloud data from the fusion object in a form that can be exported to a PLY or OBJ file?
Please check the script attached to this message; it should help you understand the situation better.

Also, you can find the inline script here : 
[modified_sample.py|attachment](upload://u9qQQ1xy3dDqdImsVEx3g6RCa3J.py) (14.9 KB)

TIA

It appears that the upload didn’t go through correctly. :sweat_smile:

Regarding your issue, you might want to use the retrieveSpatialMapAsync function from the fusion object. If this function doesn’t meet your needs, you can still use the retrieveMeasure function from the Camera object and then use the Camera2Geo function from the fusion object to convert it to the Geo world. This should work as expected.

Regards,
Tanguy

Hi @TanguyHardelin,

Apologies for not uploading the file properly earlier. :sweat_smile:

Regarding your suggestion, I couldn’t find the retrieve_spatial_map_async function on the Fusion object in Python. Could you confirm whether this function is available in the latest ZED SDK (4.2.5) Python API?

As for using the retrieveMeasure function from the Camera object and then applying Camera2Geo from the Fusion object, could you clarify the recommended workflow? Here’s my understanding:

    point_cloud = sl.Mat()
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)

How can this then be used to write my desired result?

modified_sample.py (16.1 KB)

Also please find my updated python script.

Hi @nitingarg,

It appears that this function is not currently available in our Python wrapper. I will report this issue, and it should be included in the next release. In the meantime, you can use the function from the Camera class, as it should meet your needs.

Regarding the workflow, it is quite straightforward. Once you obtain your point cloud referenced to your current camera position, you can call Camera2Geo for each point in it. Here are the steps to follow for each point in the point cloud:

  1. Construct a transform and set it as the identity.
  2. Set the translation of this transform with the position of the point.
  3. Call Camera2Geo with this transform as input.

Using this method, you should be able to reference every point in your point cloud in ENU (using the pose_data attribute of GeoPose) or in LatLong (using the latlng_coordinates attribute of GeoPose), which can then be converted to ECEF using our converter.
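Expressed as plain-Python math, the three steps per point amount to the following. The `camera_to_enu` matrix below is a hypothetical stand-in for the ENU pose data the SDK computes; in real code you would build an `sl.Transform`, set its translation, and call `Camera2Geo` rather than multiplying matrices yourself:

```python
def point_to_enu(point, camera_to_enu):
    """Steps 1-3 for one point, done with bare 4x4 matrices."""
    # 1. Construct a 4x4 transform and set it as the identity.
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    # 2. Set the translation of this transform to the point's position.
    T[0][3], T[1][3], T[2][3] = point
    # 3. Applying the camera->ENU pose to this transform georeferences the
    #    point; its ENU position is the translation column of the product.
    return tuple(sum(camera_to_enu[r][k] * T[k][3] for k in range(4))
                 for r in range(3))

# Hypothetical ENU pose: no rotation, camera 100 m east, 200 m north, 5 m up.
camera_to_enu = [
    [1.0, 0.0, 0.0, 100.0],
    [0.0, 1.0, 0.0, 200.0],
    [0.0, 0.0, 1.0,   5.0],
    [0.0, 0.0, 0.0,   1.0],
]
print(point_to_enu((1.0, 2.0, 3.0), camera_to_enu))  # -> (101.0, 202.0, 8.0)
```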

I hope this answers your question.

Regards, Tanguy

Thanks for clarifying those points.
I just want to know whether there is any way in the Converter to directly save the transformed coordinates in PLY format, or whether we have to write them out ourselves (e.g. as a text file).

According to your reply, the Camera2Geo function takes a pose object and returns a GeoPose object. My question is: how can I create a pose object for each point?

A prompt answer would be really appreciated.

I think you may have missed the previously attached program file; it would be great if you could review it and point out where I am going wrong.

TIA

Hi @adujardin @Myzhar @mattrouss @TanguyHardelin @obraun-sl

Can any of you please help me out? Tagging others in this thread is the only way I have found to get this escalated. I have tried almost all the possible approaches and suggestions that the support members have provided, but nothing works. I have even rewritten the code in C++ just to ensure the previously mentioned function (this post) would help me achieve my goal, but it seems the SDK does not have the capabilities to serve this use case.

My problem, once solved, is something that will directly help others too, so I believe a proper resolution would be very helpful.

Again, here is the full context of the problem.

  • Take an SVO2 recording file which has GNSS data already ingested, and load it into the zed object using the code below, also initializing the positional tracking module.
int main() {

    // Create a ZED camera object
    Camera zed;

    sl::FusedPointCloud finalObj;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.coordinate_system = COORDINATE_SYSTEM::IMAGE; // Standard computer-vision coordinate system (right-handed, Y down, Z forward)
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters
    init_parameters.depth_mode = DEPTH_MODE::ULTRA; // Use ULTRA depth mode
    init_parameters.input.setFromSVOFile("zed-sdk/global localization/recording/python/output/ZED_SN35169074_25-03-2025_17-08-22.svo2");
    init_parameters.sdk_verbose = 1;

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters. Positional tracking needs to be enabled before using spatial mapping
    sl::PositionalTrackingParameters tracking_parameters;
    tracking_parameters.mode = sl::POSITIONAL_TRACKING_MODE::GEN_2;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    auto display_resolution = sl::Resolution(1280, 720);


    sl::Fusion fusion;
    sl::InitFusionParameters init_fusion_param;
    init_fusion_param.coordinate_units = UNIT::METER;
    init_fusion_param.coordinate_system = COORDINATE_SYSTEM::IMAGE;
    init_fusion_param.output_performance_metrics = true;
    init_fusion_param.verbose = true;
    init_fusion_param.timeout_period_number = 66;

    sl::FUSION_ERROR_CODE fusion_init_code = fusion.init(init_fusion_param);
    if (fusion_init_code != sl::FUSION_ERROR_CODE::SUCCESS) {
        std::cerr << "[Fusion][ERROR] Failed to initialize fusion, error: " << fusion_init_code << std::endl;
        return EXIT_FAILURE;
    }
    // Enable odometer publishing:
    sl::CommunicationParameters configuration;
    zed.startPublishing(configuration);
    /// Run a first grab for starting sending data:
    while (zed.grab() != sl::ERROR_CODE::SUCCESS) {
    }

    sl::CameraIdentifier uuid(zed.getCameraInformation().serial_number);
    fusion.subscribe(uuid, configuration, sl::Transform());
    // Enable positional tracking for Fusion object
    sl::PositionalTrackingFusionParameters tracking_fusion_param;
    tracking_fusion_param.enable_GNSS_fusion = true;
    sl::GNSSCalibrationParameters gnss_calibration_parameters;
    gnss_calibration_parameters.target_yaw_uncertainty = 1e-2;
    gnss_calibration_parameters.enable_translation_uncertainty_target = false;
    gnss_calibration_parameters.target_translation_uncertainty = 15e-2;
    gnss_calibration_parameters.enable_reinitialization = false;
    gnss_calibration_parameters.gnss_vio_reinit_threshold = 5;
    gnss_calibration_parameters.enable_rolling_calibration = true;
    gnss_calibration_parameters.gnss_antenna_position = sl::float3 (1.0, 0.0, 0.25);
    tracking_fusion_param.gnss_calibration_parameters = gnss_calibration_parameters;
    if(sl::FUSION_ERROR_CODE tracking_error_code = fusion.enablePositionalTracking(tracking_fusion_param); tracking_error_code != sl::FUSION_ERROR_CODE::SUCCESS){
        std::cout << "[Fusion][ERROR] Could not start tracking, error: " << tracking_error_code << std::endl;
        return EXIT_FAILURE;
    }

    SpatialMappingParameters spatial_mapping_parameters;
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RANGE::AUTO);
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
    spatial_mapping_parameters.max_memory_usage = 2560;
    spatial_mapping_parameters.save_texture = false;
    spatial_mapping_parameters.use_chunk_only = true;
    spatial_mapping_parameters.reverse_vertex_order = false;
    spatial_mapping_parameters.map_type = SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    spatial_mapping_parameters.stability_counter = 2;

    FusedPointCloud zed_mesh;
    FusedPointCloud fusion_mesh;

    RuntimeParameters runtime_parameters;
    // Use low depth confidence to avoid introducing noise in the constructed model
    runtime_parameters.measure3D_reference_frame =  REFERENCE_FRAME::WORLD;
    runtime_parameters.enable_depth = true;
    runtime_parameters.confidence_threshold = 50;
    Mat left_image(display_resolution, MAT_TYPE::U8_C4, sl::MEM::GPU);
    Mat point_cloud(display_resolution, MAT_TYPE::F32_C4, sl::MEM::GPU);
    POSITIONAL_TRACKING_STATE tracking_state = POSITIONAL_TRACKING_STATE::OFF;
    Pose pose;

    sl::Timestamp last_timestamp = 0;
    bool is_running = true;

    std::string gnss_key;
    try {
        gnss_key = zed.getSVODataKeys()[0];
    }catch (const std::runtime_error &e) {
        std::cerr << "Error while Locating GNSS Key in SVO File: " << e.what() << std::endl;
    }
  • Loop and grab frames from the zed object, retrieving the image, matching timestamps, and ingesting GNSS data into the fusion object, then enable spatial mapping and try to get a referenced point cloud. We don’t know how to save the referenced point cloud data, how the referenced point cloud is created, or where it is stored. Below is the continuation of the code alongside the explanation.
    bool mapping_enable = false;
    bool spatial_mapping_fusion_status = false;
    while(is_running) {
        if (zed.grab(runtime_parameters) == ERROR_CODE::SUCCESS) {
            // Retrieve the left image
            zed.retrieveImage(left_image, VIEW::LEFT, MEM::GPU, display_resolution);
            zed.retrieveMeasure(point_cloud, MEASURE::XYZBGRA, MEM::GPU, display_resolution);
            // Retrieve the camera pose data
            tracking_state = zed.getPosition(pose, REFERENCE_FRAME::WORLD);
            if (tracking_state != POSITIONAL_TRACKING_STATE::OK) {
                std::cout << "Waiting for Positional Tracking to Begin" << std::endl;
            }

            auto current_ts = zed.getTimestamp(TIME_REFERENCE::IMAGE);

            // TODO: capture and check the retrieveSVOData return status in the real program
            if (gnss_key.empty()) {
                std::cout << "GNSS Key is empty. Killing Program" << std::endl;
                break;
            }
            std::map<sl::Timestamp, sl::SVOData> data_map;
            zed.retrieveSVOData(gnss_key,
                data_map,
                last_timestamp,
                current_ts);

            sl::GNSSData input_gnss;
            for (auto &[fst, snd] : data_map) {
                SVOData& data = snd;  // Accessing the value (SVOData)
                std::string  s;
                bool data_string = data.getContent(s);

                auto tmp_array = json::array();
                try {
                    auto gnss_data_point = json::parse(s);
                    auto gnss_data_point_formatted = json::object();

                    if (!gnss_data_point["coordinates"].is_null()) {
                        gnss_data_point_formatted["coordinates"] = {
                            {"latitude", gnss_data_point["coordinates"]["latitude"].get<float>()},
                            {"longitude", gnss_data_point["coordinates"]["longitude"].get<float>()},
                            {"altitude", gnss_data_point["coordinates"]["altitude"].get<float>()},
                        };
                        gnss_data_point_formatted["ts"] = gnss_data_point["ts"];

                        float latitude_std = gnss_data_point["latitude_std"];
                        float longitude_std = gnss_data_point["longitude_std"];
                        float altitude_std = gnss_data_point["altitude_std"];

                        gnss_data_point_formatted["latitude_std"] = sqrt(latitude_std);
                        gnss_data_point_formatted["longitude_std"] = sqrt(longitude_std);
                        gnss_data_point_formatted["altitude_std"] = sqrt(altitude_std);

                        gnss_data_point_formatted["position_covariance"] = json::array({
                            longitude_std, 0, 0, 0, latitude_std, 0, 0, 0, altitude_std
                        });

                        gnss_data_point_formatted["original_gnss_data"] = gnss_data_point;
                    } else {
                        // Data already in the expected format; pass it through unchanged
                        gnss_data_point_formatted = gnss_data_point;
                    }

                    tmp_array.push_back(gnss_data_point_formatted);
                } catch (const std::runtime_error &e) {
                    std::cerr << "Error while reading GNSS data: " << e.what() << std::endl;
                }

                input_gnss.gnss_mode = GNSS_MODE::FIX_2D;
                input_gnss.gnss_status = GNSS_STATUS::SINGLE;
                input_gnss.longitude_std = tmp_array[0]["longitude_std"];
                input_gnss.latitude_std = tmp_array[0]["latitude_std"];
                input_gnss.altitude_std = tmp_array[0]["altitude_std"];
                input_gnss.position_covariance = tmp_array[0]["position_covariance"];

                if (is_microseconds(tmp_array[0]["ts"].get<uint64_t>())){
                    input_gnss.ts.setMicroseconds(tmp_array[0]["ts"].get<uint64_t>());
                }
                else if (is_nanoseconds(tmp_array[0]["ts"].get<uint64_t>())){
                    input_gnss.ts.setNanoseconds(tmp_array[0]["ts"].get<uint64_t>());
                }
                input_gnss.setCoordinates(tmp_array[0]["coordinates"]["latitude"],
                    tmp_array[0]["coordinates"]["longitude"],
                    tmp_array[0]["coordinates"]["altitude"],
                    false
                    );
                sl::FUSION_ERROR_CODE ingest_status = fusion.ingestGNSSData(input_gnss);
                if (ingest_status != sl::FUSION_ERROR_CODE::SUCCESS and ingest_status != sl::FUSION_ERROR_CODE::NO_NEW_DATA_AVAILABLE) {
                    std::cout << "Ingest error occurred when ingesting GNSSData: " << ingest_status << std::endl;
                }else {
                    std::cout << "Ingest OK" << std::endl;
                    break;
                }
            }
            last_timestamp = current_ts;
            if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS){
                if (!mapping_enable) {
                    sl::ERROR_CODE mapping_status = zed.enableSpatialMapping(spatial_mapping_parameters);
                    if (mapping_status == sl::ERROR_CODE::SUCCESS) {
                        mapping_enable = true;
                    }
                }
                if (!spatial_mapping_fusion_status) {
                    sl::SpatialMappingFusionParameters spatial_mapping_fusion_parameters;
                    spatial_mapping_fusion_parameters.set(SpatialMappingParameters::MAPPING_RANGE::AUTO);
                    spatial_mapping_fusion_parameters.set(SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
                    spatial_mapping_fusion_parameters.max_memory_usage = 2000;
                    spatial_mapping_fusion_parameters.use_chunk_only = true;
                    spatial_mapping_fusion_parameters.map_type = SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
                    spatial_mapping_fusion_parameters.stability_counter = 2;
                    sl::FUSION_ERROR_CODE fusion_mapping_status = fusion.enableSpatialMapping(spatial_mapping_fusion_parameters);
                    if (fusion_mapping_status == sl::FUSION_ERROR_CODE::SUCCESS) {
                        spatial_mapping_fusion_status = true;
                    }
                }
// How to work from here ?
                sl::Pose fused_pose;
                sl::POSITIONAL_TRACKING_STATE state = fusion.getPosition(fused_pose);
                float yaw_std;
                sl::float3 position_std;
                fusion.getCurrentGNSSCalibrationSTD(yaw_std, position_std);
                if (yaw_std != -1.f) {
                    std::cout << ": calibration uncertainty yaw_std " << yaw_std << " rad position_std " << position_std
                            [0] << " m, " << position_std[1] << " m, " << position_std[2] << " m\t\t\t\r";
                }

                if (state == sl::POSITIONAL_TRACKING_STATE::OK) {
                    sl::GeoPose fused_geo_pose;
                    sl::GNSS_FUSION_STATUS gnss_fusion_status = fusion.getGeoPose(fused_geo_pose);
                    sl::GNSS_FUSION_STATUS geo_fusion_status = fusion.Camera2Geo(fused_pose, fused_geo_pose);
                    // zed.requestSpatialMapAsync();
                    fusion.requestSpatialMapAsync();
                    if (fusion.getSpatialMapRequestStatusAsync() == sl::FUSION_ERROR_CODE::SUCCESS) {
                        fusion.retrieveSpatialMapAsync(fusion_mesh);
                        fusion_mesh.save("test");
                    }
                    // zed.retrieveSpatialMapAsync(zed_mesh);
                    // fusion.getGeoPose();


                    // finalObj.updateFromChunkList(fused_geo_pose.pose_data)

                }
            }
        }
        else if (zed.grab() == ERROR_CODE::END_OF_SVOFILE_REACHED) {
            is_running = false;
            std::cout << "End of file" << std::endl;
            break;
        }
        else {
            is_running = false;
        }
    }
    // zed.extractWholeSpatialMap(zed_mesh);
    // zed_mesh.save("/home/Documents/test_cpp_zed.ply");
    fusion_mesh.save("/home/Documents/test_cpp_fused.ply");

    fusion.disablePositionalTracking();
    zed.disablePositionalTracking();
    zed.disableSpatialMapping();
    fusion.close();
    zed.close();
}

Also, I have attached the code here :point_right:
main.cpp (13.7 KB)

Please help me out, as this is urgent and needs attention; we have 6 ZED cameras to do a lot of work with, and we are stuck on our first goal.

If possible, please suggest a time when you can reply promptly to assist us better, or could we have a meeting to discuss the problem?


You can find the SVO file here: ZED_SN35169074_25-03-2025_17-10-01.svo2 - Google Drive

Hi team, I am working on creating a georeferenced point cloud using the ZED SDK. So far I have tried almost all available possibilities, and it seems that either the SDK is lacking the capabilities or I am doing something wrong.

I am new to the SDK and would appreciate the help. The related topic does not seem to have received any new input on the question.

Please help me resolve the issue.

Thanks
Aman Ranjan

Hi @ranjanport
Welcome to the Stereolabs community.

Please note that your question is off topic in this thread.
Please create a new thread posting all your questions and doubts.

@TanguyHardelin

Please suggest…