Using Zed SDK to generate GeoReferenced Point Cloud

Hi, Team @Myzhar

I am working on creating a georeferenced point cloud using the ZED SDK. So far I have tried almost all available possibilities, and it seems like either the SDK is lacking the capability or I am doing something wrong.

I am new to the SDK and would appreciate the help. I have merged code from the Fusion, Spatial Mapping, and Positional Tracking modules into a single script that processes an SVO file and produces a georeferenced point cloud. I am stuck at the point where I get the position from the getGeoPose function of the Fusion object and then use Camera2Geo, as suggested in various topics, but with no luck. It would be great if you could review my code and let me know whether I am doing something wrong, or simply tell me that this is not possible.

Here is the attached code for your reference :point_down:

#include <sl/Camera.hpp>
#include <sl/Fusion.hpp>
#include <cmath>
#include <nlohmann/json.hpp>

using json = nlohmann::json;
using namespace std;
using namespace sl;

inline bool is_microseconds(uint64_t timestamp) {
    // Check if the timestamp is in microseconds
    return (1'000'000'000'000'000 <= timestamp && timestamp < 10'000'000'000'000'000ULL);
}

inline bool is_nanoseconds(uint64_t timestamp) {
    // Check if the timestamp is in nanoseconds
    return (1'000'000'000'000'000'000 <= timestamp && timestamp < 10'000'000'000'000'000'000ULL);
}

int main() {

    // Create a ZED camera object
    Camera zed;

    sl::FusedPointCloud finalObj;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.coordinate_system = COORDINATE_SYSTEM::IMAGE; // Right-handed, Y-down, Z-forward (OpenCV-style) frame
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters
    init_parameters.depth_mode = DEPTH_MODE::ULTRA; // Set the depth mode
    init_parameters.input.setFromSVOFile("zed-sdk/global localization/recording/python/output/ZED_SN35169074_25-03-2025_17-08-22.svo2");
    init_parameters.sdk_verbose = 1;

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters. Positional tracking needs to be enabled before using spatial mapping
    sl::PositionalTrackingParameters tracking_parameters;
    tracking_parameters.mode = sl::POSITIONAL_TRACKING_MODE::GEN_2;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    auto display_resolution = sl::Resolution(1280, 720);


    sl::Fusion fusion;
    sl::InitFusionParameters init_fusion_param;
    init_fusion_param.coordinate_units = UNIT::METER;
    init_fusion_param.coordinate_system = COORDINATE_SYSTEM::IMAGE;
    init_fusion_param.output_performance_metrics = true;
    init_fusion_param.verbose = true;
    init_fusion_param.timeout_period_number = 66;

    sl::FUSION_ERROR_CODE fusion_init_code = fusion.init(init_fusion_param);
    if (fusion_init_code != sl::FUSION_ERROR_CODE::SUCCESS) {
        std::cerr << "[Fusion][ERROR] Failed to initialize fusion, error: " << fusion_init_code << std::endl;
        return EXIT_FAILURE;
    }
    // Start publishing so the Fusion module can subscribe to this camera:
    sl::CommunicationParameters configuration;
    zed.startPublishing(configuration);
    // Run a first grab to start sending data:
    while (zed.grab() != sl::ERROR_CODE::SUCCESS) {
    }

    sl::CameraIdentifier uuid(zed.getCameraInformation().serial_number);
    fusion.subscribe(uuid, configuration, sl::Transform());
    // Enable positional tracking for Fusion object
    sl::PositionalTrackingFusionParameters tracking_fusion_param;
    tracking_fusion_param.enable_GNSS_fusion = true;
    sl::GNSSCalibrationParameters gnss_calibration_parameters;
    gnss_calibration_parameters.target_yaw_uncertainty = 1e-2;
    gnss_calibration_parameters.enable_translation_uncertainty_target = false;
    gnss_calibration_parameters.target_translation_uncertainty = 15e-2;
    gnss_calibration_parameters.enable_reinitialization = false;
    gnss_calibration_parameters.gnss_vio_reinit_threshold = 5;
    gnss_calibration_parameters.enable_rolling_calibration = true;
    gnss_calibration_parameters.gnss_antenna_position = sl::float3 (1.0, 0.0, 0.25);
    tracking_fusion_param.gnss_calibration_parameters = gnss_calibration_parameters;
    if(sl::FUSION_ERROR_CODE tracking_error_code = fusion.enablePositionalTracking(tracking_fusion_param); tracking_error_code != sl::FUSION_ERROR_CODE::SUCCESS){
        std::cout << "[Fusion][ERROR] Could not start tracking, error: " << tracking_error_code << std::endl;
        return EXIT_FAILURE;
    }

    SpatialMappingParameters spatial_mapping_parameters;
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RANGE::AUTO);
    spatial_mapping_parameters.set(SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
    spatial_mapping_parameters.max_memory_usage = 2560;
    spatial_mapping_parameters.save_texture = false;
    spatial_mapping_parameters.use_chunk_only = true;
    spatial_mapping_parameters.reverse_vertex_order = false;
    spatial_mapping_parameters.map_type = SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
    spatial_mapping_parameters.stability_counter = 2;

    FusedPointCloud zed_mesh;
    FusedPointCloud fusion_mesh;

    RuntimeParameters runtime_parameters;
    runtime_parameters.measure3D_reference_frame = REFERENCE_FRAME::WORLD;
    runtime_parameters.enable_depth = true;
    // Use a low confidence threshold to avoid introducing noise in the constructed model
    runtime_parameters.confidence_threshold = 50;
    Mat left_image(display_resolution, MAT_TYPE::U8_C4, sl::MEM::GPU);
    Mat point_cloud(display_resolution, MAT_TYPE::F32_C4, sl::MEM::GPU);
    POSITIONAL_TRACKING_STATE tracking_state = POSITIONAL_TRACKING_STATE::OFF;
    Pose pose;

    sl::Timestamp last_timestamp = 0;
    bool is_running = true;

    // Note: operator[] on an empty vector is undefined behavior, it does not throw,
    // so check for emptiness explicitly instead of using try/catch.
    std::string gnss_key;
    auto svo_data_keys = zed.getSVODataKeys();
    if (!svo_data_keys.empty())
        gnss_key = svo_data_keys[0];
    else
        std::cerr << "No GNSS data key found in SVO file." << std::endl;

    bool mapping_enable = false;
    bool spatial_mapping_fusion_status = false;
    while(is_running) {
        if (zed.grab(runtime_parameters) == ERROR_CODE::SUCCESS) {
            // Retrieve the left image
            zed.retrieveImage(left_image, VIEW::LEFT, MEM::GPU, display_resolution);
            zed.retrieveMeasure(point_cloud, MEASURE::XYZBGRA, MEM::GPU, display_resolution);
            // Retrieve the camera pose data
            tracking_state = zed.getPosition(pose, REFERENCE_FRAME::WORLD);
            if (tracking_state != POSITIONAL_TRACKING_STATE::OK) {
                std::cout << "Waiting for Positional Tracking to Begin" << std::endl;
            }

            auto current_ts = zed.getTimestamp(TIME_REFERENCE::IMAGE);

            // Abort if the SVO does not contain a GNSS data stream
            if (gnss_key.empty()) {
                std::cout << "GNSS key is empty, exiting." << std::endl;
                break;
            }
            std::map<sl::Timestamp, sl::SVOData> data_map;
            zed.retrieveSVOData(gnss_key,
                data_map,
                last_timestamp,
                current_ts);

            sl::GNSSData input_gnss;
            for (auto &[data_ts, data] : data_map) {
                std::string s;
                if (!data.getContent(s))
                    continue; // skip entries whose content cannot be read

                auto tmp_array = json::array();
                try {
                    auto gnss_data_point = json::parse(s);
                    auto gnss_data_point_formatted = json::object();

                    if (!gnss_data_point["coordinates"].is_null()) {
                        gnss_data_point_formatted["coordinates"] = {
                            {"latitude", gnss_data_point["coordinates"]["latitude"].get<float>()},
                            {"longitude", gnss_data_point["coordinates"]["longitude"].get<float>()},
                            {"altitude", gnss_data_point["coordinates"]["altitude"].get<float>()},
                        };
                        gnss_data_point_formatted["ts"] = gnss_data_point["ts"];

                        float latitude_std = gnss_data_point["latitude_std"];
                        float longitude_std = gnss_data_point["longitude_std"];
                        float altitude_std = gnss_data_point["altitude_std"];

                        gnss_data_point_formatted["latitude_std"] = sqrt(latitude_std);
                        gnss_data_point_formatted["longitude_std"] = sqrt(longitude_std);
                        gnss_data_point_formatted["altitude_std"] = sqrt(altitude_std);

                        gnss_data_point_formatted["position_covariance"] = json::array({
                            longitude_std, 0, 0, 0, latitude_std, 0, 0, 0, altitude_std
                        });

                        gnss_data_point_formatted["original_gnss_data"] = gnss_data_point;
                    } else {
                        // Data is assumed to already be in the expected format
                        gnss_data_point_formatted = gnss_data_point;
                    }

                    tmp_array.push_back(gnss_data_point_formatted);
                } catch (const std::runtime_error &e) {
                    std::cerr << "Error while reading GNSS data: " << e.what() << std::endl;
                }

                if (tmp_array.empty() || tmp_array[0].is_null())
                    continue; // skip samples that failed to parse

                input_gnss.gnss_mode = GNSS_MODE::FIX_2D;
                input_gnss.gnss_status = GNSS_STATUS::SINGLE;
                input_gnss.longitude_std = tmp_array[0]["longitude_std"];
                input_gnss.latitude_std = tmp_array[0]["latitude_std"];
                input_gnss.altitude_std = tmp_array[0]["altitude_std"];
                input_gnss.position_covariance = tmp_array[0]["position_covariance"];

                if (is_microseconds(tmp_array[0]["ts"].get<uint64_t>())){
                    input_gnss.ts.setMicroseconds(tmp_array[0]["ts"].get<uint64_t>());
                }
                else if (is_nanoseconds(tmp_array[0]["ts"].get<uint64_t>())){
                    input_gnss.ts.setNanoseconds(tmp_array[0]["ts"].get<uint64_t>());
                }
                input_gnss.setCoordinates(tmp_array[0]["coordinates"]["latitude"],
                    tmp_array[0]["coordinates"]["longitude"],
                    tmp_array[0]["coordinates"]["altitude"],
                    false
                    );
                sl::FUSION_ERROR_CODE ingest_status = fusion.ingestGNSSData(input_gnss);
                if (ingest_status != sl::FUSION_ERROR_CODE::SUCCESS and ingest_status != sl::FUSION_ERROR_CODE::NO_NEW_DATA_AVAILABLE) {
                    std::cout << "Ingest error occurred when ingesting GNSSData: " << ingest_status << std::endl;
                }else {
                    std::cout << "Ingest OK" << std::endl;
                    break;
                }
            }
            last_timestamp = current_ts;
            if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS){
                if (!mapping_enable) {
                    sl::ERROR_CODE mapping_status = zed.enableSpatialMapping(spatial_mapping_parameters);
                    if (mapping_status == sl::ERROR_CODE::SUCCESS) {
                        mapping_enable = true;
                    }
                }
                if (!spatial_mapping_fusion_status) {
                    sl::SpatialMappingFusionParameters spatial_mapping_fusion_parameters;
                    spatial_mapping_fusion_parameters.set(SpatialMappingParameters::MAPPING_RANGE::AUTO);
                    spatial_mapping_fusion_parameters.set(SpatialMappingParameters::MAPPING_RESOLUTION::HIGH);
                    spatial_mapping_fusion_parameters.max_memory_usage = 2000;
                    spatial_mapping_fusion_parameters.use_chunk_only = true;
                    spatial_mapping_fusion_parameters.map_type = SpatialMappingParameters::SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD;
                    spatial_mapping_fusion_parameters.stability_counter = 2;
                    sl::FUSION_ERROR_CODE fusion_mapping_status = fusion.enableSpatialMapping(spatial_mapping_fusion_parameters);
                    if (fusion_mapping_status == sl::FUSION_ERROR_CODE::SUCCESS) {
                        spatial_mapping_fusion_status = true;
                    }
                }

                sl::Pose fused_pose;
                sl::POSITIONAL_TRACKING_STATE state = fusion.getPosition(fused_pose);
                float yaw_std;
                sl::float3 position_std;
                fusion.getCurrentGNSSCalibrationSTD(yaw_std, position_std);
                if (yaw_std != -1.f) {
                    std::cout << ": calibration uncertainty yaw_std " << yaw_std << " rad position_std " << position_std
                            [0] << " m, " << position_std[1] << " m, " << position_std[2] << " m\t\t\t\r";
                }

                if (state == sl::POSITIONAL_TRACKING_STATE::OK) {
                    sl::GeoPose fused_geo_pose;
                    sl::GNSS_FUSION_STATUS gnss_fusion_status = fusion.getGeoPose(fused_geo_pose);
                    sl::GNSS_FUSION_STATUS geo_fusion_status = fusion.Camera2Geo(fused_pose, fused_geo_pose);
                    // zed.requestSpatialMapAsync();
                    fusion.requestSpatialMapAsync();
                    if (fusion.getSpatialMapRequestStatusAsync() == sl::FUSION_ERROR_CODE::SUCCESS) {
                        fusion.retrieveSpatialMapAsync(fusion_mesh);
                        fusion_mesh.save("test");
                    }
                    // zed.retrieveSpatialMapAsync(zed_mesh);
                    // fusion.getGeoPose();


                    // finalObj.updateFromChunkList(fused_geo_pose.pose_data)

                }
            }
        }
        else if (zed.grab() == ERROR_CODE::END_OF_SVOFILE_REACHED) {
            is_running = false;
            std::cout << "End of SVO file reached." << std::endl;
            break;
        }
        else {
            is_running = false;
        }
    }
    // zed.extractWholeSpatialMap(zed_mesh);
    // zed_mesh.save("test-out.ply");
    fusion_mesh.save("fused-out.ply");

    fusion.disablePositionalTracking();
    zed.disablePositionalTracking();
    zed.disableSpatialMapping();
    fusion.close();
    zed.close();
}

Please help me resolve this issue.

Thanks
Aman Ranjan

@Myzhar @TanguyHardelin

Gentle Reminder!!

Hi @ranjanport,

I’ve reviewed your code and have some suggestions to enhance your 3D reconstruction process:

  1. Depth Mode: Set your depth mode to NEURAL for more accurate depth, which will improve the reconstruction quality.
  2. Update to 5.0 Release: If possible, switch to our 5.0 release. It offers a faster and more accurate depth mode, along with our new GEN_3 positional tracking mode.
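For reference, suggestion 1 is a one-line change in the init parameters of the code above (recent SDK versions also offer a NEURAL_PLUS mode for even more accurate, but slower, depth):

```cpp
init_parameters.depth_mode = DEPTH_MODE::NEURAL; // more accurate depth than ULTRA, better reconstruction
```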

Regarding your code, it should be capable of performing VIO / GNSS fusion. Could you please confirm this? Once VIO and GNSS are calibrated, you can use the Camera2Geo function to transform any position from the current camera pose world to a global GeoPose.

Best regards,
Tanguy

Hi, @TanguyHardelin
I have already switched to v5.0 of the SDK and will definitely try the GEN_3 possibilities.
But my question remains the same. When I use this condition:

if (state == sl::POSITIONAL_TRACKING_STATE::OK) {
    sl::GeoPose fused_geo_pose;
    sl::GNSS_FUSION_STATUS gnss_fusion_status = fusion.getGeoPose(fused_geo_pose);
    sl::GNSS_FUSION_STATUS geo_fusion_status = fusion.Camera2Geo(fused_pose, fused_geo_pose);
    // zed.requestSpatialMapAsync();
    fusion.requestSpatialMapAsync();
    if (fusion.getSpatialMapRequestStatusAsync() == sl::FUSION_ERROR_CODE::SUCCESS) {
        fusion.retrieveSpatialMapAsync(fusion_mesh);
        fusion_mesh.save("test");
    }
}

The getGeoPose function populates the fused geo-position variable, but when I then call fusion.Camera2Geo(fused_pose, fused_geo_pose), fused_geo_pose does not contain anything; it seems the variable is not being populated. May I know why this happens, and also how to save this fused_geo_pose to a file?

Thanks

Hello @ranjanport,
I think you are looking at the wrong status: you should start using Camera2Geo when the gnss_fusion_status goes to OK. Once that happens, your code should produce the same output as getGeoPose, since you are passing it the current camera position. The Camera2Geo function just converts a 3D position in the camera world (i.e., relative to where positional tracking started) into a global world position (once the calibration is finished).

That said, your fusion_mesh object is a point cloud containing many 3D positions in the camera world. What you should do is call Camera2Geo for each point in this point cloud; then each point will have a lat/lng/alt coordinate.
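A rough, untested sketch of that per-point conversion, which also writes the result to a CSV file (assuming the usual chunks/vertices layout of sl::FusedPointCloud and that calibration has already converged):

```cpp
#include <fstream>
#include <string>
#include <sl/Fusion.hpp>

// Convert every vertex of the fused point cloud (camera-world coordinates)
// to latitude/longitude/altitude and dump the result as CSV.
void saveGeoReferencedCloud(sl::Fusion& fusion,
                            const sl::FusedPointCloud& cloud,
                            const std::string& path) {
    std::ofstream out(path);
    out << "latitude,longitude,altitude\n";
    for (const auto& chunk : cloud.chunks) {
        for (const auto& v : chunk.vertices) { // xyz is the position, w packs the color
            sl::Pose point_pose;
            point_pose.pose_data.setTranslation(sl::Translation(v.x, v.y, v.z));
            sl::GeoPose geo;
            if (fusion.Camera2Geo(point_pose, geo) == sl::GNSS_FUSION_STATUS::OK)
                out << geo.latlng_coordinates.getLatitude(false) << ','  // degrees
                    << geo.latlng_coordinates.getLongitude(false) << ',' // degrees
                    << geo.latlng_coordinates.getAltitude() << '\n';     // meters
        }
    }
}
```

Note that this converts one point per call, so it can be slow for dense clouds; you may want to downsample the map first.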

You are not the first to ask about this kind of thing; we may release a sample for doing it. I think it would be useful.

Hi, @TanguyHardelin

I urge you to share the sample as soon as possible, because I have seen posts similar to mine, and I believe a sample script would be valuable for me and others too. Please provide a sample of how to do this in any language.

Actually, the reason for doing this is that for my project I want to use multiple cameras, and I have to build a software suite to generate point clouds that can be used directly for digitisation in the mapping industry, which will give us more accurate data during the process.

Also, if possible please share the code with me.

Thanks

Hi, @TanguyHardelin
Any update on the script? It would be beneficial for me if you could expedite the process.

TIA