Out-of-memory issue when generating a high-quality point cloud from an SVO file


I want to generate a point cloud from a video taken from a moving vehicle. I currently use a modified version of this ZED mapping example, but I'm running out of memory during the process.

As the vehicle is moving forward, I don't think I need to keep the old points/chunks that are behind the camera, so I am looking for a clean way to save the points associated with old frames and keep memory usage at a decent level.

Currently, the only way I have found to clean the memory is to restart the spatial mapping from the frame I'm currently on, by calling enable_spatial_mapping(spatial_mapping_parameters).

My current workaround is:

  • Wait until there are enough points inside my FusedPointCloud() (currently 2 million).
  • Save them in a temp file.
  • Clean the memory.
  • Repeat until all frames are processed.
  • Merge all temp files into one.

My main problem with that workaround is the gaps that appear between the sub-clouds. I assume this is due to the mapping being restarted by calling enable_spatial_mapping(spatial_mapping_parameters).

import sys
import time
import pyzed.sl as sl

def main():
    # Create a Camera object
    zed = sl.Camera()
    outpath = 'out_datas/mtt10-0004'
    points_number_by_parts = 2000000

    # Create a InitParameters object and set configuration parameters
    init_params = sl.InitParameters()
    init_params.camera_resolution = sl.RESOLUTION.HD2K  # Use HD2K video mode
    init_params.coordinate_units = sl.UNIT.METER  # Set coordinate units
    init_params.coordinate_system = sl.COORDINATE_SYSTEM.RIGHT_HANDED_Y_UP  # OpenGL coordinates
    init_params.depth_mode = sl.DEPTH_MODE.ULTRA
    init_params.depth_maximum_distance = 40
    init_params.depth_minimum_distance = -1
    init_params.depth_stabilization = 1

    # If applicable, use the SVO given as parameter
    # Otherwise use ZED live stream
    if len(sys.argv) == 2:
        filepath = sys.argv[1]
        print("Using SVO file: {0}".format(filepath))
        init_params.set_from_svo_file(filepath)

    # Open the camera
    status = zed.open(init_params)
    if status != sl.ERROR_CODE.SUCCESS:
        print(repr(status))
        exit(1)
    runtime_parameters = sl.RuntimeParameters()
    runtime_parameters.sensing_mode = sl.SENSING_MODE.FILL  # Use FILL sensing mode
    # Setting the depth confidence parameters
    runtime_parameters.confidence_threshold = 50
    runtime_parameters.textureness_confidence_threshold = 100
    # Get camera parameters
    pose = sl.Pose()  # Camera pose tracking data

    pymesh = sl.FusedPointCloud()  # Current incremental FusedPointCloud
    # image = sl.Mat()  # Left image from camera

    spatial_mapping_parameters = sl.SpatialMappingParameters()
    spatial_mapping_parameters.resolution_meter = 0.01
    spatial_mapping_parameters.range_meter = 10
    spatial_mapping_parameters.max_memory_usage = 24000
    spatial_mapping_parameters.use_chunk_only = False
    spatial_mapping_parameters.save_texture = True  # Set to True to apply texture over the created mesh

    # Enable positional tracking
    err = zed.enable_positional_tracking()
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        exit(1)

    init_pose = sl.Transform()

    # Configure spatial mapping parameters
    spatial_mapping_parameters.map_type = sl.SPATIAL_MAP_TYPE.FUSED_POINT_CLOUD

    # Enable spatial mapping
    err = zed.enable_spatial_mapping(spatial_mapping_parameters)
    if err != sl.ERROR_CODE.SUCCESS:
        print(repr(err))
        exit(1)

    parts = []
    nb_frames = zed.get_svo_number_of_frames()
    svo_position = 0
    while svo_position < (nb_frames - 1):
        if zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
            svo_position = zed.get_svo_position()
            # grab() has updated the image and advanced the mapping process

            zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
            state = zed.get_spatial_mapping_state()
            # Request the current map and wait (blocking) until it is ready
            zed.request_spatial_map_async()
            while zed.get_spatial_map_request_status_async() != sl.ERROR_CODE.SUCCESS:
                print("Waiting until the map is ready", end='\r')
            # Retrieve the fused point cloud built so far
            zed.retrieve_spatial_map_async(pymesh)
        if pymesh.get_number_of_points() > points_number_by_parts:
            save_part = f"{outpath}_part{str(len(parts) + 1).rjust(5, '0')}.obj"
            status = pymesh.save(save_part)
            if status:
                print(f"\nSaved {pymesh.get_number_of_points()} points under " + save_part)
            else:
                print(f"\nFailed to save under " + save_part)
            parts.append(save_part)

            # Clear the previous map data by restarting the spatial mapping
            zed.enable_spatial_mapping(spatial_mapping_parameters)
            pymesh = sl.FusedPointCloud()
        print(f"{svo_position}/{nb_frames - 1}", end='\r')

    # Extract any remaining points and save the last part as an obj file
    save_part = f"{outpath}_part{str(len(parts) + 1).rjust(5, '0')}.obj"
    status = pymesh.save(save_part)
    if status:
        print(f"\nSaved {pymesh.get_number_of_points()} points under " + save_part)
    else:
        print(f"\nFailed to save under " + save_part)
    parts.append(save_part)

    # Disable modules and close the camera
    zed.disable_spatial_mapping()
    zed.disable_positional_tracking()
    zed.close()

    print("merge parts files")
    with open(f"{outpath}.obj", 'w+') as finalFile:
        for part_path in parts:
            with open(part_path) as infile:
                line_number = 0
                print(f"extract points from {part_path} and write them in {outpath}.obj")
                for line in infile:
                    if not line.startswith('#'):
                        line_number += 1
                        print(f"    {line_number} lines writed", end='\r')

if __name__ == "__main__":

PS: I am also concerned about the point cloud quality I obtain with this code, which seems worse than what ZEDfu produces.

Here is some advice to avoid these artifacts (or at least reduce them):

As shown in the sample, avoid requesting the mesh at every frame; wait at least 0.5 s between two calls. Then check from time to time whether the mesh is ready. Do not do this in a blocking loop; remove your:

while zed.get_spatial_map_request_status_async() != sl.ERROR_CODE.SUCCESS:
    print("Waiting until the map is ready", end='\r')

Continue your main loop instead; this way the mapping process keeps running in the background, as does the image grab. When the mesh is ready, retrieve it; you can call this function from a separate thread if you don't want to block your main loop. Then you can check your mesh size and enable a new mapping process if needed (there is no other way for the moment).
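As a rough sketch of that non-blocking pattern (the 0.5 s interval, the flag names, and the final size check are illustrative assumptions, reusing the zed, runtime_parameters, and pymesh objects from your script):

import time

REQUEST_INTERVAL = 0.5  # seconds between two map requests (illustrative value)
last_request = time.time()
map_requested = False

while zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
    # Ask for a new map at most every REQUEST_INTERVAL seconds, not every frame
    if not map_requested and time.time() - last_request > REQUEST_INTERVAL:
        zed.request_spatial_map_async()
        map_requested = True
        last_request = time.time()
    # Poll once per iteration; never busy-wait inside the grab loop
    if map_requested and zed.get_spatial_map_request_status_async() == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_spatial_map_async(pymesh)  # could run in a worker thread
        map_requested = False
        # Check pymesh.get_number_of_points() here and restart spatial
        # mapping if the map has grown too large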

ZEDfu is just a fancy UI over the C++ API; there is no magic in it, it is purely based on public functions, so you should be able to achieve the same results.

The fact that you use a very high resolution may also be an issue, as it requires a lot of memory and computation; decreasing it a bit could yield a huge runtime gain.
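For example (illustrative values, not an official recommendation), a coarser map resolution and a lower camera resolution both reduce memory use and computation considerably:

spatial_mapping_parameters.resolution_meter = 0.05  # 5 cm instead of 1 cm
init_params.camera_resolution = sl.RESOLUTION.HD1080  # instead of HD2K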

Thanks for the answer.
My goal is to generate points from a whole video; runtime does not matter, I need the best possible quality.
Do you plan to add a better way to handle highly memory-consuming use cases in the future? For example, temporarily saving chunks that are no longer in the camera FOV to disk and reloading them when relevant.
I can temporarily live with a hacky solution like launching a new process when the mesh becomes too big, but I plan to work with even bigger videos (3+ hours versus 2 minutes currently) and I'm afraid that solution won't scale.
Do you know whether other clients of yours manage to work with long video rushes, and is there any documentation on how they achieve that?