Georeferenced Spatial Mapping with previously recorded Fusion data

I understand the Fusion API is still under development and some features are not yet finalised, and that the following limitations currently apply:

  • Spatial Mapping - Limited to a single camera
  • GNSS Fusion - Limited to a single camera (cannot be used with spatial mapping for georeferenced mesh)

I have a ZED X system with multiple cameras mounted on a vehicle, and I record each camera to SVO files in separate chunks.

Because recording is not continuous, I cannot guarantee good GNSS data for every individual SVO file. I therefore run Global Localization for the entire time the system is powered on, saving the current fused position and pose to a file, indexed by frame number and timestamp.
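Roughly, my logging-and-lookup scheme looks like the sketch below. The actual ZED Fusion / Global Localization calls are stubbed out with fabricated values, and the CSV layout is just my own convention, not anything from the SDK; only the idea of logging poses keyed by timestamp and later matching each SVO frame to the nearest logged pose is illustrated:

```python
import bisect
import csv
import io

# Columns of my pose log (my own convention, not a ZED SDK format):
# frame index, capture timestamp in ns, geo position, orientation quaternion.
FIELDS = ["frame", "timestamp_ns", "lat", "lon", "alt", "qx", "qy", "qz", "qw"]

def log_pose(writer, frame, timestamp_ns, geo_pose):
    """Append one fused pose (position + orientation) as a CSV row."""
    writer.writerow([frame, timestamp_ns, *geo_pose])

def load_poses(fp):
    """Read the log back as (timestamp, row) pairs, sorted by timestamp."""
    rows = [(int(r[1]), r) for r in csv.reader(fp)]
    rows.sort(key=lambda t: t[0])
    return rows

def nearest_pose(rows, timestamp_ns):
    """Find the logged pose closest in time to an SVO frame timestamp."""
    stamps = [t for t, _ in rows]
    i = bisect.bisect_left(stamps, timestamp_ns)
    candidates = rows[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t[0] - timestamp_ns))[1]

# Demo with fabricated values standing in for the fused GeoPose output.
buf = io.StringIO()
w = csv.writer(buf)
log_pose(w, 0, 1_000_000_000, [51.5000, -0.1200, 30.0, 0, 0, 0, 1])
log_pose(w, 1, 1_033_000_000, [51.5001, -0.1201, 30.1, 0, 0, 0, 1])

buf.seek(0)
rows = load_poses(buf)
match = nearest_pose(rows, 1_030_000_000)
print(match[0])  # frame index of the pose nearest in time
```

In post-processing, each frame of each SVO chunk can then be paired with the nearest logged fused pose, even when that chunk itself had poor or no GNSS coverage.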

My eventual goal is to produce a 3D textured & georeferenced map from all cameras using the pre-calculated fusion poses I have saved.

So will any future Fusion API updates allow me to do any of the following:

  • Process a 3D spatial map from multiple cameras
  • Georeference the spatial map with live GNSS data
    OR
  • Georeference the spatial map with pre-calculated Global Localization data

Also, can you tell me what priority this feature has in your development cycle, and whether you plan to release it in the next 3 or 6 months? Even a rough guideline would help me decide my own development priorities, and whether I should implement an in-house solution instead.