Python: keep data on the GPU

Hello,

I have developed some CUDA processing for the depth data (sl.MEASURE.XYZRGBA).

Since the ZED SDK already computes this data on the GPU, I would like to avoid copying it to the CPU only to transfer it back to the GPU afterwards.

I tried calling cam.retrieve_measure(ptCloud, sl.MEASURE.XYZRGBA, sl.MEM.GPU), but I get an error saying the attribute GPU does not exist, even though it is used on this page: Using the Depth Sensing API | Stereolabs.

How could I keep the data on the GPU and avoid these extra transfers, which cost a lot of time?
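
For reference, here is a minimal sketch of what I am trying to do; the commented-out line is the call that fails, and the sl.MEM.CPU call is the one that currently works for me:

```python
import pyzed.sl as sl

cam = sl.Camera()
if cam.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")

ptCloud = sl.Mat()
if cam.grab() == sl.ERROR_CODE.SUCCESS:
    # What I would like to do (raises an error: attribute GPU does not exist):
    # cam.retrieve_measure(ptCloud, sl.MEASURE.XYZRGBA, sl.MEM.GPU)

    # What currently works, but goes through host memory:
    cam.retrieve_measure(ptCloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU)
    xyzrgba = ptCloud.get_data()  # numpy array on the CPU
```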

Regards

Boris

Hello,

Indeed, you cannot keep the memory on the GPU with our Python wrapper. We have not found a way to do this (yet).
However, our wrapper is open source (GitHub - stereolabs/zed-python-api: Python API for the ZED SDK).
You are more than welcome to share any ideas you have about this kind of CUDA usage in Python.
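
To illustrate the extra transfers Boris mentions, the current workflow looks roughly like this (CuPy is only one example of how the data could be pushed back to the GPU for custom processing):

```python
import pyzed.sl as sl
import cupy as cp  # CuPy is just one example of a GPU array library

cam = sl.Camera()
if cam.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the camera")

pt_cloud = sl.Mat()
if cam.grab() == sl.ERROR_CODE.SUCCESS:
    # 1. The SDK computes the measure on the GPU, then copies it to host memory...
    cam.retrieve_measure(pt_cloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU)
    host_array = pt_cloud.get_data()       # numpy array on the CPU

    # 2. ...and the custom CUDA code copies it back to the device.
    device_array = cp.asarray(host_array)  # second transfer: host -> device
```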

Antoine


Hello,

We recently wanted to achieve the same thing, and I went down the rabbit hole of trying to figure this out.
I think I found an answer to this issue, and I have already raised a PR (https://github.com/stereolabs/zed-python-api/pull/230) containing the changes.

It is still not thoroughly tested, but I would like the community to take it from here.
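
To give an idea of what this enables, here is a rough, untested sketch of how a GPU-resident measure could be handed to a library like CuPy without a host round trip. The accessor names below (in particular the device-pointer getter) are placeholders for illustration, not necessarily what the PR exposes:

```python
import pyzed.sl as sl
import cupy as cp

cam = sl.Camera()
cam.open(sl.InitParameters())

ptCloud = sl.Mat()
if cam.grab() == sl.ERROR_CODE.SUCCESS:
    cam.retrieve_measure(ptCloud, sl.MEASURE.XYZRGBA, sl.MEM.GPU)

    # Placeholder accessor: some way to obtain the raw CUDA device pointer.
    dev_ptr = ptCloud.get_pointer(sl.MEM.GPU)
    height, width = ptCloud.get_height(), ptCloud.get_width()
    nbytes = height * width * 4 * 4  # 4 x float32 (X, Y, Z, packed RGBA) per pixel

    # Wrap the existing device allocation without copying it.
    # Note: GPU Mats may be pitch-allocated, so the real row stride should be
    # queried from the Mat rather than assuming a densely packed layout.
    mem = cp.cuda.UnownedMemory(dev_ptr, nbytes, ptCloud)
    xyzrgba_gpu = cp.ndarray((height, width, 4), dtype=cp.float32,
                             memptr=cp.cuda.MemoryPointer(mem, 0))
```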

Great,

I hope I will be able to test it soon.
Did you check whether it is possible to use sl.MEM.GPU with the latest version of the SDK (4.0)?
The documentation does not mention any restriction on it.

Actually, I worked with SDK 4.0, and its Python API still had no sl.MEM.GPU option. Additionally, the Python API only returned NumPy arrays (CPU only).