Depth map Unity

Hi everybody,

I am trying to figure out the depth map in Unity.
It seems the depth values I am getting are not linear?

I am reading the depth map values like this (code screenshot).

Basically I am taking the depth map from the ZED rendering plane, which is this:
depth = zedCamera.CreateTextureMeasureType(sl.MEASURE.DEPTH, resolution);

Then I am looking at a specific point in space to check the actual depth value.
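
In essence the read-back looks something like this (a simplified sketch only; the real code is in the screenshot, and the helper name and the RenderTexture round-trip here are just illustrative):

using UnityEngine;

// Sketch: copy the measure texture through a float RenderTexture so it can be read on the CPU,
// then return the value of one pixel. Lives inside a MonoBehaviour that already has the
// "depth" texture created above.
float ReadDepthAtPixel(Texture2D depth, int x, int y)
{
    RenderTexture rt = RenderTexture.GetTemporary(depth.width, depth.height, 0, RenderTextureFormat.RFloat);
    Graphics.Blit(depth, rt);

    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;

    Texture2D readable = new Texture2D(depth.width, depth.height, TextureFormat.RFloat, false);
    readable.ReadPixels(new Rect(0, 0, depth.width, depth.height), 0, 0);
    readable.Apply();

    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(rt);

    // Return the red channel of the sampled pixel, scaled by 255 (same as I did for color textures).
    return readable.GetPixel(x, y).r * 255.0f;
}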

As you can see, those values are not linear. It seems to be some sort of exponential curve, which does not make sense. I double-checked with the greyscale values in the depth map and I see a similar exponential-looking falloff towards zero as you move further away from the camera.

Is there anybody who could point me in the right direction to get solid, linear depth information out of the Unity SDK?

Thank you so much in advance :pray:



Hi @wimvanhenden-tool !

I think you get the correct values.
The default measurement unit of the ZED SDK is the meter. From your first screenshot, you get a “1162.763” distance value for 15 feet (last value in the console).

15 ft ≈ 4.5 m
1162.763 / 255 ≈ 4.5

So you should be able to retrieve the correct value by not multiplying by 255 in the first place, or by dividing by 255 afterwards.

Unfortunately, it’s not possible to change the measurement unit natively in our Unity plugin, so you need to convert the output from meters to whatever unit system you need.
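
For example (just a sketch; "MetersToFeet" is only an illustrative helper):

// The DEPTH measure already gives meters, so drop the * 255 and only convert the unit.
float MetersToFeet(float depthMeters)
{
    return depthMeters * 3.28084f; // 1 m ≈ 3.28 ft
}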


Hi @JPlou, you are absolutely right. I forgot about the *255 I had in there, left over from checking a previous color texture. That threw me off, but it all seems to be correct indeed.

Thank you!


Thank you for this; I couldn’t figure out how to get “meaningful” values out of the textures at all; buffering through an RT and back seems to clean the -4.31e8 values into “proper” NaNs.

That said, what I’d really like is a way to detect shadows from a GLSL shader. I am finally getting and properly handling floating-point depth values in GLSL (it seems to be as easy as setting the texture type to RFloat and grabbing the .r of the sample), but I cannot for the life of me find a way to recognize (and e.g. mask out) shadowed values (black-black in the visual render). col.r==0, col.r<=0, isnan(col.r), isinf(col.r), col.r!=col.r, and col.r==[sample from a known-shadowed corner].r all fail to trip as conditionals.

Fortunately, my immediate use case seems adequately solved by setting depth mode to NEURAL and enabling Advanced → Enable Fill Mode to simply remove shadow pixels, but we may need to do more with the raw data in the future.