Can we calculate the depth of the rendered scene efficiently?

Hello, everyone!
I am new to Cesium.

I want to get the depth of the mesh scene rendered on my screen. Currently, I can only use viewer.scene.pickPosition() to get the depth at each window pixel, but looping over every pixel of the window is time-consuming.

Is there any way to get the depth of the whole mesh scene in one pass?

Thanks and best regards,

Eric.

Hi @EricLee,

This is a great question! It seems like you are looking for a way to query the depth at each pixel in a scene. I am not sure if our API supports a “simple” way of achieving this functionality. I’d love to learn more about the method that you are currently using to do this. What are you planning to do with the depth data that you generate? What do you mean by whole depth?

-Sam

Dear Sam,

Thanks for your attention. Currently, I just use your functions “var intersection = viewer.scene.pickPosition()” and “Cesium.Cartesian3.distance(viewer.camera.position, intersection)” to get the depth of each pixel in the rendered scene. I can loop over all pixels to collect the depths, but each iteration takes around 20 ms, so the total cost is not affordable if I want to process multiple scenes.
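Roughly, my loop looks like the sketch below (just a sketch, assuming a standard Cesium Viewer named viewer; the handling of pixels with no geometry is only illustrative):

```js
// Sketch of the per-pixel depth loop described above (assumes a Viewer named `viewer`).
const canvas = viewer.scene.canvas;
const depths = new Float32Array(canvas.width * canvas.height);
const windowPosition = new Cesium.Cartesian2();

for (let y = 0; y < canvas.height; ++y) {
  for (let x = 0; x < canvas.width; ++x) {
    windowPosition.x = x;
    windowPosition.y = y;
    // pickPosition reconstructs the world position under one window coordinate.
    const intersection = viewer.scene.pickPosition(windowPosition);
    depths[y * canvas.width + x] = Cesium.defined(intersection)
      ? Cesium.Cartesian3.distance(viewer.camera.position, intersection)
      : Number.NaN; // no geometry under this pixel (illustrative handling)
  }
}
```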

Actually, I want to know: when the platform first renders the mesh scene, does it already contain the depth information for the whole scene? If it exists, maybe we could collect it all at once. Otherwise, do you have any ideas or recommendations for how to realize this? Many thanks.

I want to generate RGB-D datasets of urban scenes for research use. Depth provides another layer of semantics that helps with urban scene understanding.

Thanks and best regards,

Eric.

@EricLee

Your current solution seems intuitive to me. While it is costly in terms of runtime, I am not sure if there is a more efficient workaround.

I am not familiar with a way to collect the depth information for the whole scene as our platform renders the mesh. To the best of my knowledge, this information is not directly exposed at runtime.

Any suggestions from the rest of the community?

Best,
Sam

Dear Sam,

OK. Got it. Thank you for your information.

I may explore more about it.

Best,

Eric.


Well, this is what sampleHeight() and sampleHeightMostDetailed() are all about: you pass in the positions you want to sample the height of, and you get the heights back (via a Promise for the latter).
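For example, something like this (only a sketch; the coordinates are placeholders and I’m assuming a standard Viewer named viewer):

```js
// Batched height query with sampleHeightMostDetailed (placeholder coordinates).
const positions = [
  Cesium.Cartographic.fromDegrees(-75.59, 40.03),
  Cesium.Cartographic.fromDegrees(-75.60, 40.04),
  Cesium.Cartographic.fromDegrees(-75.61, 40.05),
];

// Resolves once the most detailed tiles covering these positions have loaded;
// the returned Cartographics have their height property filled in.
viewer.scene.sampleHeightMostDetailed(positions).then(function (updated) {
  updated.forEach(function (p) {
    console.log(
      Cesium.Math.toDegrees(p.longitude),
      Cesium.Math.toDegrees(p.latitude),
      p.height
    );
  });
});
```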

There’s been some talk in the past (both here and in various GitHub issues) about having API access to height data, or a flag to retain certain data (a lot of data gets loaded and then freed as it hits the 3D rendering engine). It’s a tricky one, though, because the scene is trying to render fast and use as little memory as possible. I know some of that data lives in the rendering engine but is inaccessible, and I don’t actually know whether code could be written to access it any faster than the current way (unless the height data were made available on load, for example through an event we could subscribe to). That way we could pull out the data we need for a given area without having to store the data for the whole scene.

But those calls are still somewhat time-consuming. Depending on how large the area is, it might be worth looking at the source data instead of the rendered scene. For example, if you need precise heights across a large area, I would consider creating a server-side service: you pass in an array of positions and get back the same array filled from the original data (the DEM). On the server you can index that data and pull out the info far more quickly than sampling with ray intersections in the UI.
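If you’d rather stay client-side, a rough illustration of hitting the source terrain data (not the server-side service I described above, and with placeholder coordinates) is sampleTerrainMostDetailed(), which queries the terrain provider directly rather than the rendered scene:

```js
// Query the source terrain data (DEM) via the terrain provider (placeholder coordinates).
const points = [
  Cesium.Cartographic.fromDegrees(116.39, 39.91),
  Cesium.Cartographic.fromDegrees(116.40, 39.92),
];

Cesium.sampleTerrainMostDetailed(viewer.terrainProvider, points).then(function (samples) {
  samples.forEach(function (p) {
    console.log("terrain height:", p.height);
  });
});
```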

Hope any of that’s helpful.

Cheers,

Alex

Hi!
Thanks for the discussion. Yes, for height sampling we can index prepared data (e.g. a DEM) to extract the information more quickly. Unlike the height of a target position, though, depth is a value relative to the camera’s position, so it cannot be known in advance.

Actually, in the real world we can use cameras with depth sensors to capture photos together with their depth information in one shot, which is very efficient. By contrast, in the 3D platform it seems we can only capture the scene (simulating taking a photo in the real world) without easily collecting the depth information: the scene itself is rendered, but the depth data is not exposed.

We can only cast a ray from the camera into the mesh scene to measure the depth one pixel at a time. If we could cast all the rays at once, maybe we could compute the depth information much more efficiently.

Cheers,

Eric.