Implementing pick with a distance shader in 3D Tiles

Hello, I am trying to save each pixel's rendered 3D position (the raycast result) at image generation time, instead of doing 2D-to-3D picking afterwards, since that is too slow for many picks at once. My approach so far is to add position data with a custom shader and then generate a 3-channel floating-point image holding the x, y, z coordinates.

The shader seems to work fine, but I can't read back the resulting data.
1. Is there a better way of doing this?
2. How can I read float3 data from the image?

const customShader = new Cesium.CustomShader({
    fragmentShaderText: `
        void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
        {
            // Distance from the camera to the fragment, in eye space
            float distanceFromCamera = length(fsInput.attributes.positionEC);

            // Normalize the distance for color mapping (4000.0 m = far limit)
            float normalizedDistance = clamp(distanceFromCamera / 4000.0, 0.0, 1.0);

            // Apply a color gradient:
            // Near (1 m) -> Green, Mid (2000 m) -> Yellow, Far (4000 m) -> Red
            vec3 nearColor = vec3(0.0, 1.0, 0.0); // Green
            vec3 midColor  = vec3(1.0, 1.0, 0.0); // Yellow
            vec3 farColor  = vec3(1.0, 0.0, 0.0); // Red

            // Blend colors smoothly across the two halves of the range
            vec3 color = mix(nearColor, midColor, smoothstep(0.0, 0.5, normalizedDistance));
            color = mix(color, farColor, smoothstep(0.5, 1.0, normalizedDistance));

            // Apply the distance-based color to the material
            material.diffuse = color;
        }
    `,
});
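
For completeness, this is roughly how I try to read the frame back on the JavaScript side. It is only a sketch: it assumes the Viewer is created with preserveDrawingBuffer enabled, and getImageData only returns 8 bits per channel, which is exactly where I get stuck with float data:

    // Sketch only: preserveDrawingBuffer keeps the last frame readable
    // from the canvas after the render.
    const viewer = new Cesium.Viewer("cesiumContainer", {
        contextOptions: { webgl: { preserveDrawingBuffer: true } },
    });

    function readFrameRGBA(viewer) {
        viewer.render(); // make sure the canvas holds a fresh frame

        const source = viewer.scene.canvas;
        // Copy the WebGL canvas into a 2D canvas so getImageData works.
        const copy = document.createElement("canvas");
        copy.width = source.width;
        copy.height = source.height;
        const ctx = copy.getContext("2d");
        ctx.drawImage(source, 0, 0);

        // Uint8ClampedArray of RGBA bytes: only 8 bits per channel,
        // so a float3 position cannot come out of a single pass.
        return ctx.getImageData(0, 0, copy.width, copy.height).data;
    }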

Hi @galad ,

Thanks for your post and welcome to the Cesium community.

I unfortunately do not quite understand your question. Do you mind expanding a bit to explain what you are hoping to accomplish with this custom shader in CesiumJS?

Hopefully with a better understanding we can point you in the right direction.
Thanks,
Luke

I want to perform 2D-to-3D ray casting for multiple points in an image, including points on 3D Tiles.

Right now, the naive approach is to call viewer.scene.pick(position), but it takes ~20 ms per point; for 500 points this becomes too slow. There’s a PR (CesiumGS/cesium#9961) that speeds up a single pick operation, but I need something that scales better.
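
For context, the naive version looks roughly like this (positions is my own array of window coordinates; treat the names as placeholders):

    // Naive baseline: one pick per window coordinate.
    // At ~20 ms per call, 500 points take around 10 seconds.
    const results = positions.map((windowPosition) => {
        const object = viewer.scene.pick(windowPosition);
        // pickPosition returns the world-space Cartesian3 under the pixel
        // (only available when scene.pickPositionSupported is true).
        const cartesian = viewer.scene.pickPosition(windowPosition);
        return { object, cartesian };
    });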

I’m wondering if there’s a way (via a shader or another method) to store 3D position data during rendering—so instead of an RGB image, I’d get an XYZ floating-point image where each pixel contains the corresponding 3D position.
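
To make that concrete, here is the kind of packing I have in mind for an 8-bit render target: normalize one coordinate into [0, 1) and spread it across the RGB channels, rendering one pass per coordinate. This is only a sketch, and u_maxDistance is a uniform I would declare myself:

    const packingShader = new Cesium.CustomShader({
        uniforms: {
            u_maxDistance: { type: Cesium.UniformType.FLOAT, value: 4000.0 },
        },
        fragmentShaderText: `
            void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
            {
                // Normalize one scalar (here: eye-space distance) into [0, 1).
                float d = clamp(length(fsInput.attributes.positionEC) / u_maxDistance, 0.0, 0.999999);

                // Spread the value across three 8-bit channels (~24 bits total).
                vec3 enc = fract(d * vec3(1.0, 255.0, 65025.0));
                enc -= enc.yzz * vec3(1.0 / 255.0, 1.0 / 255.0, 0.0);

                material.diffuse = enc;
                material.alpha = 1.0; // stay opaque so blending cannot corrupt the bits
            }
        `,
    });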

The shader used for depth mapping shows that this information exists, but I haven’t figured out how to extract the full frame data.

  1. Does this approach make sense?
  2. How can I retrieve a 3-channel floating-point image with this data? (The decoding sketch below shows what I mean.)
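
And for question 2, this is the decoding I would pair with the packing sketch above, applied to the RGBA bytes read back from the canvas (maxDistance has to match the shader uniform; the names are again my own):

    // Inverse of the 3-channel packing: recover one normalized coordinate
    // from the RGBA bytes returned by getImageData/readPixels.
    function unpackCoordinate(data, pixelIndex, maxDistance) {
        const i = pixelIndex * 4;
        const r = data[i] / 255.0;
        const g = data[i + 1] / 255.0;
        const b = data[i + 2] / 255.0;
        // dot(rgb, (1, 1/255, 1/65025)) undoes the fract()-based packing.
        const normalized = r + g / 255.0 + b / 65025.0;
        return normalized * maxDistance;
    }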