Precise distance between point cloud points and an arbitrary point in world coordinates

What I am trying to achieve:

I have a large point cloud. The user chooses a point P in world coordinates (e.g., a landmark, a measuring point, a picked point of the point cloud, …). I now want to color all point cloud points according to their distance to P, say with a nice rainbow gradient.

What I have tried so far in custom shaders:

  1. Comparing P with POSITION_ABSOLUTE. However, POSITION_ABSOLUTE is not precise enough: instead of a nice gradient along the distance, I get color “steps”.

  2. I used czm_inverseModel to transform P to model coordinates and then compared P_MC to vsInput.attributes.positionMC. This produced a smooth gradient. However, when I move P, the gradient “jumps” towards the new position, as if P’s movement were not continuous. I guess this is because P is in world coordinates and has already lost precision by the time it reaches the shader, so P_MC inherits that error. I thought about applying the inverse model matrix to P on the CPU side; however, that model matrix seems to differ for every tile in the point cloud, and I only found the model matrix of the tileset, which is not helpful here.
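For reference, this is roughly how I understand attempt 2 would look as a Cesium CustomShader fragment (u_pickedPoint is a hypothetical uniform holding P in world coordinates, which is already truncated to float32 when uploaded):

```javascript
// Sketch of attempt 2; the precision loss happens when P is uploaded
// as a float32 uniform, before the subtraction ever runs.
const shaderText = `
void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material)
{
    // P arrives as float32 world coordinates, then is moved into the
    // tile's model space; precision is already gone at this point.
    vec3 pMC = (czm_inverseModel * vec4(u_pickedPoint, 1.0)).xyz;
    float d = distance(fsInput.attributes.positionMC, pMC);
    material.diffuse = vec3(fract(d / 10.0)); // stand-in for the rainbow ramp
}
`;
```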

  3. On the CPU side, I split P into two parts: P_high and P_low. I implemented float-float operators (see “Implementation of float-float operators on graphics hardware” (Archive ouverte HAL) and its references) so that I can evaluate czm_inverseModel * P_floatfloat. This didn’t work out: I still don’t have enough precision, or maybe there is a fault in my implementation (I was very thorough, though).
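For one coordinate, my split follows the same fixed-point scheme that Cesium’s EncodedCartesian3 uses (split at 2^16); a sketch in plain JavaScript:

```javascript
// EncodedCartesian3-style split of a double into two float32-friendly
// parts. The high part is an exact multiple of 65536 (exactly
// representable as float32), and high + low reconstructs the double.
function splitDouble(value) {
  let high, low;
  if (value >= 0) {
    const h = Math.floor(value / 65536) * 65536;
    high = h;
    low = value - h;
  } else {
    const h = Math.floor(-value / 65536) * 65536;
    high = -h;
    low = value + h;
  }
  return { high, low };
}

const { high, low } = splitDouble(6378137.123456789);
console.log(high + low === 6378137.123456789); // true
console.log(Math.fround(high) === high);       // true: high is float32-exact
// low is < 65536, so its float32 rounding error is only millimeters:
console.log(Math.abs(Math.fround(low) - low));
```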

  4. As a variation, I transformed only P_high with czm_inverseModel and then added P_low to the result. This gave nice results; sadly, it only works when the inverse model matrix contains a translation only (no rotation, no scaling), which is an assumption I cannot make for my use case.
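A 2D toy example (plain JavaScript, made-up values) of why attempt 4 breaks under rotation: since M·(high + low) = M·high + M·low, adding the untransformed P_low afterward silently drops the rotation of the low part.

```javascript
// Rotating only the high part and adding the low part afterward is
// exact only for pure translations; under rotation the dropped term
// is on the order of |low| * angle.
const angle = 0.3;
const rot = (x, y) => [Math.cos(angle) * x - Math.sin(angle) * y,
                       Math.cos(angle) * y + Math.sin(angle) * x];

const high = [6356992, 0];
const low = [21145.125, 0.5];
const exact = rot(high[0] + low[0], high[1] + low[1]); // rotate the full point
const approx = rot(high[0], high[1]);                  // rotate high only…
approx[0] += low[0]; approx[1] += low[1];              // …then add low, untransformed

// Error of thousands of meters for these values:
console.log(Math.hypot(exact[0] - approx[0], exact[1] - approx[1]));
```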

Digging into Cesium, I found EncodedCartesian3 and czm_translateRelativeToEye, so I thought I could transform P and the point cloud vertices relative to eye. But I do not understand how to make this work for my use case:

  1. czm_translateRelativeToEye expects model coordinates. What is the result? Still model coordinates, but translated?
  2. Since P is still in World Coordinates, how do I apply the “relative to eye” transformation here?
  3. In which space should the distance calculation happen then? A kind of “world coordinates but translated relative to eye” thing?
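My current understanding of the RTE trick, reduced to one coordinate in plain JavaScript (toy values): the large subtraction happens in double precision on the CPU, so the GPU only ever sees small, float32-friendly numbers.

```javascript
const P = 6378137.987654321;   // picked point, world coordinate (double)
const eye = 6378130.123456789; // camera position, world coordinate (double)

// Naive: truncate both to float32 first (what a plain uniform upload
// does), then subtract on the "GPU":
const naive = Math.fround(Math.fround(P) - Math.fround(eye));

// RTE-style: subtract in double precision first, truncate the small result:
const rte = Math.fround(P - eye);

console.log(Math.abs(naive - (P - eye))); // roughly 0.14 m of error here
console.log(Math.abs(rte - (P - eye)));   // well under a micrometer
```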

Is there a way to access the “encoded cartesian” of a point cloud vertex in a custom shader? Somehow, all point cloud vertex positions are rendered very precisely, so how can I access those values at the same precision for my distance calculation?

I hope you can help me. Maybe there are already conceptual problems in the steps I tried? Maybe this is all much easier with RTE. Here is a GIF of the precision problem: