Get view direction for fragmentShader

I’ve been playing around with using Shadertoy shaders in Cesium. Shadertoy is obviously built around the idea of a 2D viewport, which makes porting its shaders over tricky. Even so, I’ve managed to get some things working in Cesium that surprised me, though there are a few things I’d need to know to really get the effect I want.

Here’s an example of something I managed to make mostly work. It is a derivation of this Shadertoy app.

As you can see, when you zoom in and out, and pan, etc., the perspective of the shader does not change at all, which is jarring — it’s clear it is just being projected on.

How to resolve that? My guess is that I could pass info to the shader that would cause it to rotate and zoom the texture appropriately to match how it looks in the Cesium viewport. Where would I even start with that approach? Is there a simple way to do this that I am missing, or would it require me to set a uniform based on the camera and try to work from that?

I’ve looked for examples but not found anything that seemed to do what I was trying to do. But it seems like it must be possible, no? I would appreciate any guidance, etc. Thank you! I almost have this doing something quite amazing (obviously not with a hot air balloon model) so I am quite excited by the possibilities.

Hi @Nucleon,

This is certainly an interesting question: raymarching on a 3D primitive other than a screen space quad. I don’t think I have a clear answer for how that would work, but I have some ideas that might help you.

First of all, the screen space quad is pretty important to the typical raymarching technique, for two reasons:

  1. It’s a very convenient geometry. The quad is perfectly flat, and it’s easy to define the ray direction at any point on its surface (see the sketch after this list). If you’re rendering on something else, such as an arbitrary triangle mesh, it’s much more involved to determine which direction the ray should point. This makes me wonder if a large box or cylinder that completely surrounds where you want to put the SDF would be helpful, since those would be easier to compute ray directions for.
  2. The quad completely covers your field of view. There’s no boundary (except for the boundary of the screen as a whole), so it gives the illusion that the 3D scene goes on forever. Now imagine rendering to a quad that’s only a small rectangle in a larger scene: there is a boundary where the quad stops, so it would be easy for the user to find a camera angle where the SDF goes out of bounds, breaking the illusion. You might need a very careful scene layout to avoid this.
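
For context, here is roughly how a typical Shadertoy raymarcher builds its ray from the full-screen quad. This is a simplified sketch using Shadertoy’s own conventions (mainImage, iResolution), not anything CesiumJS-specific:

```glsl
// Typical Shadertoy setup: one ray per fragment of the full-screen quad.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Center the pixel coordinate and normalize by the screen height.
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;

    // Camera sits at the origin looking down -Z; 1.5 acts like a focal length.
    vec3 rayOrigin = vec3(0.0);
    vec3 rayDirection = normalize(vec3(uv, -1.5));

    // ...march rayOrigin + t * rayDirection through the SDF here...
    fragColor = vec4(rayDirection * 0.5 + 0.5, 1.0);
}
```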

Another (perhaps more practical) idea is to avoid the raymarching part and keep the SDF. You could define the SDF in model space (you can get the model space coordinates via fsInput.positionMC), and then, regardless of the camera view, wherever that SDF intersects the boundary of the mesh is what you’ll see. This is essentially a volumetric approach to procedural texturing. Certainly there are limitations, since you only see the SDF where it intersects the geometry, but it’s a lot easier to implement than raymarching proper. Also, for things like clouds/volumetric effects that rely on raymarching, you’d have to find another way to think about them.
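
To make that concrete, here is a rough, untested sketch of what I mean. The sphere SDF and colors are just placeholders, and note that in recent CesiumJS versions the model-space position is exposed as fsInput.attributes.positionMC:

```js
// Rough sketch of a model-space SDF used as a procedural texture via CustomShader.
const customShader = new Cesium.CustomShader({
  fragmentShaderText: `
    // Signed distance to a sphere of the given radius, centered at the model origin.
    float sphereSDF(vec3 p, float radius) {
      return length(p) - radius;
    }

    void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material) {
      // Model-space position of this fragment.
      vec3 p = fsInput.attributes.positionMC;
      float d = sphereSDF(p, 1.0);
      // Tint the surface wherever it falls inside the SDF (d < 0.0).
      material.diffuse = mix(vec3(1.0, 0.4, 0.1), material.diffuse, step(0.0, d));
    }
  `,
});

// Attach it to whatever you're rendering, e.g. a 3D Tileset:
// tileset.customShader = customShader;
```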

Best,
Peter

Thanks. I’m not sure I know enough of the underlying terminology to totally follow this (I’ll have to google “screen space quad”), but it’s a good place to start!

The perhaps naive approach I was going to try was to figure out what orientation was necessary to align the internal representation to the camera, and then basically do some kind of inverse of that when rendering the material, so the faux-3D image in the material is always at the right orientation. But I haven’t worked out the geometry for that. It seems feasible, though, with the right material shader. I would also then figure out, based on the distance between the camera and the model, how I would need to scale the shader to make it fit the model “viewport” or whatever. (Which for this case is pretty easy since it doesn’t need to exactly match the edges of the model, since it is cloudy/fiery.)
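
For what it’s worth, the kind of thing I was imagining looks roughly like this. It is completely untested, assumes an existing viewer, and the uniform names are ones I just made up:

```js
// Sketch: push camera info into the shader each frame so it can counter-rotate
// and rescale its faux-3D image.
const modelCenter = Cesium.Cartesian3.fromDegrees(-122.4, 37.8, 500.0); // placeholder model position

const customShader = new Cesium.CustomShader({
  uniforms: {
    u_cameraDistance: { type: Cesium.UniformType.FLOAT, value: 1.0 },
    u_cameraDirection: {
      type: Cesium.UniformType.VEC3,
      value: new Cesium.Cartesian3(0.0, 0.0, -1.0),
    },
  },
  fragmentShaderText: `
    void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material) {
      // ...the Shadertoy-derived shader would read u_cameraDistance and
      // u_cameraDirection here to orient and scale its internal scene...
    }
  `,
});

// Update the uniforms from the camera before every frame.
viewer.scene.preRender.addEventListener(function () {
  const camera = viewer.camera;
  customShader.setUniform(
    "u_cameraDistance",
    Cesium.Cartesian3.distance(camera.positionWC, modelCenter)
  );
  customShader.setUniform("u_cameraDirection", camera.directionWC);
});
```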

I had one dumb question that I thought I would ask here while I have you. :slight_smile:

From what I can tell, there is no way for a material shader to get information from the scene itself, right? You couldn’t make a mirror texture, because that would require knowing what is around the model beyond just the direction of its lighting. You also couldn’t show light diffusing through a substance other than by playing with the opacity — you couldn’t, for example, simulate the distorting effect that hot air has on light (heat distortion, heat waves, etc.), at least with regard to the surrounding imagery (you could do something entirely “within” the material shader, but it would have no obvious way to reference external materials — the terrain, the OSM models, etc.).

Am I right about the impossibility of the above? It’s not a deal-breaker, but it would be kind of incredible if one could do these things; I’ve seen nothing in the documentation or examples to suggest that one can, so I assume one can’t.

(Apologies if I use the wrong terminology above. Assume I don’t know the precise way to express these things… because I don’t! :slight_smile: All of this is in the service of trying to render somewhat plausible-looking volumetric cloud models in Cesium. I feel like I am close to coming up with things that look pretty good, as a combination of vertex and material shaders, essentially faking it.)

Hi @Nucleon,

Ah sorry for the confusion, by “screen space quad”, I just meant a quad that spans the whole viewport (which is often the entire screen). Interestingly, I searched through the CesiumJS codebase and it seems the term “viewport quad” is also used.

Regarding what you said about keeping track of the distance between the camera and the model, I do think that might be relevant. For example, if you knew the center of the model in eye (aka view) coordinates, you could use that as the origin for your SDF in the fragment shader. However, doing so would require the usual view and projection matrices, which are currently not exposed to the public API (see this issue).
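
If you wanted to experiment with that idea anyway, one rough, untested workaround is to compute the eye-space center yourself on the CPU each frame (via the camera’s viewMatrix property) and pass it in as a uniform. Whether that is sufficient for your case is an open question, and u_sdfOriginEC here is a made-up name:

```js
// Untested sketch: transform the model center into eye (view) coordinates each
// frame and hand it to the shader as the SDF origin. Assumes customShader was
// created with a u_sdfOriginEC uniform of type Cesium.UniformType.VEC3.
const modelCenterWC = Cesium.Cartesian3.fromDegrees(-122.4, 37.8, 500.0); // placeholder
const scratchEC = new Cesium.Cartesian3();

viewer.scene.preRender.addEventListener(function () {
  const centerEC = Cesium.Matrix4.multiplyByPoint(
    viewer.camera.viewMatrix,
    modelCenterWC,
    scratchEC
  );
  customShader.setUniform("u_sdfOriginEC", centerEC);
});
```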

Speaking of which, if you ever did want to make a raymarcher that spans the whole screen in CesiumJS, the PostProcessStage is one option.
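
A minimal stage looks roughly like this. The GLSL just passes the scene color through; a raymarcher would replace it, and the exact GLSL syntax varies a bit between CesiumJS versions:

```js
// Minimal full-screen post-process stage; a raymarcher would go in the shader body.
const stage = new Cesium.PostProcessStage({
  fragmentShader: `
    uniform sampler2D colorTexture;    // the scene rendered so far
    varying vec2 v_textureCoordinates; // full-screen quad UVs

    void main() {
      // Pass-through for now; march your SDF here and composite over the scene.
      gl_FragColor = texture2D(colorTexture, v_textureCoordinates);
    }
  `,
});
viewer.scene.postProcessStages.add(stage);
```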

Unfortunately CesiumJS doesn’t have a way to capture surrounding materials like you’re describing. That said, that’s an interesting concept to think about. One hypothetical way it might work (albeit somewhat expensive):

  1. To get color information about what’s nearby, one might render 6 views of the scene to a cubemap texture. Unfortunately, as far as I know, CesiumJS doesn’t have a built-in way to do this sort of rendering.
  2. If you just need to know what objects are nearby, instead of rendering the scene in color, you’d render each object in the scene with a different color to identify them uniquely. CesiumJS uses this sort of technique for picking and styling internally, but there’s no public API for this.

Now, if you just wanted to identify different materials within a single glTF model or 3D Tileset, we do support the use of feature IDs (see the proposed EXT_mesh_features glTF extension) in CustomShader; see this Sandcastle for an example.
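
As a rough, untested illustration of that last point (it assumes your asset actually has a feature ID set 0, and that tileset is your Cesium3DTileset):

```js
// Color fragments by their feature ID so different parts can get different materials.
const customShader = new Cesium.CustomShader({
  fragmentShaderText: `
    void fragmentMain(FragmentInput fsInput, inout czm_modelMaterial material) {
      // featureId_0 is the first feature ID set from EXT_mesh_features.
      float id = float(fsInput.featureIds.featureId_0);
      material.diffuse = (id < 0.5) ? vec3(1.0, 0.2, 0.2) : vec3(0.2, 0.5, 1.0);
    }
  `,
});

tileset.customShader = customShader;
```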