a few subterranean questions

Hi,

I'm interested in evaluating Cesium for some subterranean
visualisation tasks. For example, I'd like to push the earth's surface
down in altitude within a user-defined region so that custom geometry
can be displayed below the surface of the earth.

I've been able to build and run some samples locally. Some queries I
have so far are...
- Am I correct in understanding that imagery_layers will support e.g.
tiled png image overlays with transparency as generated by maptiler
[0] etc. Is there a sample in sandcastle that demonstrates that? I
saw a non-tiled image example in sandcastle 'materials' (for which the
texture wasn't working for me when I ran locally). I also noticed
OpenStreetMapImageryProvider.js which seemed to be forming a url in
the appropriate way, but didn't see an example of usage as an overlay.
- Am I correct in understanding terrain is tessellated and displaced
on the javascript side rather than in a vertex shader as per
webglearth? I figured it would still be possible to push an area down
in altitude via the vertex shader. The intent would be that it could
then be interactive without having to rebuild terrain. I tried
modifying e.g. position3DWC in CentralBodyVS.glsl but that didn't
appear to do what I expected.
- Is there an approximate timeline for when terrain and imagery_layers
will be merged into master?
- I noticed in the terrain sample there's often a step edge around
where land meets sea. Is this just due to the altitude data used?
- Assuming earth geometry can be pushed out of the way easily, what
other problems do you foresee in rendering geometry several km below
the earth surface?

Thanks!
Chris

[0] - http://www.maptiler.org/

Hi Chris,

I can answer a few of these. Kevin and Scott will probably provide more details.

Am I correct in understanding that imagery_layers will support e.g. tiled png image overlays with transparency as generated by maptiler [0] etc. Is there a sample in sandcastle that demonstrates that?

I believe so. You would use SingleTileProvider. I don’t know if there is an example with overlays yet, but there is a Sandcastle example with just a single image. When running a local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html

I saw a non-tiled image example in sandcastle ‘materials’ (for which the texture wasn’t working for me when I run locally).

Were you running with a local web server? If not, there are security restrictions that prevent us from reading images from the local filesystem. However, there is a web server in our build script. From the Cesium root directory, run

./Tools/apache-ant-1.8.2/bin/ant runServer

Then browse to http://localhost:8080/ and the images should appear. If your png files are not very large, e.g., 256x256 or 1024x1024, then overlapping and ordering several Polygon primitives each with an Image material for its png file is a lightweight way to go. However, currently the Polygon primitive doesn’t conform to terrain, just the WGS84 ellipsoid.

I also noticed OpenStreetMapImageryProvider.js which seemed to be forming a url in the appropriate way, but didn’t see an example of usage as an overlay.

There is a Sandcastle example for using the provider. When running your local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html

Am I correct in understanding terrain is tessellated and displaced on the javascript side rather than in a vertex shader as per webglearth?

Yes. Among other issues, if the vertex shader is reading from textures, not all hardware supports that (well, WebGL guarantees the feature exists, but the minimum number of vertex texture image units an implementation must support is zero).

I figured it would still be possible to push an area down in altitude via the vertex shader. The intent would be that it could then be interactive without having to rebuild terrain. I tried modifying e.g. position3DWC in CentralBodyVS.glsl but that didn’t appear to do what I expected.

Kevin will know better than me, but I suspect you’ll be able to displace the vertices along the opposite of their geodetic surface normals.

Is there an approximate timeline for when terrain and imagery_layers will be merged into master?

imagery_layers will come in first, followed by terrain. Since I’m not doing the work, I won’t put words in anyone’s mouth.

I noticed in the terrain sample there’s often a step edge around where land meets sea. Is this just due to the altitude data used?

Kevin can confirm but I suspect this is the nature of the SRTM data used.

Assuming earth geometry can be pushed out of the way easily, what other problems do you foresee in rendering geometry several km below the earth surface?

The terrain code may need some tweaks. For example, to prevent cracks between tiles, we drop down skirts to the WGS84 ellipsoid. If these tiles are below the ellipsoid, we need to be careful not to “drop” skirts up to the ellipsoid, and we may want adjacent tiles that are not subterranean to drop their skirts all the way down to the subterranean tiles, not just the ellipsoid. These are things we need to handle eventually - if we don’t already - to support undersea terrain. I can’t say they are on the short-term radar though, but contributions are always welcome.

I expect most of the other code to work OK under the ellipsoid - all primitives should render, occlusion culling won’t throw objects away, etc. The camera might need some tweaks depending on the view mode. Dan would know for sure.

Regards,

Patrick

Hey Chris, more comments below…

Hi Chris,

I can answer a few of these. Kevin and Scott will probably provide more details.

Am I correct in understanding that imagery_layers will support e.g. tiled png image overlays with transparency as generated by maptiler [0] etc. Is there a sample in sandcastle that demonstrates that?

I believe so. You would use SingleTileProvider. I don’t know if there is an example with overlays yet, but there is a Sandcastle example with just a single image. When running a local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html

In the imagery_layers and terrain branches it’s called SingleTileImageryProvider. But for tiles generated by MapTiler, you really want a TileMapServiceImageryProvider. We don’t have one yet, but it would be almost trivial to add. We do have a TMS TerrainProvider, though, and that’s what we use for the SRTM terrain hosted on cesium.agi.com. In fact, I used GDAL2Tiles to tile that terrain data. If you didn’t know, MapTiler is a GUI interface around GDAL2Tiles.

I also noticed OpenStreetMapImageryProvider.js which seemed to be forming a url in the appropriate way, but didn’t see an example of usage as an overlay.

There is a Sandcastle example for using the provider. When running your local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html

You know, I didn’t realize until I read this that OpenStreetMap is basically served via Tile Map Service (TMS). So in that case it’s pretty likely that you can use that provider with the tiles generated by MapTiler. If your imagery only covers an extent, rather than the entire world, set the ‘extent’ property on the associated ImageryLayer to the bounds of your imagery, in radians.
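As a rough sketch of the extent setup Kevin describes, converting degree bounds to the radians the ImageryLayer’s ‘extent’ property expects might look like the helper below. The helper and its property names are illustrative assumptions based on this thread, not a confirmed Cesium API.

```javascript
// Hypothetical helper: build an extent, in radians, from degree bounds,
// suitable for the 'extent' property Kevin mentions on ImageryLayer.
function extentFromDegrees(west, south, east, north) {
  var toRad = Math.PI / 180.0;
  return {
    west: west * toRad,
    south: south * toRad,
    east: east * toRad,
    north: north * toRad
  };
}

// e.g. an imagery footprint over part of south-eastern Australia:
var layerExtent = extentFromDegrees(144.0, -39.0, 149.0, -34.0);
```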

Am I correct in understanding terrain is tessellated and displaced on the javascript side rather than in a vertex shader as per webglearth?

Yes. Among other issues, if the vertex shader is reading from textures, not all hardware supports that (well, WebGL guarantees the feature exists, but the minimum number of vertex texture image units an implementation must support is zero).

Assuming you’re talking about reading a heightmap image in the vertex shader and using the read height to displace the vertex, Cozzi is right. The other problem is precision when zoomed in close. The vertex shader can easily compute a longitude, latitude, and height, but then it must do additional math to transform that to Cartesian X, Y, Z coordinates. The GPU’s single-precision floating-point numbers lack the precision necessary to do that for high-resolution terrain. In the future, we may use this technique for the low-detail terrain tiles and then switch to regular CPU-displaced (in a web worker) vertices for the high-detail tiles or when the hardware doesn’t support Vertex Texture Fetch. This should be a nice memory-saving feature.
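Kevin’s precision point can be shown numerically: near the Earth’s surface (~6.4e6 m from the centre), adjacent 32-bit floats are half a metre apart, so doing the geodetic-to-Cartesian conversion in single precision on the GPU cannot preserve sub-metre detail. A rough illustration, using Math.fround to emulate the GPU’s float32:

```javascript
// Emulate storing an Earth-scale coordinate in a 32-bit float, as a GPU
// vertex shader would. Math.fround rounds a double to float32 precision.
var a = 6378137.0;              // WGS84 semi-major axis, metres
var x = a + 0.123;              // a point 12.3 cm beyond the equator
var x32 = Math.fround(x);       // what the GPU would actually see
var error = Math.abs(x32 - x);  // the 12.3 cm is rounded away

// At this magnitude the float32 spacing is 0.5 m, so errors of up to
// ~0.25 m are unavoidable - far too coarse for high-resolution terrain.
```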

Your comment about WebGL Earth makes me think maybe we don’t know exactly what you’re referring to, though. I was under the impression that those guys stream terrain data as heights encoded in JSON. Do they then convert that JSON height data to an image so that they can do VTF in the vertex shader?

I figured it would still be possible to push an area down in altitude via the vertex shader. The intent would be that it could then be interactive without having to rebuild terrain. I tried modifying e.g. position3DWC in CentralBodyVS.glsl but that didn’t appear to do what I expected.

Kevin will know better than me, but I suspect you’ll be able to displace the vertices along the opposite of their geodetic surface normals.

Yes, that’s true. But the problem you ran into is probably that the 3D view doesn’t actually use position3DWC because it would jitter when zoomed in close. Instead, it uses the position3D, which is the vertex position relative to the center of the terrain tile, and transforms that to the screen using u_modifiedModelViewProjection. Take a look at “getPosition3DMode” in CentralBodyVS.glsl.
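The relative-to-center trick Kevin describes can be sketched in plain numbers: subtracting the tile centre in double precision on the CPU leaves small offsets that survive the trip through 32-bit GPU floats, whereas raw world coordinates do not (the centre itself is folded into u_modifiedModelViewProjection). An illustration, again with Math.fround standing in for the GPU:

```javascript
// Relative-to-center (RTC) rendering, sketched with Math.fround
// emulating the GPU's 32-bit floats.
var center = 6378000.0;     // tile centre, kept in double precision on the CPU
var world = 6378137.123;    // a vertex position in world coordinates

// Naive: send the world coordinate to the GPU directly - loses ~0.12 m.
var naiveError = Math.abs(Math.fround(world) - world);

// RTC: subtract the centre on the CPU (in doubles) and send only the
// small offset - the rounding error drops to micrometres.
var offset = world - center;
var rtcError = Math.abs(Math.fround(offset) - offset);
```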

Is there an approximate timeline for when terrain and imagery_layers will be merged into master?

imagery_layers will come in first, followed by terrain. Since I’m not doing the work, I won’t put words in anyone’s mouth.

I can’t make any promises, but it’s my goal to get imagery_layers into master ASAP, hopefully within a few weeks. Terrain may be a while longer.

I noticed in the terrain sample there’s often a step edge around where land meets sea. Is this just due to the altitude data used?

Kevin can confirm but I suspect this is the nature of the SRTM data used.

Yes, it’s a problem with the terrain data on cesium.agi.com, which is currently just a proof of concept. There are two problems I’m aware of. One is that we’re treating voids incorrectly. The other is that the data is supposed to be relative to mean sea level, but we’re treating it as if it’s relative to the WGS84 ellipsoid. These problems will be fixed before we go live with terrain support, of course.

Assuming earth geometry can be pushed out of the way easily, what other problems do you foresee in rendering geometry several km below the earth surface?

The terrain code may need some tweaks. For example, to prevent cracks between tiles, we drop down skirts to the WGS84 ellipsoid. If these tiles are below the ellipsoid, we need to be careful not to “drop” skirts up to the ellipsoid, and we may want adjacent tiles that are not subterranean to drop their skirts all the way down to the subterranean tiles, not just the ellipsoid. These are things we need to handle eventually - if we don’t already - to support undersea terrain. I can’t say they are on the short-term radar though, but contributions are always welcome.

Currently we always drop skirts down rather than connecting them to the ellipsoid, and the skirt length is a function of the estimated geometric error of the level. So I don’t expect to see problems with cracking with terrain below the ellipsoid.
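The skirt behaviour Kevin describes (length driven by the level’s estimated geometric error, so a tile still overlaps a coarser neighbour) can be sketched roughly as below. The multiplier and minimum are made-up values for illustration, not the ones Cesium uses.

```javascript
// Illustrative only: a skirt long enough to cover the worst-case gap
// between a tile and a coarser neighbour. Factor and minimum are
// hypothetical.
function skirtLength(estimatedGeometricError) {
  var factor = 4.0;    // hypothetical safety multiplier
  var minimum = 1.0;   // never shorter than a metre
  return Math.max(estimatedGeometricError * factor, minimum);
}
```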

Kevin

Thanks for the prompt replies. I'll look a bit further. Just a
couple more points...

- To clarify, I'm actually interested in visualising below the earth
crust for e.g. geothermal etc, though there may be some ocean floor
work as well.
- The texture problem I was having was due to the texture panel being
geolocated in the USA. I'm located in Australia so it was night time
in the USA when I was testing. This makes most of the 'materials'
examples turn black :wink: Maybe it's worth exposing the illumination
toggle for that example?
- I was having a look around beneath the terrain so noticed the skirts
you refer to. Is there another sphere rendered as black that sits
below the surface that I'd also need to displace? What I'm seeing may
just be the way the atmosphere renders.
- The webglearth example I was looking at uses 16 bit tiled images for
terrain encoded into red and green of a png [0]. This is then used by
the vertex shader [1]. They did mention the limited support for
vertex shader texture access somewhere.
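If the encoding Chris describes is the usual split of a 16-bit height across two 8-bit channels, decoding it might look like the sketch below. The channel order (red high byte, green low byte) and the units are assumptions; the real formula is in the WebGL Earth shader linked at [1].

```javascript
// Hypothetical decode of a height packed into the red (high byte) and
// green (low byte) channels of a PNG, as Chris describes for WebGL
// Earth's SRTM tiles. Channel order and units are assumptions.
function decodeHeight(r, g) {
  return r * 256 + g; // reassemble the 16-bit value, taken here as metres
}

decodeHeight(1, 44); // 300 - e.g. a 300 m sample
```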

Thanks,
Chris

[0] http://srtm.webglearth.com/srtm/2/1/1.png
[1] https://github.com/webglearth/webglearth/blob/master/we/shaders/earth-vs.glsl

As a follow on, I've been able to make a bit of progress with pushing
the ground down (see attached image). I've only noticed a couple of
artefacts so far...

- Tile on bottom right of image is being frustum culled since the
un-displaced tile is outside the view frustum. In the past I've used
a modified frustum (eg greater fov and/or push it back slightly) for
culling than for rendering to get around this sort of issue.
- Atmosphere stops at un-displaced globe surface resulting in a black
band on the horizon. I assume this should be straightforward for me
to address.

This is the modification I made to CentralBodyVS. Is it ok to be
using position3DWC to generate a normal since position3D is in tile
space? Is the offset I'm applying in metres?

    vec4 getPosition3DMode(vec3 position3DWC)
    {
        float os = -10000.0; // altitude push (units?)
        vec3 geodeticNormal = normalize(position3DWC); // use incoming position as geodetic normal
        vec2 txCoord = czm_ellipsoidWgs84TextureCoordinates(geodeticNormal);

        float mx = 0.906; // cutoff point in texture coords
        v_push = smoothstep(mx, mx + 0.00005, txCoord.x); // to push or not to push
        vec3 pmod = position3D + v_push * geodeticNormal * os;

        return u_modifiedModelViewProjection * vec4(pmod, 1.0);
    }

Btw it might be more intuitive (for me at least :slight_smile: ) if the
heightScale/heightOffset values in the terrain providers were
multipliers rather than divisors. At the moment you make the scale
smaller to get larger mountains.

Hi Chris,

As a follow on, I’ve been able to make a bit of progress with pushing the ground down (see attached image). I’ve only noticed a couple of artefacts so far…

Very cool. Thanks for sharing the screen shot, and I’m glad you’re making progress.

  • Tile on bottom right of image is being frustum culled since the un-displaced tile is outside the view frustum. In the past I’ve used a modified frustum (eg greater fov and/or push it back slightly) for culling than for rendering to get around this sort of issue.

The culling happens in the isTileVisible function in EllipsoidSurface. It should be straightforward to modify that to use a larger volume for culling. Alternatively, you could adjust the ‘tile.boundingSphere3D’ to account for your displacement.
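One way to sketch the bounding-sphere adjustment: if every vertex may be pushed up to maxPush metres along one direction, shifting the centre half-way along that direction and growing the radius by half the push still bounds both the original and the displaced geometry. The helper below is hypothetical; the real tile.boundingSphere3D is a Cesium BoundingSphere instance.

```javascript
// Grow a tile's bounding sphere so it still contains vertices pushed by
// up to maxPush metres along pushDirection (a unit vector). Shifting
// the centre by half the push and growing the radius by half the push
// gives a tight bound on both original and displaced positions.
function inflateForDisplacement(sphere, pushDirection, maxPush) {
  var half = maxPush * 0.5;
  return {
    center: [
      sphere.center[0] + pushDirection[0] * half,
      sphere.center[1] + pushDirection[1] * half,
      sphere.center[2] + pushDirection[2] * half
    ],
    radius: sphere.radius + half
  };
}
```

Any point p with |p − c| ≤ r, displaced by t·d with t in [0, maxPush], satisfies |p + t·d − c′| ≤ r + maxPush/2, since |t − maxPush/2| ≤ maxPush/2.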

  • Atmosphere stops at un-displaced globe surface resulting in a black band on the horizon. I assume this should be straightforward for me to address.

I don’t know offhand how to adjust the atmosphere to fill lower on the horizon (Cozzi?), but if you’d like to turn it off in the meantime you can do that by setting the showGroundAtmosphere property on the CentralBody to false.

This is the modification I made to CentralBodyVS. Is it ok to be using position3DWC to generate a normal since position3D is in tile space? Is the offset I’m applying in metres?

vec4 getPosition3DMode(vec3 position3DWC)
{
    float os = -10000.0; // altitude push (units?)
    vec3 geodeticNormal = normalize(position3DWC); // use incoming position as geodetic normal
    vec2 txCoord = czm_ellipsoidWgs84TextureCoordinates(geodeticNormal);

    float mx = 0.906; // cutoff point in texture coords
    v_push = smoothstep(mx, mx + 0.00005, txCoord.x); // to push or not to push
    vec3 pmod = position3D + v_push * geodeticNormal * os;

    return u_modifiedModelViewProjection * vec4(pmod, 1.0);
}

Yes, using position3DWC is reasonable, and positions are in meters. The only danger is you might get some jittering at close zoom levels. In other words, the displaced vertex positions will bounce around slightly as the camera moves because the normals move slightly due to lack of precision.

The other thing I noticed about this code is that you’re actually displacing along the geocentric surface normal rather than geodetic one. This might be “close enough”, but if you actually want to use the geodetic normal, take a look at Ellipsoid.geodeticSurfaceNormal in Ellipsoid.js or czm_geodeticSurfaceNormal in BuiltinFunctions.glsl.
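For reference, the geodetic surface normal Kevin mentions differs from the geocentric one by scaling each component of the position by the inverse squared radius of its axis before normalizing. On the WGS84 ellipsoid this mirrors what Ellipsoid.geodeticSurfaceNormal / czm_geodeticSurfaceNormal compute:

```javascript
// Geodetic surface normal on the WGS84 ellipsoid: scale each component
// by 1/radius^2 for its axis, then normalize. For a sphere (a == b)
// this reduces to the geocentric normalize(p).
var a = 6378137.0;          // WGS84 equatorial radius, metres
var b = 6356752.3142451793; // WGS84 polar radius, metres

function geodeticSurfaceNormal(p) {
  var nx = p[0] / (a * a);
  var ny = p[1] / (a * a);
  var nz = p[2] / (b * b);
  var len = Math.sqrt(nx * nx + ny * ny + nz * nz);
  return [nx / len, ny / len, nz / len];
}
```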

Btw it might be more intuitive (for me at least :slight_smile: ) if the heightScale/heightOffset values in the terrain providers were multipliers rather than divisors. At the moment you make the scale smaller to get larger mountains.

The idea is that those reflect the scale and offset that were applied to the terrain data when it was generated. The ‘createVerticesFromHeightmap’ web worker then undoes that scale and offset to return the terrain to meters. I can see how that’s confusing, though.
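The divisor semantics Chris points out can be sketched as follows. The exact decode in the createVerticesFromHeightmap worker may differ; this just illustrates why a smaller heightScale yields taller terrain.

```javascript
// Illustrative decode only: with heightScale used as a divisor, halving
// the scale doubles the resulting height in metres.
function heightFromSample(sample, heightScale, heightOffset) {
  return sample / heightScale + heightOffset;
}

heightFromSample(1000, 1.0, 0.0); // 1000 m
heightFromSample(1000, 0.5, 0.0); // 2000 m - smaller scale, taller terrain
```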

Kevin

Chris,

Answers to a few of your questions below. I’m sure Kevin will handle the more terrain specific ones.

I’m located in Australia so it was night time in the USA when I was testing. This makes most of the ‘materials’ examples turn black :wink: Maybe it’s worth exposing the illumination toggle for that example?

We have bigger plans for lighting since, as you noticed, when the sun is the only light source, objects on the other side of the globe turn black. The easiest thing to do for now is to place the sun at the camera position. That is, replace

scene.setSunPosition(Cesium.computeSunPosition(new Cesium.JulianDate()));

with

scene.setSunPosition(scene.getCamera().position);

Is there another sphere rendered as black that sits below the surface that I’d also need to displace? What I’m seeing may just be the way the atmosphere renders.

Right, you are looking at the sky atmosphere. The easiest thing to do is to turn it off for now. The sky atmosphere is an ellipsoid scaled to be 1.025 times the size of the WGS84 ellipsoid. The back faces are rendered, and a ray is shot through the atmosphere in the vertex shader. With a bit more work, you should be able to determine whether the atmosphere is above the terrain or not, but I don’t have a solid recommendation yet. Here is the original work on the atmosphere - http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html

Tile on bottom right of image is being frustum culled since the un-displaced tile is outside the view frustum. In the past I’ve used a modified frustum (eg greater fov and/or push it back slightly) for culling than for rendering to get around this sort of issue.

I’d suggest increasing the radius of the tile’s bounding sphere, so that it still encompasses the tile after displacement. You can probably adjust the center too, so the sphere fits tightly. I believe this will require some sphere changes throughout the quad tree. Kevin can advise.

Regards,

Patrick

Thanks Kevin & Patrick,

It was straightforward to address the culling issue with the approach
you suggested.

At the moment I'm ignoring the atmosphere/horizon issue. Maybe you've
already considered using an approach that also applies a distance
attenuation (fog) to scene geometry? If the current atmosphere model
doesn't support it, this one [0] does a nice job.

I'm really impressed with what I've seen so far of Cesium in terms of
functionality, documentation, license and support. I'm going to be
offline for a week or so but will hopefully be back to pester you with
more questions after that :slight_smile: Maybe in the meantime someone can tell
me how to set a view as per the 'camera reference frames' example, but
retain the ability to pan across the earth rather than be locked
looking at the one location? I tried addFlight() but that seems to
end up looking straight down. I'll also be interested in seeing where
the layered imagery is up to when I get back.

I'm sure you've already seen it, but in case you haven't the nokia
webgl demo [1] is worth a look. Click on one of the city names and
have a look around. An investigation of their geometry tile technique
[2] sounded similar to something you mention in the Cesium terrain
roadmap.

Cheers,
Chris

[0] http://www-evasion.imag.fr/Membres/Eric.Bruneton/#atmo
[1] http://maps.nokia.com/webgl/
[2] http://idiocode.com/2012/02/01/nokia-3d-map-tiles/

Chris,

Maybe you’ve already considered using an approach that also applies a distance attenuation (fog) to scene geometry? If the current atmosphere model doesn’t support it, this one [0] does a nice job.

I’ve looked a bit at Eric Bruneton’s atmosphere work. IIRC it requires some precomputed lookup tables that we’d rather not have. I have to take a closer look to be sure, so don’t take my word for it. I also looked at the recent atmosphere work in GPU Pro 3, but don’t have enough bandwidth at the moment (but contributions are welcome, of course). As for fog, we are planning a post-processing framework for fog, ambient occlusion, motion blur, depth of field, glow, etc. We did some hacks during a hackathon, which are in a branch, but they are not ready for use. We hope to have something by the end of the year.

I’m really impressed with what I’ve seen so far of Cesium in terms of functionality, documentation, license and support.

Awesome, thanks. We actually have a long way to go with the reference doc, but we are getting there. We chose the license to reach the largest number of people; we are not interested in restricting use.

Maybe in the meantime someone can tell me how to set a view as per the ‘camera reference frames’ example, but retain the ability to pan across the earth rather than be locked looking at the one location?

Perhaps this will do the trick:

var extent = new Cesium.Extent(west, south, east, north);

scene.viewExtent(extent, Cesium.Ellipsoid.WGS84);

An investigation of their geometry tile technique [2] sounded similar to something you mention in the Cesium terrain roadmap.

I haven’t read this, but they also have a video from WebGL Camp Europe as do we (on CZML, not tiling).

Regards,

Patrick

Thanks for those links Patrick, it was good to get a bit more
background on the projects.

I managed to get my cameras fixed by using camera.lookAt() but without
constraining the spindle (previously I was going directly from the
camera reference frames example). I also noticed logging of
pos/dir/up in the ImageryTest.html which should help me set up some
reference camera angles.

Kevin, I managed to get a MapTiler based image layer working using the
latest imagery_layers branch and setting up a
TileMapServiceImageryProvider.js. The only differences from
OpenStreetMapImageryProvider.js were...
- flipping the y tile index
- limiting the extent
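The y-index flip mentioned above comes from OSM-style schemes counting tile rows from the top of the map while TMS counts from the bottom; the conversion amounts to one line:

```javascript
// Convert an OSM/XYZ row index (origin at top-left) to a TMS row index
// (origin at bottom-left) at a given zoom level. The mapping is its own
// inverse, so the same function converts in either direction.
function flipY(y, level) {
  return (1 << level) - 1 - y;
}

flipY(0, 2); // 3 - the top row becomes the bottom row at zoom 2
```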

At the moment I set up a separate _extent member var as I wasn't sure
whether it was safe to modify the one in _tilingScheme. I've attached
the code and an example for your consideration though I thought it
might make more sense to combine the code of TMS and OSM. For this
reason I have only done the minimum effort required to get it working.

Cheers,
Chris

maptiler_support.tar.gz (95.9 KB)

Also, is it possible to determine an imagery layer id within e.g.
sampleAndBlend() of CentralBodyFS.glsl? e.g. I have a Bing map layer,
and a custom TMS layer overlayed and I want to apply a fragment shader
operation just to the TMS layer. I have a workaround at the moment
based on texture ids but it has some rendering artefacts.

Thanks,
Chris

Hi Chris,

Kevin, I managed to get a MapTiler based image layer working using the latest imagery_layers branch and setting up a TileMapServiceImageryProvider.js. The only differences from OpenStreetMapImageryProvider.js were…

  • flipping the y tile index
  • limiting the extent

At the moment I set up a separate _extent member var as I wasn’t sure whether it was safe to modify the one in _tilingScheme. I’ve attached the code and an example for your consideration though I thought it might make more sense to combine the code of TMS and OSM. For this reason I have only done the minimum effort required to get it working.

Very cool, thanks for sharing it. Would you be willing to sign our Contributor License Agreement (CLA)? It essentially says that you own your contributions and that we’re allowed to use them. You can find the one for individuals here (http://www.agi.com/products/licensing-and-evaluation-options/individual-cla-agi-v1.0.txt) and for corporations here (http://www.agi.com/products/licensing-and-evaluation-options/corporate-cla-agi-v1.0.txt). If you’re willing and able to sign it, I’ll incorporate your new class into the imagery_layers branch.

Having a separate _extent probably makes sense. It’s actually valid to modify both the extent of the tiling scheme and the extent of the imagery provider in different circumstances. If your extent defines the bounds of the root tile(s), you should modify the tiling scheme’s extent. If, instead, the root tile(s) cover the entire world but only tiles within certain bounds actually exist, you should add an extent to the imagery provider. I think the TMS tiling scheme is always global, correct? If so, you did the right thing.

Regarding the overlap between TMS and OSM, I think it may be worthwhile to keep separate classes. One reason is that it makes it more likely that folks will realize that OSM support exists and be able to get it working. Another reason is that I believe there are additional, more subtle differences between the two. For example, TMS has a tilemapresource.xml file that describes the valid extent, number of levels, etc. With OSM, as far as I can tell, that is all implicit. So even though the two are very similar in their simplest forms, they’ll diverge a bit as they evolve.

Kevin

Hi again Chris,

Hi Kevin,

The change to the fragment shader was straightforward to implement
with your advice and is now working well. I'm in the process of
getting a corporate CLA signed. Hopefully it won't take too long.

Thanks,
Chris