Hey Chris, more comments below…
Hi Chris,
I can answer a few of these. Kevin and Scott will probably provide more details.
Am I correct in understanding that imagery_layers will support, e.g., tiled PNG image overlays with transparency as generated by MapTiler [0]? Is there a sample in Sandcastle that demonstrates that?
I believe so. You would use SingleTileProvider. I don’t know if there is an example with overlays yet, but there is a Sandcastle example with just a single image. When running a local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html
In the imagery_layers and terrain branches it’s called SingleTileImageryProvider. But for tiles generated by MapTiler, you really want a TileMapServiceImageryProvider. We don’t have one yet, but it would be almost trivial to add. We do have a TMS TerrainProvider, though, and that’s what we use for the SRTM terrain hosted on cesium.agi.com. In fact, I used GDAL2Tiles to tile that terrain data. If you didn’t know, MapTiler is a GUI front-end to GDAL2Tiles.
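For the single-image case, usage would look roughly like this. This is only a sketch against the imagery_layers branch: the options-object constructor, the getImageryLayers() accessor, and addImageryProvider() are my best guesses at the branch’s API and may differ.

    // Hedged sketch, assuming the imagery_layers branch API (names unverified).
    var layers = centralBody.getImageryLayers();
    layers.addImageryProvider(new SingleTileImageryProvider({
        url : 'overlay.png',
        // Bounds of the image, in radians: west, south, east, north.
        extent : new Extent(-1.8, 0.4, -1.7, 0.5)
    }));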
I also noticed OpenStreetMapImageryProvider.js, which seemed to be forming a URL in the appropriate way, but I didn’t see an example of usage as an overlay.
There is a Sandcastle example for using the provider. When running your local web server, browse to http://localhost:8080/Apps/Sandcastle/index.html?src=Imagery.html
You know, I didn’t realize until I read this that OpenStreetMap is served via a scheme very similar to Tile Map Service (TMS); the main difference is that the Y coordinate runs in the opposite direction. So it’s pretty likely that you can use that provider with the tiles generated by MapTiler, possibly after accounting for that Y flip. If your imagery covers only a limited extent, rather than the entire world, set the ‘extent’ property on the associated ImageryLayer to the bounds of your imagery, in radians.
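Putting that together, something like this might work (again a sketch with unverified names against the imagery_layers branch, and mind the Y-axis caveat above if your tiles are in strict TMS layout):

    // Hedged sketch, assuming the imagery_layers branch API (names unverified).
    var provider = new OpenStreetMapImageryProvider({
        url : 'http://localhost:8000/mytiles' // root of the MapTiler output
    });
    var layer = centralBody.getImageryLayers().addImageryProvider(provider);
    // Restrict the layer to the imagery's bounds, in radians.
    layer.extent = new Extent(-1.8, 0.4, -1.7, 0.5);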
Am I correct in understanding that terrain is tessellated and displaced on the JavaScript side rather than in a vertex shader, as in WebGL Earth?
Yes. Among other issues, not all hardware supports reading from textures in the vertex shader. (Strictly speaking, WebGL guarantees the feature exists, but the guaranteed minimum number of vertex texture image units is zero, so an implementation can legitimately support none.)
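You can see what the hardware actually gives you at runtime with a standard WebGL query:

    // Ask how many textures this implementation lets a vertex shader read.
    // Zero is a legal answer in WebGL, making vertex texture fetch unusable.
    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
    console.log(gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS));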
Assuming you’re talking about reading a heightmap image in the vertex shader and using the read height to displace the vertex, Cozzi is right. The other problem is precision when zoomed in close. The vertex shader can easily compute a longitude, latitude, and height, but then it must do additional math to transform that to Cartesian X, Y, Z coordinates. The GPU’s single-precision floating-point numbers lack the precision necessary to do that for high-resolution terrain. In the future, we may use this technique for the low-detail terrain tiles and then switch to regular CPU-displaced (in a web worker) vertices for the high-detail tiles or when the hardware doesn’t support Vertex Texture Fetch. This should be a nice memory-saving feature.
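To put a rough number on the precision problem: a 32-bit float carries a 24-bit significand, so at Earth-radius magnitudes the spacing between representable values works out like this:

    // Back-of-the-envelope float32 precision at Earth scale.
    var earthRadius = 6378137.0;              // WGS84 semi-major axis, meters
    var step = earthRadius / Math.pow(2, 24); // 24-bit significand
    console.log(step);                        // ~0.38 meters

Steps of roughly 0.4 meters between representable positions are far too coarse for high-resolution terrain viewed up close.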
Your comment about WebGL Earth makes me think maybe we don’t know exactly what you’re referring to, though. I was under the impression that those guys stream terrain data as heights encoded in JSON. Do they then convert that JSON height data to an image so that they can do VTF in the vertex shader?
I figured it would still be possible to push an area down in altitude via the vertex shader. The intent is that it could then be interactive, without having to rebuild the terrain. I tried modifying, e.g., position3DWC in CentralBodyVS.glsl, but that didn’t appear to do what I expected.
Kevin will know better than me, but I suspect you’ll be able to displace the vertices along the opposite of their geodetic surface normals.
Yes, that’s true. But the problem you ran into is probably that the 3D view doesn’t actually use position3DWC, because it would jitter when zoomed in close. Instead, it uses position3D, which is the vertex position relative to the center of the terrain tile, and transforms that to the screen using u_modifiedModelViewProjection. Take a look at “getPosition3DMode” in CentralBodyVS.glsl.
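As a very rough illustration of the displace-along-the-normal idea (hypothetical sketch only, not working code: u_depression and a_geodeticNormal are invented names, and the real getPosition3DMode in CentralBodyVS.glsl has a different body and possibly a different signature):

    // Hypothetical sketch. u_depression and a_geodeticNormal do not exist in
    // Cesium; they stand in for "how far to push down" and a per-vertex
    // geodetic surface normal, which the terrain vertex data may not provide.
    uniform mat4 u_modifiedModelViewProjection;
    uniform float u_depression;      // meters to push the area down
    attribute vec3 a_geodeticNormal; // unit geodetic surface normal

    vec4 getPosition3DMode(vec3 position3D)
    {
        // position3D is relative to the tile center; displace it opposite the
        // surface normal, then project the way the real shader does.
        vec3 displaced = position3D - u_depression * a_geodeticNormal;
        return u_modifiedModelViewProjection * vec4(displaced, 1.0);
    }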
Is there an approximate timeline for when terrain and imagery_layers will be merged into master?
imagery_layers will come in first, followed by terrain. Since I’m not doing the work, I won’t put words in anyone’s mouth.
I can’t make any promises, but it’s my goal to get imagery_layers into master ASAP, hopefully within a few weeks. Terrain may be a while longer.
I noticed in the terrain sample there’s often a step edge around where land meets sea. Is this just due to the altitude data used?
Kevin can confirm but I suspect this is the nature of the SRTM data used.
Yes, it’s a problem with the terrain data on cesium.agi.com, which is currently just a proof of concept. There are two problems I’m aware of. One is that we’re treating voids incorrectly. The other is that the data is supposed to be relative to mean sea level, but we’re treating it as if it’s relative to the WGS84 ellipsoid. These problems will be fixed before we go live with terrain support, of course.
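For what it’s worth, the fix for the second problem amounts to adding the geoid undulation at each sample, since SRTM heights are relative to the EGM96 geoid. In sketch form, with getGeoidOffset standing in for a lookup into a geoid grid (hypothetical; Cesium doesn’t provide one today):

    // Hypothetical sketch: convert an SRTM height (relative to the EGM96
    // geoid, i.e. mean sea level) to a height above the WGS84 ellipsoid.
    function getGeoidOffset(longitude, latitude) {
        // Placeholder: a real implementation would sample an EGM96 grid
        // for the geoid's height above the ellipsoid at this location.
        return 0.0;
    }
    var longitude = -1.31, latitude = 0.69;  // radians
    var srtmHeight = 120.0;                  // meters above mean sea level
    var heightAboveEllipsoid = srtmHeight + getGeoidOffset(longitude, latitude);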
Assuming Earth geometry can be pushed out of the way easily, what other problems do you foresee in rendering geometry several kilometers below the Earth’s surface?
The terrain code may need some tweaks. For example, to prevent cracks between tiles, we drop skirts down to the WGS84 ellipsoid. If the tiles are below the ellipsoid, we need to be careful not to “drop” skirts up to the ellipsoid, and we may want adjacent tiles that are not subterranean to drop their skirts all the way down to the subterranean tiles, not just to the ellipsoid. These are things we need to handle eventually (if we don’t already) to support undersea terrain. I can’t say they’re on the short-term radar, though; contributions are always welcome.
Currently we always drop skirts down rather than connecting them to the ellipsoid, and the skirt length is a function of the level’s estimated geometric error. So I don’t expect cracking problems with terrain below the ellipsoid.
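To make that concrete (illustrative only; the scale factor and function name below are not from the Cesium source):

    // Skirts always extend downward, by a length proportional to the level's
    // estimated geometric error, so neighboring tiles still overlap vertically
    // even when the terrain itself is below the ellipsoid.
    function computeSkirtLength(levelGeometricError) {
        var skirtScale = 5.0; // hypothetical scale factor
        return skirtScale * levelGeometricError; // meters, measured downward
    }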
Kevin