Integrate CesiumJS with other WebGL renderers into the same canvas

Hello all, I am trying to integrate CesiumJS with external WebGL rendering engines like Babylon.js or Three.js. I have read the post Integrating Cesium with Three.js, but it draws Cesium and Three.js into different canvases. What I want is for both CesiumJS and the external WebGL renderers to render into the same canvas so that I can share the same depth buffer for occlusion.

Currently, I have achieved something like this:

// ...

initCesiumScene();
initBabylonScene();

function renderLoop() {

    cesiumViewer.render();

    // write cesium depth texture into depth buffer
    // ...

    babylonEngine.wipeCaches(true); // reset WebGL states needed by Babylon.js
    babylonScene.render();

}

// ...

For a simple Babylon standard material, everything is fine, and occlusion between terrain and Babylon meshes looks correct. However, for some complicated materials, the Cesium rendering result gets completely messed up.

I believe the complicated Babylon materials are changing WebGL state that Cesium depends on. Both Three.js and Babylon.js have reset functions (Three.js resetState() and Babylon.js wipeCaches()) that reset the necessary WebGL states and can be used when integrating with external WebGL renderers.

I wonder if CesiumJS offers something similar? Is there a way to reset the WebGL states needed by CesiumJS itself?

Many thanks!

After going through the materials used by this scene, I found that the front-face winding was being modified. Resetting it fixes the CesiumJS rendering result:

// ...

initCesiumScene();
initBabylonScene();

function renderLoop() {

    gl.frontFace(gl.CCW); // reset front face to counter-clockwise
    cesiumViewer.render();

    // write cesium depth texture into depth buffer
    // ...

    babylonEngine.wipeCaches(true); // reset WebGL states needed by Babylon.js
    babylonScene.render();

}

// ...

But it seems a reset function is still needed, since the materials used by external renderers are not always under your control. :rofl: :rofl: :rofl:
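In the meantime, the workaround is resetting by hand whatever the external engine touched. A minimal sketch (the state list below is my guess, not exhaustive; and since Cesium caches GL state internally, a blanket reset can itself disagree with that cache, so resetting only what actually changed, frontFace in my scene, is safest):

// Hand-rolled reset, called right before cesiumViewer.render().
// The list is a guess at state Babylon materials commonly change;
// CesiumJS has no public resetState() equivalent as far as I can tell.
function resetGLStateForCesium(gl) {
    gl.frontFace(gl.CCW);                     // Cesium assumes CCW front faces
    gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the default framebuffer
    gl.disable(gl.SCISSOR_TEST);
    gl.colorMask(true, true, true, true);
    gl.depthMask(true);
}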

I have been looking into this as well. It would be very nice to rely on Cesium for the data sources it handles very well, and have access to that mesh/material data within an external display framework like Babylon.js.

I am currently experimenting with the cesiumjs package and babylonjs to create a dynamically updated tile set based on camera location. My general idea is to call createGooglePhotorealistic3DTileset(), request the tiles' content, add event listeners for their completion and such, and once ready, load those glTFs into Babylon at the positions Cesium reports.

I'm currently stuck on two things: the tiles don't download unless I make a scene, and getting at the actual glTF and texture data so I can send it to Babylon. Also, I imagine my next challenge will be telling the tileset where the camera is. Getting the tile data is one thing; making it beautifully and dynamically construct a 3D landscape based on your observation point is another.
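For reference, the skeleton I'm experimenting with looks roughly like this; the event wiring is the documented Cesium3DTileset API, while digging the raw glTF out of tile.content is the undocumented part I'm still fighting with:

import { Viewer, createGooglePhotorealistic3DTileset } from "cesium";

// A Viewer (or at least a scene that renders frames) seems to be required,
// since tile requests are driven by Cesium's per-frame tile selection.
const viewer = new Viewer("cesiumContainer");
const tileset = await createGooglePhotorealistic3DTileset();
viewer.scene.primitives.add(tileset);

tileset.tileLoad.addEventListener((tile) => {
  // tile.content wraps the loaded glTF; the raw buffers and textures sit
  // behind private fields, which is what I'm still digging for.
  console.log("tile loaded", tile);
});

tileset.tileUnload.addEventListener((tile) => {
  // Mirror this on the Babylon side by disposing the matching mesh.
});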

I notice projects like Cesium for Unity have this "just working", rendering into the same output as Unity, which I'm sure does not sacrifice its canvas to Cesium. So when I get frustrated enough I'll swap over to that and see if any understanding comes to me.

Keep me updated with your discoveries.


I have a working sphere here, with the meshes coming in. It's not nearly done… and I'm not sure whether it's actually rendering in the off-DOM element or not… but I do have the meshes, with textures, in coherent positions inside Babylon.

I need to adapt it to only show the meshes that correspond to the zoom level I'm at, and to make the cameras sync. So… hmm, going to mess with that next.

I've managed to roughly translate the position/rotation from Cesium into Babylon. However, there is still a lot wrong. Getting the tiles to know when not to be shown is an issue, and I expect the shader functionality that makes the models look so nice will be another.

// Move the Cesium camera. ECEF is right-handed, Z-up; Babylon is
// left-handed, Y-up, hence the (-x, z, -y) swizzle used throughout.
this.cesiumScene.camera.setView({
  destination: Cartesian3.fromDegrees(
    this.longitude,
    this.latitude,
    this.altitude,
  ),
});

// Get Cesium camera position in ECEF
var cesiumPosition = this.cesiumCamera.positionWC;
var babylonPosition = new Vector3(
  -cesiumPosition.x,
  cesiumPosition.z,
  -cesiumPosition.y,
);

// Update Babylon.js camera position
camera.position = babylonPosition;

// Get Cesium camera orientation
var direction = this.cesiumCamera.directionWC;
var up = this.cesiumCamera.upWC;

// Cesium direction to Babylon.js rotation (same swizzle as the position)
var forward = new Vector3(-direction.x, direction.z, -direction.y);
var upVector = new Vector3(-up.x, up.z, -up.y);

// Set the Babylon.js camera orientation
camera.setTarget(camera.position.add(forward));
camera.upVector = upVector;

// Get and set FOV (Cesium's fovy and Babylon's fov are both vertical, in radians)
camera.fov = this.cesiumCamera.frustum.fovy;

// Get and set aspect ratio (note: Babylon normally derives aspect from
// the viewport, so this assignment may be a no-op on a stock camera)
var aspectRatio = this.cesiumCamera.frustum.aspectRatio;
camera.aspectRatio = aspectRatio;

// Get and set near and far planes
camera.minZ = this.cesiumCamera.frustum.near;
camera.maxZ = this.cesiumCamera.frustum.far;
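For what it's worth, I re-run that block every frame before Babylon draws (syncCameraFromCesium is just my name for a wrapper around the code above):

// Keep the Babylon camera locked to the Cesium camera, once per frame.
babylonScene.onBeforeRenderObservable.add(() => {
  syncCameraFromCesium(); // hypothetical wrapper around the sync code above
});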

I'm going to go see how the Unity package does this now… since the shaders, positioning, and other details seem to be handled seamlessly there.

Hi @techtruth, I am using CesiumJS with Babylon in exactly this way; however, I am not trying to hook into the Cesium pipeline to retrieve geometry or material data.

What I am doing is drawing the Cesium scene first, then retrieving the rendering result as a color texture and a depth texture.

To obtain the depth texture, I use:

viewer.scene.context.uniformState.globeDepthTexture

And for the color texture, I add a PostProcessStage and retrieve it from the execute callback (note that globeDepthTexture and overriding execute are undocumented Cesium internals, so they may change between releases):

const cesiumStage = cesiumViewer.scene.postProcessStages.add(
    new Cesium.PostProcessStage({
        name: "Cesium",
        fragmentShader: postProcessFS
    })
);
cesiumStage.execute = (context, colorTexture, depthTexture, idTexture) => {
    // get colorTexture
    // ...
};

I hope this helps somehow.

@xiasun Thanks for showing me how to get the depth and color textures out of Cesium. I can probably use the depth texture for collision detection of some sort, after some transformations.


I've managed to figure out how to get Cesium and Babylon rendering to the same canvas.

The key part in Babylon is setting scene.autoClear to false.

The render loop needs to look something like this:

babylonScene.autoClear = false; // set once: keep Cesium's output instead of clearing

// per frame:
babylonScene.getEngine().wipeCaches(true); // otherwise WebGL state errors
cesiumScene.render();
babylonScene.render();

This draws the Cesium details first, then draws the Babylon layer on top. Occlusion will not work: if a Cesium object is blocking a Babylon object, the Babylon object will still end up on top in the final result… but that shouldn't be an issue for my needs here.

Hello, I am currently working on integrating Cesium and Babylon.js to render on the same canvas. My approach is to let Cesium perform its rendering first. Then, during the final compositing stage (OIT stage), I read the depth from Cesium’s depth texture, convert it to linear depth, and write it into the canvas depth buffer. After that, I prevent Babylon.js from clearing the color and depth buffers, allowing it to continue rendering on top of Cesium’s buffers to achieve correct occlusion effects.
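Concretely, the depth-write step is a full-screen pass along these lines (a sketch: the RGBA unpacking is my assumption about how the globe depth texture is encoded, uCesiumDepth is my uniform name, and the full-screen triangle setup is elided):

// GLSL ES 3.00 fragment shader for a full-screen pass that copies
// Cesium's depth into the canvas (default framebuffer) depth buffer.
const depthWriteFS = `#version 300 es
precision highp float;
uniform sampler2D uCesiumDepth; // Cesium's globe depth texture
in vec2 vUV;
out vec4 fragColor;

// Assumption: depth is packed into RGBA8 the way czm_packDepth does it.
float unpackDepth(vec4 packedDepth) {
    return dot(packedDepth, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

void main() {
    fragColor = vec4(0.0); // color writes are masked off below anyway
    // This value must match the window-space depth convention that
    // Babylon's projection produces, or occlusion will drift with view angle.
    gl_FragDepth = unpackDepth(texture(uCesiumDepth, vUV));
}`;

// State for the pass: depth-only, always pass, no color output.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.colorMask(false, false, false, false);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.ALWAYS);
gl.depthMask(true);
// ... draw a full-screen triangle with depthWriteFS ...
gl.colorMask(true, true, true, true);
gl.depthFunc(gl.LESS);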

However, I found that the rendering effect is only correct when viewed from a horizontal perspective. When looking up or down (top-down or bottom-up views), the occlusion effect is incorrect.

I would like to know if you have successfully solved the occlusion issue between Cesium and Babylon.js. If so, could you guide me on how to handle it?

Sorry, I have no code in hand to show. My guess is that the depth distribution is not properly converted. You may render the Cesium depth on the left side and the Babylon scene depth on the right side of the screen to check whether they match.
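Untested, but the side-by-side check could be a single full-screen pass like this (uCesiumDepth and uBabylonDepth are placeholder uniforms; on the Babylon side, scene.enableDepthRenderer().getDepthMap() is one way to get a depth texture):

// Debug fragment shader: Cesium depth on the left half of the screen,
// Babylon depth on the right, so mismatched depth distributions show up.
const debugFS = `#version 300 es
precision highp float;
uniform sampler2D uCesiumDepth;  // placeholder: Cesium depth, already unpacked
uniform sampler2D uBabylonDepth; // placeholder: e.g. from Babylon's DepthRenderer
in vec2 vUV;
out vec4 fragColor;

void main() {
    float d = vUV.x < 0.5
        ? texture(uCesiumDepth, vUV).r
        : texture(uBabylonDepth, vUV).r;
    fragColor = vec4(vec3(d), 1.0); // grayscale: a seam at x = 0.5 means a mismatch
}`;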