Looking for DrawCommand rendering in 2D/Columbus view

In the following DrawCommand code, a red triangle is rendered correctly at the expected position, Washington, DC.
But when the view is switched from 3D to 2D or Columbus view, the red triangle is rendered at the wrong position, in the middle of the Atlantic Ocean, and its shape is distorted.

View in 3D

View in 2D

View in Columbus

I guess the cause is the transformation matrix used in the vertex shader (GLSL), czm_modelViewProjection.
It probably needs to be replaced with something else for 2D/Columbus view, but I have no idea what.

I have looked for sample code that uses DrawCommand and works in both 3D and 2D, but so far I can't find anything.

Your kind advice would be highly appreciated.

var viewer = new Cesium.Viewer('cesiumContainer');
var scene = viewer.scene;

const basePosition = [ -77., 39.];  // Washington, DC
let sizing = 10.
let position0 = Cesium.Cartesian3.fromDegrees(basePosition[0]-sizing, basePosition[1]-(sizing/2.), 0);
let position1 = Cesium.Cartesian3.fromDegrees(basePosition[0]   , basePosition[1]+(sizing/2.), 0);
let position2 = Cesium.Cartesian3.fromDegrees(basePosition[0]+sizing, basePosition[1]-(sizing/2.), 0);
let positions = [
    position0.x, position0.y, position0.z,
    position1.x, position1.y, position1.z,
    position2.x, position2.y, position2.z
];

scene.primitives.add(new MyPrimitive());

function MyPrimitive() {
    this.drawCommand = undefined;
}

MyPrimitive.prototype.update = function(frameState) {
    if (!Cesium.defined(this.drawCommand)) {
    var context = frameState.context;
    var vertexBuffer = Cesium.Buffer.createVertexBuffer({
        context: context,
        typedArray: new Float32Array(positions),
        usage: Cesium.BufferUsage.STATIC_DRAW
    });

    var indexBuffer = Cesium.Buffer.createIndexBuffer({
        context: context,
        typedArray: new Uint16Array([0,1,1,2,2,0]),
        usage: Cesium.BufferUsage.STATIC_DRAW,
        indexDatatype: Cesium.IndexDatatype.UNSIGNED_SHORT
    });

    var attributes = [{
        index: 0,
        enabled: true,
        vertexBuffer: vertexBuffer,
        componentsPerAttribute: 3,
        componentDatatype : Cesium.ComponentDatatype.FLOAT,
        normalize: false,
        offsetInBytes: 0,
        strideInBytes: 0
    }];
    var vs =
        "attribute vec3 position; \n" +
        "void main() { \n" +
        "    gl_Position = czm_modelViewProjection * vec4(position, 1.0); \n" +
        "} \n";
    var fs =
        "void main() { \n" +
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n" +
        "} \n";
    var shaderProgram = Cesium.ShaderProgram.fromCache({
        context: context,
        vertexShaderSource: vs,
        fragmentShaderSource: fs
    });

    var vertexArray = new Cesium.VertexArray({
        context : context,
        attributes : attributes,
        indexBuffer : indexBuffer
    });

    var renderState = Cesium.RenderState.fromCache({
        cull: {
            enabled: true,
            face: Cesium.CullFace.FRONT
        },
        depthTest: {
            enabled: false
        },
        depthMask: true,
        blending: undefined
    });

    var drawCommand = new Cesium.DrawCommand({
        //owner: this,
        vertexArray : vertexArray,
        shaderProgram : shaderProgram,
        modelMatrix : Cesium.Matrix4.IDENTITY,
        renderState :  renderState,
        cull : false,
        primitiveType : Cesium.PrimitiveType.LINES,
        pass : Cesium.Pass.OPAQUE
    });

    this.drawCommand = drawCommand;
    }

    frameState.commandList.push(this.drawCommand);
};

Hi there,

I would recommend checking out PolylineVS.glsl to see how this is handled in CesiumJS internally. I think you’re looking for czm_viewportTransformation and czm_viewportOrthographic.

Thank you for your prompt reply and advice.

I have checked out PolylineVS.glsl.

In that code, czm_viewportOrthographic is indeed used to compute gl_Position from window coordinates,
though czm_viewportTransformation is not used at all.

If czm_viewportTransformation, which transforms normalized device coordinates to window coordinates, were used instead, the code might look like this:

// positionNDC... vec4  normalized device coordinates (ndc) 
gl_Position = czm_viewportOrthographic * czm_viewportTransformation * positionNDC;

Now my question is: what are the steps to transform from model coordinates to normalized device coordinates, and where should the view mode be handled?

Looking at PolylineVS.glsl, it takes completely different steps:

czm_translateRelativeToEye (view mode handled here) → czm_modelViewRelativeToEye → czm_eyeToWindowCoordinates
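If I understand that chain correctly, a minimal vertex shader along those lines might look like this. This is only my sketch, not code from PolylineVS.glsl; the attribute names positionHigh/positionLow and using czm_modelViewProjectionRelativeToEye (rather than the full eye-to-window steps) are my assumptions:

```javascript
// Sketch only: a relative-to-eye vertex shader, built as a JS string
// like the vs in my code above. positionHigh/positionLow would hold the
// split (emulated double-precision) position for the current scene mode.
var rteVS =
    "attribute vec3 positionHigh; \n" +
    "attribute vec3 positionLow; \n" +
    "void main() { \n" +
    // czm_translateRelativeToEye recombines the high/low halves relative to the eye
    "    vec4 p = czm_translateRelativeToEye(positionHigh, positionLow); \n" +
    "    gl_Position = czm_modelViewProjectionRelativeToEye * p; \n" +
    "} \n";
```

The application would then feed the 2D or 3D high/low attributes depending on the scene mode.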

A model position is supplied as two EncodedCartesian3 attributes: position2D (high/low) and position3D (high/low).

Is this the approach we should follow?
If so, how do we create these two EncodedCartesian3 attributes from a model position?

Any help would be appreciated.

If so, how do we create these two EncodedCartesian3 attributes from a model position?

In order to emulate double-precision values for positions, we split each position into two attributes: positionHigh and positionLow. Internally, we use the private class EncodedCartesian3 to compute these values.
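The idea behind the split can be sketched in plain JavaScript like this (a simplified sketch, not the actual EncodedCartesian3 source; encodeDouble and encodeCartesian are illustrative names):

```javascript
// Split one double into a "high" part that is exactly representable as a
// float (a multiple of 65536) and a small "low" remainder.
function encodeDouble(value) {
  var doubleHigh;
  if (value >= 0.0) {
    doubleHigh = Math.floor(value / 65536.0) * 65536.0;
    return { high: doubleHigh, low: value - doubleHigh };
  }
  doubleHigh = Math.floor(-value / 65536.0) * 65536.0;
  return { high: -doubleHigh, low: value + doubleHigh };
}

// Split each component of an {x, y, z} position into the two vec3
// attributes (positionHigh / positionLow) the shader expects.
function encodeCartesian(position) {
  var x = encodeDouble(position.x);
  var y = encodeDouble(position.y);
  var z = encodeDouble(position.z);
  return {
    high: { x: x.high, y: y.high, z: z.high },
    low:  { x: x.low,  y: y.low,  z: z.low }
  };
}
```

In the shader, czm_translateRelativeToEye recombines the two halves relative to the eye position, which is what preserves precision near the camera.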

Thank you for your reply.

My question is not about EncodedCartesian3 itself but about position2D and position3D.

What is the difference between position2D and position3D?

I believe these two are originally derived from the same single model position. How are these two computed?
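My current guess, sketched in plain JavaScript, is that position3D is the usual ECEF Cartesian, while position2D comes from running the same geodetic coordinate through the scene's map projection. This is my assumption based on what GeographicProjection appears to do, not confirmed from the Cesium source; SEMIMAJOR_AXIS and projectGeographic are illustrative names:

```javascript
// WGS84 semimajor axis in meters (assumed constant for this sketch).
var SEMIMAJOR_AXIS = 6378137.0;

// GeographicProjection-style projection: angles in radians in,
// planar meters out. position2D would be built from a result like this
// (possibly with the components reordered for the 2D camera).
function projectGeographic(longitudeRad, latitudeRad, height) {
  return {
    x: longitudeRad * SEMIMAJOR_AXIS,
    y: latitudeRad * SEMIMAJOR_AXIS,
    z: height
  };
}

// Example: the Washington, DC base position from my code above.
var toRad = Math.PI / 180.0;
var projected = projectGeographic(-77.0 * toRad, 39.0 * toRad, 0.0);
```

If that is right, both attributes start from the same cartographic position, and the high/low split is then applied to each result separately.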