3D model performance tool


We just wrote a small command-line tool, gltf-statistics, for outputting key performance statistics about a glTF:


Later, we will integrate these stats into our converter so they are automatically displayed when we drag and drop a model for conversion.


Also, if anyone knows Node.js and npm, and wants to tell me if I organized the package correctly, I am receptive to feedback.



Awesome, so this yields some of the same stats that ctrl-shift-A does in Google Earth? (which toggles a readout at the bottom middle.)

I was reading this


"Another perspective that motivates glTF is that 3D is the last media type without a standard codec. Audio has mp3. Video has H.264.

Images have png and jpg. What does 3D content have? The variety of use cases and complexity of 3D asset types have left 3D without a standard codec."

This makes a lot of sense. 3D models on web pages could become as ubiquitous as images and videos if only there were a common format.

I haven’t tried gltf-statistics just yet, but from the readme.md it appears to be for performance checking of individual models. I collected some render-speed data from GE, though it covers the Earth rather than individual models. Cesium is at least on par with Google Earth for geometrically dense mountainous areas, though I can’t render more than 60fps on this computer. This is with the GE standalone client, which is native compiled code; I’m not sure how to enable stat checking in the GE WebGL version.

Geometrically dense mountainous area


621 draw() calls

252k triangles, 867.3k vertices

380 textures, 320.44 MB VRAM

3D buildings turned on in a major city


1761 draw() calls

392.8k triangles, 2858.6k vertices

1212 textures, 539.47 MB VRAM

3D buildings turned on at the southern tip of Manhattan looking north pitched slightly down


4323 draw() calls

1633.7k triangles, 15879.4k vertices

3387 textures, 1333.20 MB VRAM

(Disneyland Paris is probably as bad if not worse with 3d buildings enabled.)

While 3D buildings are awesome, they can be quite stressful for a computer to render. Then again, this is on a computer from 2010 that was only mediocre even then.
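For context on how texture count drives the VRAM figures above, here is my own back-of-envelope arithmetic (not output from any tool): an uncompressed 1024×1024 RGBA texture occupies 4 MiB, and a full mip chain adds roughly a third more, since the mip levels form a geometric series summing to 4/3 of the base size.

```javascript
// Rough VRAM estimate for one uncompressed RGBA texture.
// The 4/3 factor is the geometric-series sum of all mip levels.
function textureBytes(width, height, bytesPerPixel = 4, mipmapped = true) {
  const base = width * height * bytesPerPixel;
  return mipmapped ? Math.round(base * 4 / 3) : base;
}
// A 1024x1024 RGBA texture: 4 MiB base, about 5.33 MiB with mipmaps.
```

Real engines complicate this with compressed formats and shared atlases, so treat it only as a sanity check on reported numbers.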

Something these stats don’t show is how long it takes to transmit the models from the server to the client. I didn’t time it, but I’d estimate 10 to 20 seconds. I’m not sure whether that’s a hard-drive bottleneck, a network bottleneck, or maybe even a CPU processing bottleneck.

Having an efficient way to transmit models and get them onto the GPU will become very important as the number of models grows, and paramount once Cesium supports 3D buildings.

BTW I have no idea why this forum turned readme into a link, I simply typed readme dot md.


so this yields some of the same stats that ctrl-shift-A does in Google Earth?

Exactly, but for an individual model as you said.

though I can’t render more than 60fps on this computer

Some useful tips are here: http://cesiumjs.org/2014/12/01/WebGL-Profiling-Tips/


Thanks for the link, Patrick. Disabling vsync is a good way to test computer performance without having to draw the complete frame on the display. Just found out that Disneyland Paris FPS is worse than Manhattan: 10fps, 10078 draw() calls, 2075.3k triangles, 23638.1k vertices, 7429 textures, 392.33 MB VRAM (the VRAM figure seems off).
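For anyone wanting to reproduce that kind of readout, a frame-rate counter can be hung off requestAnimationFrame; with vsync disabled the reported rate is free to exceed the display's refresh. This is a generic sketch of mine, not Cesium or GE internals:

```javascript
// Minimal FPS counter: call the returned function once per frame with the
// frame timestamp in milliseconds; it updates the rate once per second.
function makeFpsCounter() {
  let frames = 0;
  let last = 0;
  let fps = 0;
  return function tick(nowMs) {
    frames++;
    if (nowMs - last >= 1000) {
      fps = frames * 1000 / (nowMs - last);
      frames = 0;
      last = nowMs;
    }
    return fps;
  };
}

// In a page:
//   const tick = makeFpsCounter();
//   (function loop(t) { showFps(tick(t)); requestAnimationFrame(loop); })(0);
```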

Hi Patrick,

Thank you! This is a great tool that will certainly help us in our search for better performance of complex models (converted from GE .kmz files).

For node.js newbies like myself you may want to add that you also need to install minimist (npm install minimist).

Thanks, Willem

Thanks Willem. minimist is listed in the package.json as a dependency, so it should be installed automatically when you run npm install for gltf-statistics.
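For node.js newcomers following along: the mechanism here is the `dependencies` section of package.json. A minimal illustrative fragment (the version range is hypothetical, check the actual file):

```json
{
  "name": "gltf-statistics",
  "dependencies": {
    "minimist": "~1.1.0"
  }
}
```

Running `npm install` with no arguments in the package directory reads this file and fetches the listed dependencies into node_modules.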