Is a ContextCapture-generated 3D Tiles model different from others?

Hi everyone

I have a ContextCapture-generated 3D Tiles tileset that looks great and smooth in Cesium.

Then I used ContextCapture to generate OBJ models of the same scene and converted the OBJ files to b3dm myself using obj2gltf and gltf-pipeline.

However, my self-generated tileset is not as smooth as the one ContextCapture produced.

You can see from the pictures below: the upper one is the tileset I generated, and it looks faceted, with many visible triangles.

The ContextCapture-generated tileset of the same scene is below, and it looks smooth.

I wonder why this happens.

Any help appreciated.


When using obj2gltf or gltf-pipeline, are you passing in any flags that cause normals to be generated?

I believe the models produced by ContextCapture do not have normals by default, which is actually good for photogrammetry models where lighting is already baked into the textures. So if you are passing in any generate-normals flags, try without them.
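One way to check whether a conversion introduced normals is to look for a NORMAL attribute on the mesh primitives of the resulting glTF. A minimal sketch (the `has_normals` helper is hypothetical, not part of obj2gltf or gltf-pipeline; it just inspects the parsed glTF JSON):

```python
def has_normals(gltf):
    """Return True if any mesh primitive in a parsed glTF asset
    carries a NORMAL attribute, i.e. the converter generated or
    preserved per-vertex normals."""
    return any(
        "NORMAL" in prim.get("attributes", {})
        for mesh in gltf.get("meshes", [])
        for prim in mesh.get("primitives", [])
    )
```

For example, `has_normals(json.load(open("model.gltf")))` would tell you whether the tile will be lit (shaded using normals) rather than rendered with only its baked photogrammetry textures.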

Thanks Sean

I think I found the problem.

Though I didn’t pass any normal flags, my OBJ was processed through Blender.

Blender may add normals to the model when it is exported.

I will try to remove the normals or smooth the model.
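If Blender did write normals into the exported OBJ, one option is to strip them from the OBJ text before running obj2gltf. A rough sketch (the `strip_obj_normals` helper is my own, not a tool from the thread; it drops `vn` lines and the normal index from each face vertex):

```python
def strip_obj_normals(text):
    """Remove vertex normals from Wavefront OBJ text:
    drop "vn" lines and rewrite face vertices
    ("v/vt/vn" -> "v/vt", "v//vn" -> "v")."""
    out = []
    for line in text.splitlines():
        if line.startswith("vn "):
            continue  # drop the normal vector itself
        if line.startswith("f "):
            verts = []
            for p in line.split()[1:]:
                idx = p.split("/")
                # keep vertex (and texcoord if present), drop normal index
                if len(idx) >= 2 and idx[1]:
                    verts.append(f"{idx[0]}/{idx[1]}")
                else:
                    verts.append(idx[0])
            line = "f " + " ".join(verts)
        out.append(line)
    return "\n".join(out)
```

Running the exported OBJ through this before obj2gltf should leave nothing for the converter to carry into the glTF as normals.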

On Tuesday, July 11, 2017 at 7:17:56 AM UTC+8, Sean Lilley wrote:

Hi Chris Wang

I want to ask another question about how to convert glTF to b3dm…

I see that you use gltf-pipeline to convert to b3dm, but on GitHub it looks like gltf-pipeline cannot convert glTF to b3dm, as you can see from the pictures below.

So can you tell me the entire process in detail?

Thank you very much.


On Saturday, July 8, 2017 at 7:05:30 PM UTC+8, Chris Wang wrote:

You can use gltf-pipeline to convert the glTF to glb, and then use 3d-tiles-tools to convert the glb to b3dm.

This is just a straight conversion and does not produce a tileset.json or subdivide the data in any way.
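For reference, that straight glb-to-b3dm step is just a thin wrapper defined by the 3D Tiles 1.0 spec: a 28-byte header, a minimal feature table, and the glb payload. A sketch of what the tool does under the hood (my own code, written to the final 1.0 spec; at the time of this thread the b3dm header was still in flux):

```python
import json
import struct

def glb_to_b3dm(glb_bytes: bytes) -> bytes:
    """Wrap a binary glTF (.glb) in a b3dm container per 3D Tiles 1.0."""
    feature_table = json.dumps({"BATCH_LENGTH": 0}).encode("utf-8")
    # Pad the feature table JSON with spaces so the glb payload
    # starts on an 8-byte boundary, as the spec requires.
    header_len = 28
    padding = (8 - (header_len + len(feature_table)) % 8) % 8
    feature_table += b" " * padding

    byte_length = header_len + len(feature_table) + len(glb_bytes)
    header = struct.pack(
        "<4s6I",
        b"b3dm",             # magic
        1,                   # version
        byte_length,         # total tile length in bytes
        len(feature_table),  # feature table JSON length
        0,                   # feature table binary length
        0,                   # batch table JSON length
        0,                   # batch table binary length
    )
    return header + feature_table + glb_bytes
```

As the post says, this only re-containers a single model; it does not produce a tileset.json or subdivide anything.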

It works! Thanks Sean!

Actually, what I want to do is subdivide one big 3D model (a 500 MB 3ds Max file) into 3D Tiles, because I have seen that Cesium 3D Tiles can stream 3D models with levels of detail.

The reason I want to do this conversion: when I view my high-resolution photogrammetry models (140 GB in size) from high altitude, memory usage rises continuously until the application crashes. We realized this is because when we produced the data with ContextCapture, the first LOD level was not set coarse enough (currently the first level is 16), so we would have to reprocess the data with more levels. Regenerating 3D Tiles with Bentley ContextCapture requires very powerful hardware and plenty of time, yet it seems to be the only way to solve the problem.

Besides photogrammetry models, we also want to load other 3D models such as BIM, but loading massive data all at once will raise the same problem. So I want to subdivide the 3D model data into 3D Tiles myself. Can you give me some suggestions?

Thanks a lot!

On Tuesday, September 26, 2017 at 6:56:23 AM UTC+8, Sean Lilley wrote:

Being able to subdivide massive models is one area the Cesium team is actively working on. The process can become very involved, especially when dealing with decimation and choosing good tile splits. The simplest approach would be to subdivide your data into a uniform grid, but that has many downsides and won’t run very well.
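To make the grid idea concrete, here is a sketch of generating a tileset.json for pre-cut grid tiles. It is my own illustration, not a Cesium tool: the `{i}_{j}.b3dm` file naming and the local-coordinate layout are assumptions, and every cell is a leaf with no intermediate LOD levels, which is exactly the downside mentioned above.

```python
def grid_tileset(x_count, y_count, cell_size, height, root_error=256.0):
    """Build a minimal tileset.json dict for a uniform grid of b3dm
    tiles laid out on an x/y grid in the tileset's local coordinates."""
    def box(cx, cy, hx, hy):
        # 3D Tiles "box" bounding volume: center, then x/y/z half-axes.
        return [cx, cy, height / 2,
                hx, 0, 0,
                0, hy, 0,
                0, 0, height / 2]

    children = []
    for i in range(x_count):
        for j in range(y_count):
            children.append({
                "boundingVolume": {"box": box((i + 0.5) * cell_size,
                                              (j + 0.5) * cell_size,
                                              cell_size / 2,
                                              cell_size / 2)},
                "geometricError": 0.0,  # leaf tile: full detail
                "content": {"uri": f"{i}_{j}.b3dm"},  # assumed naming
            })

    return {
        "asset": {"version": "1.0"},
        "geometricError": root_error,
        "root": {
            "boundingVolume": {"box": box(x_count * cell_size / 2,
                                          y_count * cell_size / 2,
                                          x_count * cell_size / 2,
                                          y_count * cell_size / 2)},
            "geometricError": root_error / 2,
            "refine": "ADD",
            "children": children,
        },
    }
```

Serialize the returned dict with `json.dump` to get a tileset.json. Since no tile is decimated, the grid only limits how much is loaded per view; it does not give true level of detail.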

If you are interested in having us tile your data, check out and — we have had a good amount of experience with tiling huge ContextCapture models.