Apparently, doing a full Draco compression *does* affect the indices. I tested it with a simple model earlier, and there the indices remained the same. But with the attached model, they are modified. (It might magically depend on the complexity of the model…? And… the new indices appear to be stored in something that resembles a sparse accessor (but is not)… I’d have to read more here…)

However…

The original model is ~46.2 MB, and a small segment of it looks like this:

With `compressionLevel=0`, the model is ~16.8 MB, and looks like this:

One can see that the *topology* is right, but the lines are a bit squiggly due to the quantization. The default quantization for the positions is **11 bits** (the README currently says 14, but it is actually 11; that has to be fixed).
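For intuition about why 11 bits produces visibly squiggly lines, the worst-case spacing between representable positions can be estimated from the bit count and the bounding-box extent. This is just a back-of-the-envelope sketch; the extent value below is a made-up placeholder, not taken from the attached model:

```python
def quantization_step(extent: float, bits: int) -> float:
    """Worst-case spacing between representable positions when an axis
    of length `extent` is quantized into 2**bits - 1 intervals."""
    return extent / (2**bits - 1)

# Hypothetical model extent of 100 units along the longest axis:
extent = 100.0
print(quantization_step(extent, 11))  # ~0.0489 units per step
print(quantization_step(extent, 14))  # ~0.0061 units, roughly 8x finer
```

Going from 11 to 14 bits makes the grid about 8× finer, which matches the observation below that the 14-bit version is hardly distinguishable from the original.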

So I also tried it with 14 quantization bits for the positions, and this results in ~18 MB, looking like this:

From this perspective, it is hardly distinguishable from the original.

With `compressionLevel=1`, it goes down to ~13.1 MB, but… the indices are messed up:

But… this might not be toooo surprising. (In fact, I found it surprising that the indices did *not* seem to change when I tested it with a simpler model…)
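One quick way to check whether the compression actually reordered the vertices (rather than corrupting them) would be to build an old→new vertex mapping from the decoded positions. This is only a sketch: it assumes positions survive the round trip exactly and are unique, whereas real quantized output would need a tolerance-based lookup:

```python
def vertex_remap(old_positions, new_positions):
    """Map each original vertex index to its index in the decoded mesh.
    Assumes positions round-trip exactly and contain no duplicates;
    quantized data would need a tolerance-based (e.g. rounded-key) lookup."""
    lookup = {pos: i for i, pos in enumerate(new_positions)}
    return [lookup[pos] for pos in old_positions]

old = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new = [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)]  # same vertices, reordered
print(vertex_remap(old, new))  # [1, 2, 0]
```

If the mapping is a non-identity permutation, the vertices were merely reordered, and any index data that bypasses the decoder (like the outline indices) is stale.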

Of course, the linked PR is only a slight improvement compared to a full re-write of the (changed) indices (but… it only took a few minutes to implement, so that’s OK, I guess). But when setting `compressionLevel=0`, it should now be possible to transport the outline extension through `gltf-pipeline` Draco compression, and this still reduces the size considerably.
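For reference, the “full re-write” would presumably boil down to pushing the outline indices through the same old→new vertex mapping that the compression induced. A minimal sketch (the mapping itself would have to come from the decoder, and the function name here is hypothetical):

```python
def remap_outline_indices(outline_indices, old_to_new):
    """Rewrite outline edge indices after the vertices were reordered.
    `old_to_new[i]` is the post-compression index of original vertex i."""
    return [old_to_new[i] for i in outline_indices]

old_to_new = [2, 0, 1, 3]   # hypothetical vertex reordering
edges = [0, 1, 1, 2, 2, 0]  # outline edges, stored as flat index pairs
print(remap_outline_indices(edges, old_to_new))  # [2, 0, 0, 1, 1, 2]
```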

Attached is the one with `compressionLevel=0` and `quantizePositionBits=14`:

test13-draco-compressionLevel-0-14bit.glb (17.6 MB)