Rendering quality issue with 1.1 tilesets (vibrating textures)

Hello everyone,

Since we started using 1.1 tilesets, we noticed a degradation in the visual quality of the rendered images.

Specifically, we see quite a lot of vibration/aliasing in noisy textures (such as grass, for instance). The effect is most noticeable when moving the camera.

The issue is illustrated in the following sandcastle, which presents 3 renditions of the same data:

  • 1.1 tileset with ktx compression
  • 1.1 tileset without ktx compression
  • 1.0 tileset (with draco compression)

The last variant (the 1.0 version) offers a much less noisy, more “stable” experience.

We were wondering what could be the cause of this effect, and if something could be done to alleviate it?

Best regards,
David Hodgetts


There are a few things coming together here, and some of them may warrant a deeper investigation. For now, a quick summary of points that may be relevant.


I tried to capture the three tilesets. (It’s a GIF, limited to 256 colors - which does not do the textures justice, but the point is that it makes it easier to visually compare the final (rendered) state directly)

Cesium Forum 38634 Texture quality

From what I see, there are subtle differences, but I’d have a hard time attributing any concept of “quality” to them (i.e. I couldn’t say which one is “the best one”).


In your sandcastle, you did set
const tilesetOptions = { maximumScreenSpaceError: 1.5 };
This is pretty low compared to the default (16). It may be appropriate, depending on the structure of the data.

This brings up some questions about the tilesets that may be relevant here. It looks like the 1.1 tilesets have been created with Cesium ion. But which tool has been used for creating the 1.0 tileset?

The 1.0 and 1.1 ones are structurally very different, so it’s hard to make any comparisons here.

For example, the 1.0 one defines a geometric error of 2330 for the tileset, and then geometric errors of (4 / pow(2, level)) for the other tiles (i.e. 4.0, 2.0, 1.0, 0.5, …).

The 1.1 one defines a geometric error of 1529 for the tileset, and then 44 for the root tile of an implicit tileset (i.e. the errors will be 44, 22, 11, 5.5, …)

In theory, one way to “align” both could be to use different maximumScreenSpaceError values for these tilesets. (It should be a factor of 10, assuming that it’s linear…). But this is only one of the differences.
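To make the two error progressions concrete, here is a small sketch using the values quoted above from the two tileset JSON files (the helper function is for illustration only; it is not part of any API):

```typescript
// Geometric error per refinement level: the 1.0 tileset halves an initial
// error of 4, the 1.1 (implicit) tileset halves a root-tile error of 44.
const errorAtLevel = (rootError: number, level: number): number =>
  rootError / Math.pow(2, level);

const explicit10 = [0, 1, 2, 3].map((l) => errorAtLevel(4, l));
const implicit11 = [0, 1, 2, 3].map((l) => errorAtLevel(44, l));

console.log(explicit10); // [ 4, 2, 1, 0.5 ]
console.log(implicit11); // [ 44, 22, 11, 5.5 ]
```

Under the (rough) linearity assumption, dividing one of the maximumScreenSpaceError values by about that factor would be a starting point for aligning the two.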

Other aspects are impossible to sensibly compare. For example, I looked at one texture in the 1.0 data set, and it had a size of 1024x1024. A texture in the 1.1 data set had a size of 1536x1546 (!). But these might have been at different levels of detail (It would take more time to systematically analyze this)


One important detail: When I’m looking at the “not ktx2 compressed” tileset, the viewer eventually prints

The tiles needed to meet maximumScreenSpaceError would use more memory than allocated for this tileset.
The tileset will be rendered with a larger screen space error (see memoryAdjustedScreenSpaceError).
Consider using larger values for cacheBytes and maximumCacheOverflowBytes.

to the console. This basically means that it tries to load too much data (too many textures) to fit into memory. It will then fall back to using a lower level of detail.

So in theory, it might be that this lower level of detail then uses lower-resolution textures, which may (visually) appear “more washed-out”, which could (depending on your perspective and expectation) also be called “less noisy”…
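As a side note on that warning: raising the budgets it mentions is a plain tileset-option change. A sketch of what that could look like (the option names are taken from the warning text above; the 1 GiB values are arbitrary placeholders to tune, not recommendations):

```typescript
// Sketch: raising the tile/texture memory budget so that the viewer does
// not have to fall back to memoryAdjustedScreenSpaceError.
const GIB = 1024 * 1024 * 1024;
const tilesetOptions = {
  maximumScreenSpaceError: 16,
  cacheBytes: 1 * GIB,                // budget for cached tile content
  maximumCacheOverflowBytes: 1 * GIB, // extra headroom beyond cacheBytes
};
console.log(tilesetOptions.cacheBytes); // 1073741824
```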


Maybe all that was too “low-level”. At least, I hope it’s not distracting…

Hi there,

Thanks for the quick answer!

With a static camera, switching between the tilesets shows no visible degradation. The visual “vibrations” appear when you move the camera (for example by zooming in / out). It feels like the textures the engine is loading are too high compared to the geometric LOD loaded.

With 1.1 tilesets, we see this little noise on the grass and the trees, that we didn’t have with the 1.0 tilesets.

The 1.1 tilesets were produced using Cesium ion. The 1.0 tilesets were produced with iTwin Capture Modeler.

We set the maximumScreenSpaceError to 1.5 for the 1.0 tilesets because we couldn’t load the highest LOD without getting really close to the tileset.
Now that we process the tilesets in 1.1, we set the MSSE to 16 by default, because 1.5 is way too low, as you mentioned. But even with an MSSE of 16, we still get this noise / flickering effect. Setting the MSSE higher than 16 doesn’t give a good visual result.

We took a screen video without any compression to get an idea of what we experience. You can download the video here

I see the general point of that “noise”. And avoiding that noise is probably the primary goal for you. Hopefully, someone from the CesiumJS core team can give further hints about possible ways to achieve that.


The following part may not be relevant for you.

But I think that in order to resolve this, it might be helpful to have a closer look at some aspects of the data. So I’ll drop some notes/observations here, hoping that it is considered to be (at least interesting, and maybe even) relevant for those who investigate this further.

I had a look at one tile in both the “old” (1.0) and the “new” (KTX-compressed, 1.1) tileset. And I tried to find a tile with similar size and covered area in both tilesets. These tiles, at the time of writing this, are

I extracted the GLB from the B3DM, and had a look at both of them, in https://gltf.report/ , and moved them slightly - I hope the GIF captures that:

Cesium Forum 38634 Texture quality single tile

The main point here is: The KTX-compressed one also shows the noise. (So this is not a ~“CesiumJS rendering issue” in the most narrow sense, but somehow ~“inherent” to the data)

I then had a closer look at the textures themselves.

The old one uses a JPEG image with 512x512 pixels.
The new one uses a KTX2 texture with 1736x1732 pixels.

Two points that might be relevant here:

  • The new one is much larger, obviously…
  • The new one has a size that is not a power of two. (So maybe there’s some upscaling going on somewhere in the rendering engines…? Not sure about that…)
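A quick way to check the power-of-two property mentioned above (a positive integer is a power of two iff exactly one bit is set):

```typescript
// 1024 (the old JPEG texture) is a power of two;
// 1736 and 1732 (the new KTX2 texture) are not.
const isPowerOfTwo = (n: number): boolean => n > 0 && (n & (n - 1)) === 0;

console.log(isPowerOfTwo(1024)); // true
console.log(isPowerOfTwo(1736)); // false
console.log(isPowerOfTwo(1732)); // false
```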

Zooming into the same area of these textures, just for comparison:

The shallow observation that I could make now: Maybe there is more noise in the grass because… there just is more noise in the grass? (This refers to the possibility that I mentioned earlier, namely that the old, small, JPEG-compressed texture may just be so “washed out” and “blurred” that there is no noise that could be visible…)

Even when just looking at that single KTX texture and zooming in and out a bit, the noise is clearly visible:

Cesium Forum 38634 Texture quality single texture

Finally, when looking at the JPEG texture (zoomed out pretty far), one can see that there also is some noise, and that is more prominent when the filter mode is GL_NEAREST (right) than for GL_LINEAR (left):

Cesium Forum 38634 Texture quality single jpg filter

That might appear to be a tangent on the one hand (because I’m pretty sure that GL_LINEAR is used everywhere). But an underlying point is that some of the noise could be explained by some issues with mipmapping. (Roughly: When the JPG textures are mipmapped but the KTX textures are not, or when they are using different GL_TEXTURE_LOD_BIAS values for some reason…)
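To sketch that mipmapping point: the GPU picks a mip level from (roughly) the log2 of the texel footprint per screen pixel, shifted by GL_TEXTURE_LOD_BIAS. A texture without mip levels is always sampled at level 0, which is exactly the minification situation that produces this kind of aliasing. A toy illustration of the selection rule (not CesiumJS code):

```typescript
// Approximate mip level selection: log2(texels covered per screen pixel)
// plus the LOD bias, clamped at the base level. Without mipmaps, level 0
// is sampled regardless of how strong the minification is.
const selectMipLevel = (texelsPerPixel: number, lodBias = 0): number =>
  Math.max(0, Math.round(Math.log2(texelsPerPixel) + lodBias));

console.log(selectMipLevel(1)); // 0 (no minification, base level)
console.log(selectMipLevel(8)); // 3 (8x8 texels per pixel -> 1/8-resolution mip)
```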


EDIT, literally as a bottom line: It’s not unlikely that the reason for that noise is what you mentioned, namely that

… the textures the engine is loading are too high compared to the geometric LOD loaded.

Thank you @Marco13 for your thorough analysis of our issue; we appreciate it a lot.

I’d like to clarify a few points: We’re using identical input data for both version 1.0 and 1.1 of the tiler, and we observe the same issue regardless of whether KTX compression is enabled in the 1.1 pipeline.

Based on your examples, it seems we may have identified the root cause: the new reality tiler appears to be applying unnecessarily high resolutions to tiles that don’t require such detail. In your example, when loading a Level 20 tile, we’re receiving texture quality equivalent to Level 23 (raw data), which is higher than needed for proper display. Would you agree with this assessment of the tiler’s behavior?

I’m hoping that someone can confirm these assumptions. There are some unknowns and things that may have to be examined more thoroughly. (For example: I think that the built-in mipmapping of OpenGL/WebGL is not applicable to compressed textures, which would explain why the effect is only visible (or at least much stronger) for KTX. But if it is also happening for non-KTX textures, then there may be an additional issue. Usually, mipmapping aims exactly at preventing that sort of aliasing/noise.) But from my current understanding, and the symptoms so far, it indeed looks like the texture resolution may be too high for the size at which the geometry is displayed, yes. (Some details are still being examined internally - I’ll try to post updates here if there is a conclusion about that)

Thank you @Marco13 for investigating our issue. Please don’t hesitate to reach out if you need any additional information or a full dataset to explore this in more detail.

We remain very enthusiastic about the 3D Tiles 1.1 format and deeply appreciate the work on this format update. The performance gains and bandwidth savings we’re seeing are impressive and exactly what we need for our production environment. However, we’re currently blocked from deploying these assets due to the visual artifacts we’re experiencing.

Hello @Marco13,
Have you made any progress in your investigations regarding the rendering issues with 1.1 tilesets? This situation has become very problematic for us because with the deprecation of version 1.0 on the Reality Tiler, we’re forced to use version 1.1 despite the significant decrease in visual quality.

All our clients are reporting this “grainy flickering effect” on the new assets we’re putting into production. We appreciate the performance gains and bandwidth savings offered by the 1.1 format, but these advantages are overshadowed by the visual artifacts that currently prevent us from deploying these assets.

Could you at least restore version 1.0 on the Reality Tiler? This would provide us with a workable solution in the meantime. This temporary rollback would be extremely helpful for our ongoing projects and would give your team the time needed to properly address the mipmapping issues with compressed textures in the 1.1 implementation.

Thanks a lot

I had another closer look at this today (triggered by your post) :slightly_smiling_face:

First, a short note:

The old tiler has been replaced by the new one, and further development efforts (like performance- and quality improvements) will only go into the new one.


About the noisy textures:

A very high-level summary is that the problem is indeed caused by a combination of several things: The textures that are stored in the glTF/GLB files are compressed with KTX. They do not include MipMaps. And for compressed textures, the client cannot automatically generate MipMaps.
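For reference, the minification filter values that require mip levels are the four *_MIPMAP_* constants from OpenGL/WebGL (glTF uses the same numeric values). If a sampler requests one of them but the GPU-compressed texture ships without mip levels, the client has no cheap way to create them. A small sketch with the standard constants (not CesiumJS code):

```typescript
// Standard OpenGL/WebGL minification filter constants (shared by glTF).
// The four *_MIPMAP_* filters (9984..9987) require mip levels to be present
// or generated; for formats like KTX2/Basis, generation is not possible.
const GL_LINEAR = 9729;
const MIPMAP_FILTERS = new Set([9984, 9985, 9986, 9987]);

const requiresMipmaps = (minFilter: number): boolean =>
  MIPMAP_FILTERS.has(minFilter);

console.log(requiresMipmaps(GL_LINEAR)); // false
console.log(requiresMipmaps(9985));      // true (GL_LINEAR_MIPMAP_NEAREST)
```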

There are different options for how this could be alleviated in the future. No specific decision has been made. But they are roughly in the two categories of

  1. either generating MipMaps during the tiling process and storing them as part of the KTX
  2. performing some form of generic texture filtering on the client side (i.e. CesiumJS in this case)

There are some obvious trade-offs involved, in terms of file size and performance. So there is no decision yet, and no exact timeline.

Yes, that does not immediately help you. But … there might be a workaround. It might be a bit clumsy, but I’d like to at least mention the possibility: It is possible to post-process a tileset in various ways. Specifically, this includes the use of non-compressed textures, and the insertion of the mipmapping flag that should cause the noise to disappear.

So that workaround in your case would be to create the tileset with the “KTX” flag being disabled (meaning that it will contain PNG textures), and then to post-process the tile content with a custom snippet that is based on the 3d-tiles-tools. The necessary steps would be:

  • Clone the current state of the 3D Tiles Tools (i.e. the current main state, which is this state at the time of writing this)
    git clone https://github.com/CesiumGS/3d-tiles-tools.git
    cd 3d-tiles-tools
  • Install the dependencies
    npm install
  • Add the code that is shown below as a file called ModifyTextures.ts in the project root directory
  • Adjust the tilesetSourceName and tilesetTargetName in that code, for the path where the tileset is stored and where the result should be written…
  • Run that snippet
    npx ts-node ModifyTextures.ts

The snippet is shown here:

//
// NOTE: None of the functionality that is shown here is part of the
// public API of the 3D Tiles tools. The functions that are shown here
// use an INTERNAL API that may change at any point in time.
//
import { TileContentProcessing } from "./src/tools/tilesetProcessing/TileContentProcessing";
import { GltfTransform } from "./src/tools/contentProcessing/GltfTransform";
import { ContentDataTypes } from "./src/base/contentTypes/ContentDataTypes";
import { textureCompress } from "@gltf-transform/functions";

// The source and target for the conversion operation
const tilesetSourceName = "./data/input/tileset.json";
const tilesetTargetName = "./data/output/tileset.json";

// WARNING: The target will be overwritten without notice!
const overwrite = true;

// Read a glTF-Transform document from the given input GLB buffer,
// modify the materials, and return a new buffer that was created 
// from the modified document
async function modifyMaterialsInGlb(inputGlb: Buffer): Promise<Buffer> {
  const io = await GltfTransform.getIO();
  const document = await io.readBinary(inputGlb);
  const root = document.getRoot();

  // Step 1: Set the minification filter of all materials to
  // GL_LINEAR_MIPMAP_NEAREST, which causes automatic generation
  // of MipMaps in the client
  const materials = root.listMaterials();
  for (const material of materials) {
    const baseColorTextureInfo = material.getBaseColorTextureInfo();
    if (baseColorTextureInfo) {
      const GL_LINEAR_MIPMAP_NEAREST = 9985;
      baseColorTextureInfo.setMinFilter(GL_LINEAR_MIPMAP_NEAREST);
    }
  }

  // Step 2 (Optional): Compress the textures to JPEG
  await document.transform(
    textureCompress({ targetFormat: "jpeg", quality: 70 })
  );

  const outputGlb = await io.writeBinary(document);
  return Buffer.from(outputGlb);
}

async function runConversion() {

  // Create a `TileContentProcessor` that calls modifyMaterialsInGlb
  // for all GLB files
  const tileContentProcessor = async (
    content: Buffer,
    type: string | undefined
  ) => {
    if (type !== ContentDataTypes.CONTENT_TYPE_GLB) {
      return content;
    }
    // A pragmatic try-catch block for the actual modification.
    // In a real application, different error handling could
    // be used.
    console.log("Modifying materials...");
    try {
      const modifiedContent = await modifyMaterialsInGlb(content);
      console.log("Modifying materials... DONE");
      return modifiedContent;
    } catch (e) {
      console.log(`ERROR: ${e}`);
      return content;
    }
  };
  // Process the tileset source, and write it to the tileset target,
  // applying the `TileContentProcessor` to all tile contents
  await TileContentProcessing.process(
    tilesetSourceName,
    tilesetTargetName,
    overwrite,
    tileContentProcessor
  );
}

runConversion();

This will read the tileset JSON and process all GLB files, to set the minification filter for all textures to GL_LINEAR_MIPMAP_NEAREST, and (optionally) compress the textures as JPEG.

(Note: The file size and memory consumption of the tileset will be larger than for the one that uses KTX. It could be possible to extend the functionality of the 3D Tiles Tools to allow the creation of mipmapped KTX files, but this is not implemented yet. Converting the textures to JPEG resembles what has been done in the old tiler, with the option to adjust that quality: 70 accordingly)

For testing, I did apply this process to a tileset that contains a single tile from the tileset that you provided. The left one is the original one, and the right one is the post-processed one:

Cesium Forum 38648 Workaround


Of course, it would be preferable to not have this sort of manual post-processing. But some details about the handling of MipMaps in KTX still have to be investigated.

Hello @Marco13

Since your last update in April, we’re wondering if there have been any developments on implementing native mipmap generation either:

  1. In the Reality Tiler during the tiling process (storing mipmaps as part of KTX2), or

  2. In CesiumJS through client-side texture filtering

We understand the trade-offs involved in terms of file size and performance. From our perspective, we’d prefer option 1 (mipmapped KTX2 files) even with the increased file size, as it would eliminate the need for post-processing and maintain the performance benefits of GPU-native texture compression. We’re willing to accept larger tileset sizes if it resolves the visual quality issues.

Is there any timeline or roadmap you can share regarding these improvements? In the meantime, are there any plans to optimize the 3d-tiles-tools to make the post-processing workflow more efficient for batch operations on large tilesets?

Thanks again for your continued support on this matter.

I brought this up as an internal issue. There has been some discussion. I’ll ā€œbumpā€ it and post possible updates here.

In the meantime, trying to address some of the points that you mentioned:

I (subjectively) think that proper mipmapping in the KTX files is the way to go. Others might disagree, but my reasoning is pretty simple: It’s a “built-in” functionality, with close-to-zero implementation effort. And no matter how smart some sophisticated post-processing is and how well its parameters are tweaked anywhere, I claim that it will never be as good as the built-in, hardware-supported (!) mechanisms of the GPU itself, for figuring out: “Hey, I’m writing 50 ‘image pixels’ into 1 ‘screen pixel’ - maybe I should turn it down a notch…” (i.e. use a different MIP level).

(The file size difference that may be caused by MipMaps is something to keep in mind. But … when I did run the test with that single tile from your data set, I noticed that the size of the old GLB was 720KB, and the size of the GLB with the mipmapped texture was 563KB. Yes, the texture itself was a few bytes larger, but the overall GLB became smaller. Apparently, the tiler generated data that wasted some space with other things. I didn’t investigate in detail what exactly that was…)

In the meantime, are there any plans to optimize the 3d-tiles-tools to make the post-processing workflow more efficient for batch operations on large tilesets?

If this question is solely about the time that it takes to process the data set: There is not sooo much that can be done from the perspective of the 3d-tiles-tools. You may have seen (in the snippet that I posted) that the core of the operation there is just a small “read-modify-write” block that just calls external libraries to do the main work. One can look at these libraries or alternatives to see whether anything stands out or can be done more efficiently, but I wouldn’t expect some ~“orders-of-magnitude speedup” there.

(An aside: I think that creating a tileset is often not “time-critical”. Let it run overnight, or for two days. When you have a good tileset afterwards that can be delivered to hundreds of clients efficiently and with high quality, then this could/should often be worth the effort of one day of pre-processing)

If the question was about the API or “usability” of the tools: Note the big disclaimer in the snippet: Nothing of that is “officially supported” in terms of a stable API, and even less as a “command-line functionality”. I try to keep the API stable, but not everything is under my control here. (There’s also a reason why I opened Serialization format for transforms · Issue #1391 · donmccurdy/glTF-Transform · GitHub, but that may be a bit of a tangent for now…)

I’d agree that mipmapping the KTX2 textures would probably be the best choice here, preferably at a resolution closer to expected on-screen size. I don’t think there’s much the CesiumJS runtime can do otherwise, given an already-compressed KTX2 texture.

The glTF Transform library does have utilities for compressing and optionally mipmapping KTX2 textures, assuming a dependency on the @gltf-transform/cli package and the KTX-Software CLI is OK when connected into 3d-tiles-tools… I see 3d-tiles-tools has a different implementation of KTX2 compression, though; were there other restrictions here?


I see 3d-tiles-tools has a different implementation of KTX2 compression, though, were there other restrictions here?

The main intention there was to avoid the dependency on the KTX-Software CLI.

The KTX-Software is … a bit chunky … and I thought that it would be nice to have that self-contained, standalone, pure TS (with WASM) solution, readily available after an npm install.

(Also, this was done when the plan still was to even have that as some standalone @3d-tiles-tools/ktx package - everything is prepared for that, but it is not deployed like that…).

The JPEG conversion that is currently done in that snippet could trivially be changed (using imports from the internal 3d-tiles-tools API, analogous to the snippet above) to something like

const etc1sOptions: KtxEtc1sOptions = {
  compressionLevel: 2,
  qualityLevel: 96,
};
const transformToKtx = GltfTransformTextures.createTransformTexturesToKtx(
  etc1sOptions,
  {}
);
await document.transform(transformToKtx);

But

  1. I think that the BinomialLLC encoder does not support MipMapping (or if it does, the function is not routed through). (Does the glTF-Transform CLI support that?)
  2. KTX texture generation can be slow.

The speed strongly depends on the parametrization, but for some parameters, it’s really, really slow: Running an ETC1S/Compression 5/Quality 255 for a 1024x1024 image takes >6 seconds (!) on my machine…
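When comparing encoder parametrizations, a tiny timing wrapper is enough to reproduce such measurements. A sketch (the task passed in is a stand-in for the actual encoder call, which is not shown here):

```typescript
// Minimal timing wrapper for comparing encoder settings. Wrap the actual
// (async) encoding call in the task function; the result is passed through.
const timed = async <T>(label: string, task: () => Promise<T>): Promise<T> => {
  const start = Date.now();
  const result = await task();
  console.log(`${label}: ${Date.now() - start} ms`);
  return result;
};

// Example with a dummy task standing in for an encoder call:
timed("dummy encode", async () => 42).then((r) => console.log(r));
```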

The glTF Transform CLI uses the KTX Software CLI, and so it does support mipmapping, enabled by default. The option to disable mipmaps in glTF Transform was only recently requested and added, by a user doing 2D-only graphics, so I haven’t heard that the file size difference has been an obstacle.

But, yeah, the reason KTX2 compression exists only in @gltf-transform/cli (Node.js) and not @gltf-transform/functions (Web, Deno, and Node.js) is that I only wanted to support one encoder, and chose the KTX Software CLI for the performance. It is intended to be ‘easy’ to write a custom transform step, though, and I know at least a few people have plugged in WASM KTX2 encoders that way.

If performance of a stable/maintained WASM build were within ~1.5x of native, and basic features like mipmaps were supported, I would switch over to WASM. Whether the same tradeoffs make sense for 3d-tiles-tools, I’m not sure!

There certainly are different options that could be considered here. A pragmatic one could be to try the KTX CLI, and use the BinomialLLC/WASM-based one as a fallback if KTX is not available (maybe with some CLI option to force one or the other). But of course, that would restrict the available options to the intersection of the options offered by both - so it would not include MipMapping.

In general, it would be nice to have a pure JS/WASM version of the KTX Software.

I found it surprisingly hard to get a clear idea of what is actually available there. There are the wrappers for “libktx”, but I think that “libktx” itself is no longer maintained. And there are bindings to some transcoder, where the documentation at KTX Javascript Wrappers Reference: Basis Universal Image Transcoder binding says

Warning
Deprecated. Use the container independent transcoder from the Binomial LLC repo instead:

Whether or not libktx includes a “transcoder” isn’t entirely clear to me either. I think that it doesn’t - it (only) contains an “encoder”(?) (There are several layers of very domain-specific vocabulary for what people would like to accomplish with magic.save(rgbaPixels, "file.ktx");…)

In addition to the bindings, there should/could be a layer that simplifies all that. The KtxUtility was my first stab at that for creating KTX files. And now I wanted to point to a function dealing with mipmapping in JS to illustrate some of the further complexities of reading/using KTX that should be hidden behind some convenience layer, but then … I scrolled all the way to the top, so… yeah…


I think that the BinomialLLC encoder does not support MipMapping (or if it does, the function is not routed through)…

Are we sure this is not supported? The option appears to be hard-coded to false in KTXUtility.ts.

The ktx2-encoder package claims to support mipmapping, also based on the BinomialLLC encoder compiled to WASM, and optionally integrates with glTF Transform. I haven’t tested ktx2-encoder yet. There are a few possible improvements that look promising — use Web Workers, configure certain encoding settings based on the glTF material texture slot… — if the basic functionality works well enough. The default encoding settings are probably reasonable enough for albedo textures in 3D Tiles already, though.

I thought that this was not supported, but that was wrong.

So this is something that probably could be routed through the KtxUtility by just passing on that boolean via the KtxOptions. But it opens the “can of worms” of 6 (six) other mip-related options.

(More generally, I found the number and technical depth of some of the options a bit overwhelming - so … when exactly should I set the RDOUASTCMaxAllowedRMSIncreaseRatio to 42? :grimacing: )

EDIT: I didn’t have the ktx2-encoder on the radar. Looks interesting (including the online tool). Maybe that could eventually even replace the KtxUtility.

(Could be another EDIT, but to ‘bump’): @donmccurdy Is performance the only reason why you have not used things like the ktx2-encoder in glTF-Transform? I haven’t run dedicated and explicit performance comparisons yet, but as mentioned above, I could imagine the “(potentially) slow path” to be a fallback.

At this point mainly because ktx2-encoder is new — I haven’t found the time to evaluate it. From prior discussions, I’m not sure how WASM single-threaded performance will compare to KTX Software single-threaded performance (which I’d want to check before bothering to add workers). And it might need some custom packaging, dealing with WASM in npm dependencies can be tricky. I’m not keen to add it only as a fallback in glTF Transform, though — it would need to replace KTX Software. WASM dependencies cost 100x the maintenance burden of other dependencies, having two dependencies for the same purpose is too much for me.

The outputs of listTextureInfo(texture), listTextureChannels(texture), and getTextureColorSpace(texture) should contain enough information to choose the necessary mipmapping options. But, yeah, texture compression options tend to feel a bit overwhelming for me too. :slight_smile: