
Antialiasing has artifacts when logarithmic depth buffer is enabled #22017

Closed
asbjornlystrup opened this issue Jun 18, 2021 · 23 comments

@asbjornlystrup

When the logarithmic depth buffer is enabled, antialiasing fails in areas where geometry intersects or lines up. See screenshot below. It feels like this would be a common issue, but I couldn't find any other post on it.

With logarithmic depth buffer enabled. Notice the jagged, aliased lines.
[screenshot]

With logarithmic depth buffer disabled.
[screenshot]

I've encountered this on desktop computers. I've seen it on both Ubuntu and Fedora Linux, in the Chrome browser. I don't remember how it looks on Windows, but it would seem strange if it were different there.

Right now we're on version 0.129.0 of three.js, but it's been like this ever since we started using the logarithmic depth buffer (1-2 years ago), and it happens with all geometry.

Is it like this for everyone else? Any idea how to fix it, or is it something we just have to live with?

Thanks.
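For anyone wanting to reproduce: a minimal sketch of the kind of scene that shows it (assumed setup, not our production code; the fiddles later in this thread are live versions):

```js
import * as THREE from 'three';

// MSAA plus logarithmic depth buffer, as in our app.
const renderer = new THREE.WebGLRenderer( {
	antialias: true,
	logarithmicDepthBuffer: true // set to false and the intersection edge is antialiased again
} );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.position.set( 0, 2, 5 );
camera.lookAt( 0, 0, 0 );

// Two planes crossing each other; the aliasing shows up along their intersection line.
const planeA = new THREE.Mesh( new THREE.PlaneGeometry( 4, 4 ), new THREE.MeshBasicMaterial( { color: 0x2244ff } ) );
const planeB = planeA.clone();
planeB.material = new THREE.MeshBasicMaterial( { color: 0x22ffcc } );
planeB.rotation.x = Math.PI / 2;
scene.add( planeA, planeB );

renderer.render( scene, camera );
```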

@Mugen87
Collaborator

Mugen87 commented Jun 18, 2021

Do you mind sharing a live link that demonstrates the aliasing?

@Mugen87
Collaborator

Mugen87 commented Jun 21, 2021

I have tested webgl_camera_logarithmicdepthbuffer on three systems (Android, macOS, and Windows) with the latest Chrome, and I don't see the reported aliasing.

I've seen it on both Ubuntu and Fedora Linux, in the Chrome browser. I don't remember how it looks on Windows, but it would seem strange if it were different there.

Um, this could be a GPU driver issue. In that case, you might want to try updating the driver or switching to a GPU from a different vendor.

@mrdoob
Owner

mrdoob commented Jun 21, 2021

Indeed, sounds like a Linux driver issue.
Unfortunately there's not much we can do about this.

@mrdoob mrdoob closed this as completed Jun 21, 2021
@asbjornlystrup
Author

asbjornlystrup commented Jun 21, 2021

The logarithmic depth buffer example doesn't have any intersecting geometry (or geometries in contact with each other), so it doesn't show the issue. That example works fine on my end as well. I made a JSFiddle and tested it on Windows now, and the same bug is present there as on Linux. (EDIT: See post below for live link.)

You can see the left intersection from blue to teal has no antialiasing:
[screenshot]

If you go into the fiddle's code and disable the logarithmicDepthBuffer by changing logarithmicDepthBuffer: true to logarithmicDepthBuffer: false, the antialiasing works as expected:
[screenshot]

@WestLangley
Collaborator

@asbjornlystrup Please update your example to the current three.js revision.

//

Hmmm... I'm seeing the same issue on my M1 iMac.

@Mugen87 Disabling EXT_frag_depth in WebGL1 or WebGL2 forces three.js to emulate the logarithmic depth buffer in software, and the problem disappears. Can you confirm?

```js
// hack WebGLProgram.js
rendererExtensionFragDepth: false, // isWebGL2 || extensions.has( 'EXT_frag_depth' ),
```
@asbjornlystrup
Author

Please update your example to the current three.js revision.

Here's the fiddle with v128 (I didn't manage to find an online source for v129): https://jsfiddle.net/kdLxsf9u/4/

@Mugen87
Collaborator

Mugen87 commented Jun 22, 2021

Okay, with the new fiddle I see the problem on Windows and macOS.

@Mugen87 Disabling EXT_frag_depth in WebGL1 or WebGL2 forces three.js to emulate the logarithmic depth buffer in software, and the problem disappears. Can you confirm?

Yes.

@mrdoob
Owner

mrdoob commented Jun 22, 2021

Would it be a bad idea to stop relying on EXT_frag_depth for this feature and emulate instead?

@Mugen87
Collaborator

Mugen87 commented Jun 22, 2021

I guess I would first report the issue to the Chromium team. Maybe it's possible to fix this in the browser implementation/ANGLE.

@mrdoob
Owner

mrdoob commented Jun 23, 2021

I guess I would first report the issue to the Chromium team. Maybe it's possible to fix this in the browser implementation/ANGLE.

I can reproduce on Firefox and Safari too though.

Also, here's an "easier to see" version of the fiddle: https://jsfiddle.net/gs231L0u/

@Mugen87
Collaborator

Mugen87 commented Jun 23, 2021

Maybe related (source):

I haven't come that far with my code yet, but I presume that assigning to gl_FragDepth will break the logic of MSAA, where the fragment shader is executed once per pixel of the rasterized element. It means that in this case it outputs the same depth value for all MSAA pixel samples, which is not what we want - it disables the antialiasing.
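For context, this is where that write happens in three.js (paraphrased from the r129-era ShaderChunk/logdepthbuf_fragment.glsl.js); the single per-fragment gl_FragDepth assignment is what replaces the hardware's per-sample interpolated depth:

```js
// Paraphrased from three.js src/renderers/shaders/ShaderChunk/logdepthbuf_fragment.glsl.js (r129-era).
// With EXT_frag_depth (or WebGL2), the fragment shader overwrites the
// per-sample interpolated depth with one logarithmic value per fragment:
export default /* glsl */`
#if defined( USE_LOGDEPTHBUF ) && defined( USE_LOGDEPTHBUF_EXT )

	gl_FragDepth = vIsPerspective == 0.0 ? gl_FragCoord.z : log2( vFragDepth ) * logDepthBufFC * 0.5;

#endif
`;
```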

@mrdoob
Owner

mrdoob commented Jun 24, 2021

That makes me more confident that we should just use the emulated logarithmic depth buffer code.

@jbaicoianu
Contributor

Just catching up on the thread now - it's been several years since I actually used the logarithmic depth buffer code myself. I stopped using it because, while it does help in specific scenes - like the example scene, which as pointed out doesn't have any intersecting objects - the way it interacts with the depth buffer introduces a lot of edge cases. Its floating-point trade-off spreads z-buffer precision more evenly across the whole range, but at the expense of losing accuracy for objects that are close together.

I'm not specifically familiar with how writing gl_FragDepth interferes with MSAA, or whether that might be fixed by changing the ordering of the shader chunks, but beyond that issue there are trade-offs both with and without the extension.

Without the extension, there are issues when the interpolated depth is applied over a large enough (screen-space) triangle: the inaccuracies introduced can cause z-fighting that's even worse than what you were trying to avoid in the first place. The suggestion from the original article I based this implementation on is to dynamically tessellate your objects to prevent this situation from occurring.

With the extension, you get per-pixel depth values, which mostly resolves the issue with large triangles, but at the cost of disabling some z-test optimizations, and it introduces weird interactions with other parts of the fragment shader that might try to use the z-buffer value later on.

So I'm not sure if I really have a strong recommendation either way, since both approaches have trade-offs depending on the type of scene you're working with.
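For reference, both paths described above are visible in the r129-era vertex chunk (paraphrased; see three.js ShaderChunk/logdepthbuf_vertex.glsl.js):

```js
// Paraphrased from three.js src/renderers/shaders/ShaderChunk/logdepthbuf_vertex.glsl.js (r129-era).
export default /* glsl */`
#ifdef USE_LOGDEPTHBUF

	#ifdef USE_LOGDEPTHBUF_EXT

		// Extension path: pass values through and write gl_FragDepth per fragment.
		vFragDepth = 1.0 + gl_Position.w;
		vIsPerspective = float( isPerspectiveMatrix( projectionMatrix ) );

	#else

		// Emulated path: warp the vertex depth here; the rasterizer then interpolates
		// it across the triangle, which is where large triangles lose precision.
		if ( isPerspectiveMatrix( projectionMatrix ) ) {

			gl_Position.z = log2( max( EPSILON, gl_Position.w + 1.0 ) ) * logDepthBufFC - 1.0;
			gl_Position.z *= gl_Position.w;

		}

	#endif

#endif
`;
```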

@Mugen87
Collaborator

Mugen87 commented Jun 25, 2021

Considering this feedback, I vote to keep both approaches in the engine since that is more flexible. Depending on the use case, the extension can be preferable to the emulation and vice versa.

@WestLangley
Collaborator

from the original article

For the record, we implemented a later version described in
http://outerra.blogspot.com/2013/07/logarithmic-depth-buffer-optimizations.html

@WestLangley
Collaborator

I vote to keep both approaches in the engine since it is just more flexible.

As implemented, the user has no control over which technique is used.

If you want flexibility, you will have to change the API.

@Mugen87
Collaborator

Mugen87 commented Jun 25, 2021

I forgot that #22017 (comment) was a modification in the renderer. In this case, a parameter that controls the behavior might be an option, e.g. logarithmicDepthBufferForceEmulation (default false).
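A hypothetical sketch of what that could look like (the parameter does not exist in three.js; the name is taken from the suggestion above):

```js
// Hypothetical API sketch only - logarithmicDepthBufferForceEmulation was
// proposed in this thread and never implemented:
const renderer = new THREE.WebGLRenderer( {
	antialias: true,
	logarithmicDepthBuffer: true,
	logarithmicDepthBufferForceEmulation: true // skip gl_FragDepth, use the vertex-shader emulation
} );
```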

@mrdoob
Owner

mrdoob commented Jun 28, 2021

@jbaicoianu

Just catching up on the thread now - it's been several years since I actually used the logarithmic depth buffer code myself

What's your approach now?

@WestLangley
Collaborator

And then there is this... Reversed Depth Buffer

@jbaicoianu
Contributor

@mrdoob

What's your approach now?

Well, mostly I'm no longer working on scenes with the type of scale that needs a logarithmic depth buffer. When I started using Three.js I was working on a space sim engine with 1:1 planets and seamless transitions from space to ground, so supporting a wide range of Z values was important. Since then my engine has evolved to be used more for human-scale worlds, and content creators can specify their own static near/far values based on their specific scenes.

I also experimented for a while with adaptive near/far values: each frame, I'd look at all the objects in the view frustum with an infinite far plane, then combine each object's distance and bounding-sphere radius to calculate new near/far values on the fly and neatly frame the scene based on its contents. This approach worked well enough, but since I shifted focus it ended up being complexity that wasn't really necessary for the most common use cases, so I stopped using it as well.
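A rough sketch of that idea (my reading of it, not the actual engine code; it simplifies by traversing every mesh instead of frustum-culling first):

```js
// Fit camera.near/camera.far to the scene contents each frame.
// Simplification: walks all meshes rather than only those in the view frustum.
function fitNearFar( scene, camera ) {

	let near = Infinity, far = 0;

	scene.traverse( ( object ) => {

		if ( ! object.isMesh ) return;

		if ( object.geometry.boundingSphere === null ) object.geometry.computeBoundingSphere();

		// Bounding sphere in world space, then distance from the camera.
		const sphere = object.geometry.boundingSphere.clone().applyMatrix4( object.matrixWorld );
		const distance = sphere.center.distanceTo( camera.position );

		near = Math.min( near, Math.max( 0.001, distance - sphere.radius ) );
		far = Math.max( far, distance + sphere.radius );

	} );

	if ( far > near ) {

		camera.near = near;
		camera.far = far;
		camera.updateProjectionMatrix();

	}

}
```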

@Popov72

Popov72 commented Dec 20, 2021

And then there is this... Reversed Depth Buffer

Note that a reversed depth buffer does not work in WebGL because of the [-1..1] range of Z in NDC. See https://developer.nvidia.com/content/depth-precision-visualized:

OpenGL by default assumes a [-1, 1] post-projection depth range. This doesn't make a difference for integer formats, but with floating-point, all the precision is stuck uselessly in the middle. (The value gets mapped into [0, 1] for storage in the depth buffer later, but that doesn't help, since the initial mapping to [-1, 1] has already destroyed all the precision in the far half of the range.) And by symmetry, the reversed-Z trick will not do anything here.

However, it can be used in WebGPU, which has a [0..1] range. For example: https://playground.babylonjs.com/#67JFXI#15 => in WebGPU there is no z-fighting between the green and gray spheres (at least at the starting position; if you move the camera you will see some artifacts. This playground is only used for internal validation tests; it is certainly not advisable to have two objects separated by only 0.00015 units!).
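A sketch of the reversed-Z setup in raw WebGPU terms (assumed descriptors, not a three.js or Babylon.js API):

```js
// Reversed-Z in a [0..1] clip-space API like WebGPU; per the quote above,
// this does not help in WebGL's default [-1..1] range.
const pipelineDepthStencil = {
	format: 'depth32float',   // floating-point depth is what makes reversed-Z pay off
	depthWriteEnabled: true,
	depthCompare: 'greater'   // reversed-Z: larger stored depth means nearer
};

// In the render pass, clear depth to 0.0 (the new "far") instead of 1.0:
const depthStencilAttachment = {
	view: depthTexture.createView(), // assumes a depth32float texture created elsewhere
	depthClearValue: 0.0,
	depthLoadOp: 'clear',
	depthStoreOp: 'store'
};
// The projection matrix must also be set up to map far -> 0 and near -> 1.
```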

@Mugen87
Collaborator

Mugen87 commented Mar 1, 2024

After removing WebGL 1 support from the renderer, there is only one code path left, which always writes to gl_FragDepth.

I would not avoid the usage of gl_FragDepth just because MSAA has issues with it. As with HDR, MSAA is not necessarily the best choice. It is probably better to use a post-processing AA technique like FXAA/SMAA, or the higher-quality SSAA/TAA if possible.
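For example, a minimal FXAA setup with the examples EffectComposer (addon paths as in recent three.js releases; adjust to your build):

```js
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/addons/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/addons/postprocessing/ShaderPass.js';
import { FXAAShader } from 'three/addons/shaders/FXAAShader.js';

const composer = new EffectComposer( renderer );
composer.addPass( new RenderPass( scene, camera ) );

// FXAA runs as a screen-space pass, so it does not care about gl_FragDepth.
const fxaaPass = new ShaderPass( FXAAShader );
fxaaPass.material.uniforms[ 'resolution' ].value.set( 1 / window.innerWidth, 1 / window.innerHeight );
composer.addPass( fxaaPass );

// In the render loop, call composer.render() instead of renderer.render( scene, camera ).
```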

@Mugen87 Mugen87 closed this as completed Mar 1, 2024
@Mugen87 Mugen87 added this to the r163 milestone Mar 1, 2024
@mrdoob mrdoob removed this from the r163 milestone Mar 1, 2024
@CodyJasonBennett
Contributor

CodyJasonBennett commented Mar 18, 2024

I would not avoid the usage of gl_FragDepth just because MSAA has issues with it. As with HDR, MSAA is not necessarily the best choice. It is probably better to use a post-processing AA technique like FXAA/SMAA, or the higher-quality SSAA/TAA if possible.

FWIW, and for future readers: setting that variable opts out of early-z optimizations, which is critical for mobile/TBDR GPUs and overdraw. This was acknowledged in #22017 (comment). I'll open a new issue if I find anything actionable on that end. I notice the older code paths were removed along with WebGL 1 support, so I had to check here whether anything had improved, or whether there were any experiments with using the old code path on its own or combined with gl_FragDepth (so MSAA gets correct triangle coverage, although use of gl_FragDepth will still deopt). That's at least where I'm starting, since Firefox will not implement EXT_clip_control, and I don't know if I could either before WebGPU.
