
Allow "auto" layout args for the create_compute_pipeline #423

Merged: 19 commits into pygfx:main from dskkato:allow_auto_for_layout_args, Nov 23, 2023

Conversation

@dskkato (Contributor) commented Nov 18, 2023

I attempted an implementation that allows "auto" for the layout argument in create_compute_pipeline, but to be honest, I'm not entirely clear on how the behavior of wgpuComputePipelineGetBindGroupLayout is defined for a pipeline created with ffi.NULL as the layout argument. As implemented in this PR, it seems to work correctly only when the group_id is 0; otherwise, it crashes.

However, I can confirm that applying this patch enables the same syntax as the wgpu compute shader example linked below.

https://github.com/gfx-rs/wgpu/blob/fd53ea90e675d94f1d79a1c3c44b2f356cecd9c5/examples/hello-compute/src/main.rs#L106-L123
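For illustration, here is a minimal wgpu-py sketch of the same pattern (assuming this patch is applied; the shader and helper usage are illustrative, not taken from the PR):

```python
import wgpu
import wgpu.utils  # provides get_default_device()

device = wgpu.utils.get_default_device()

shader_source = """
@group(0) @binding(0)
var<storage, read_write> data: array<u32>;

@compute @workgroup_size(1)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
    data[gid.x] = data[gid.x] * 2u;
}
"""
module = device.create_shader_module(code=shader_source)

# The new behavior: pass "auto" instead of an explicit pipeline layout.
pipeline = device.create_compute_pipeline(
    layout="auto",
    compute={"module": module, "entry_point": "main"},
)

# Query the generated layout; at the time of writing, only group_id 0
# works reliably (see above).
bind_group_layout = pipeline.get_bind_group_layout(0)
```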

I'm not sure if this patch can be merged, but I'm documenting the necessary steps here as a memo for the work required to accept "auto." If it seems like progress can be made, I'll proceed with the necessary tasks.

@almarklein (Collaborator) left a comment

Thanks! This looks good, and you also added a test 👍 I only have one small comment, about covering the else clause in the layout triage.

> As implemented in this PR, it seems to work correctly only when the group_id is 0; otherwise, it crashes.

I'll have a look too.
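For reference, the layout triage mentioned above might look roughly like this (a hypothetical sketch in wgpu-py's style, not the actual diff):

```python
# Hypothetical triage inside create_compute_pipeline: "auto" maps to
# ffi.NULL so wgpu-native generates the layouts itself; the final else
# clause is the case the review comment asks to cover.
if layout == "auto":
    layout_id = ffi.NULL
elif isinstance(layout, GPUPipelineLayout):
    layout_id = layout._internal
else:
    raise TypeError(
        "create_compute_pipeline() 'layout' arg must be a "
        "GPUPipelineLayout or the string 'auto'."
    )
```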

Review comment on wgpu/backends/wgpu_native/_api.py (outdated; resolved)
@almarklein (Collaborator)

> As implemented in this PR, it seems to work correctly only when the group_id is 0; otherwise, it crashes.
>
> I'll have a look too.

It nearly works, but there is no way (that I know of) to know how many bind groups there are.

@rajveermalviya this PR implements auto-layouts; it uses wgpuComputePipelineGetBindGroupLayout() to get the automatically generated layout objects, but how should we know how many there are? I'm not seeing a way to get this from the shader module object. Does webgpu.h perhaps need something like wgpuComputePipelineGetBindGroupLayoutCount()?

dskkato and others added 6 commits November 19, 2023 16:38
Co-authored-by: Almar Klein <almar@almarklein.org>
- always call the underlying API, instead of holding handles
- it is now quite unsafe, since there is no way to raise an exception: wgpu-native aborts immediately if the given index is out of range
@dskkato (Contributor, Author) commented Nov 20, 2023

I've added two commits on my end.

The first one is related to the implementation of get_bind_group_layout. The previous implementation assumed that the layout is provided when the pipeline is constructed, but in the 'auto' layout mode, access is only possible after pipeline construction. Therefore, I modified get_bind_group_layout to call wgpuComputePipelineGetBindGroupLayout every time. This results in the instantiation of a new GPUBindGroupLayout instance each time, but since it aligns with the WebGPU spec, I believe this implementation is acceptable.
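A minimal sketch of that approach (hypothetical names in wgpu-py's internal style; not the actual diff):

```python
def get_bind_group_layout(self, index):
    # Ask wgpu-native on every call, instead of returning a handle that
    # was cached at pipeline construction; with layout="auto" the
    # layouts only exist after the pipeline has been created.
    if isinstance(self, GPUComputePipeline):
        layout_id = libf.wgpuComputePipelineGetBindGroupLayout(
            self._internal, index
        )
    else:
        layout_id = libf.wgpuRenderPipelineGetBindGroupLayout(
            self._internal, index
        )
    # A fresh GPUBindGroupLayout wrapper is returned each time, which
    # the WebGPU spec permits.
    return GPUBindGroupLayout("", layout_id, self._device, [])
```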

One concern is the current implementation of wgpuComputePipelineGetBindGroupLayout in wgpu-native, which immediately aborts if an out-of-range index is accessed. It would be beneficial if we could either retrieve the number of bind group layouts (e.g. via something like wgpuComputePipelineGetBindGroupLayoutCount), or handle errors on the caller side.

https://www.w3.org/TR/webgpu/#dom-gpupipelinebase-getbindgrouplayout

The second commit removed the unused _bindings, similar to _layouts. Since that deletion could be done without modifying the tests, I thought it would be fine, but it seemed to break tests_mem, so I reverted it.

Let me know if you have any comments or concerns.

@dskkato marked this pull request as ready for review on November 20, 2023 15:26
@dskkato requested a review from Korijn as a code owner on November 20, 2023 15:26
@almarklein (Collaborator) left a comment

I think this behavior is acceptable. Part of me would like the same object to be returned on multiple calls to get_bind_group_layout(), but that would complicate the code, and it indeed looks like the WebGPU spec does not require it. So let's go with this approach. For extra context, also see this comment:

# Note: wgpu-core re-uses BindGroupLayouts with the same (or similar
# enough) descriptor. You would think that this means that the id is
# the same when you call wgpuDeviceCreateBindGroupLayout with the same
# input, but it's not. So we cannot let wgpu-native/core decide when
# to re-use a BindGroupLayout. I don't feel confident checking here
# whether a BindGroupLayout can be re-used, so we simply don't. Higher
# level code can sometimes make this decision because it knows the app
# logic.
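Concretely, the accepted behavior is as follows (a sketch, assuming a pipeline created with layout="auto"):

```python
bgl_a = pipeline.get_bind_group_layout(0)
bgl_b = pipeline.get_bind_group_layout(0)
# Distinct Python wrappers for the same group index; the WebGPU spec
# does not require the same object to be returned on repeated calls.
assert bgl_a is not bgl_b
```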

Review comment on tests/test_wgpu_native_compute_tex.py (resolved)
Review comment on wgpu/backends/wgpu_native/_api.py (outdated; resolved)
@rajveermalviya

> > As implemented in this PR, it seems to work correctly only when the group_id is 0; otherwise, it crashes.
> >
> > I'll have a look too.
>
> It nearly works, but there is no way (that I know of) to know how many bind groups there are.
>
> @rajveermalviya this PR implements auto-layouts; it uses wgpuComputePipelineGetBindGroupLayout() to get the automatically generated layout objects, but how should we know how many there are? I'm not seeing a way to get this from the shader module object. Does webgpu.h perhaps need something like wgpuComputePipelineGetBindGroupLayoutCount()?

IMO this seems like a good feature request for the JS Spec, getBindGroupLayout was proposed here - gpuweb/gpuweb#446

> One concern is the current implementation of wgpuComputePipelineGetBindGroupLayout in wgpu-native, which immediately aborts if an out-of-range index is accessed. It would be beneficial if we could either retrieve the number of bind group layouts (e.g. via something like wgpuComputePipelineGetBindGroupLayoutCount), or handle errors on the caller side.

Please file an issue about this over at gfx-rs/wgpu-native

@almarklein (Collaborator)

@rajveermalviya thanks for the response.

With the current solution I don't think we need wgpuComputePipelineGetBindGroupLayoutCount(), because the application code is (probably) aware of the number of bind groups. So what remains is failing more gracefully when an invalid index is given by accident, which can indeed be solved in the implementations of wgpuComputePipelineGetBindGroupLayout and wgpuRenderPipelineGetBindGroupLayout in wgpu-native.
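For example (a sketch; the count comes from the application's own shader source, not from any API):

```python
# The application knows how many @group(...) declarations its shader
# uses, so it can query each generated layout without a count API.
num_bind_groups = 2  # known from the shader source
layouts = [pipeline.get_bind_group_layout(i) for i in range(num_bind_groups)]
```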

almarklein previously approved these changes Nov 22, 2023

@almarklein (Collaborator) left a comment

LGTM!

For the moment the panic-on-invalid-index is unfortunate, but not enough of an issue to hold off this PR.

In the meantime, I created a PR to let wgpu-native produce more friendly errors: gfx-rs/wgpu-native#320

@Korijn merged commit 25cf7dc into pygfx:main on Nov 23, 2023
19 checks passed
@dskkato deleted the allow_auto_for_layout_args branch on November 23, 2023 22:25