Implementing UINT64_C in GLSL #2284
It might be a specification question. ARB_gpu_shader_int64 says:
The core spec. uses inline notation to say:
But the normative grammar at the end just expects a complete numeric token, not multiple tokens forming a single number. That implies the text above describes how to form a token, and that the core spec is only describing post-preprocessing tokens, not preprocessing tokens. In other words, the GLSL grammar must receive an already fully formed numeric token; there is no suffix parsing at the grammar level. I think that means it's a bug (somewhere) to be error-checking value overflow (from 32 to 64 bits) while preprocessing. However, even with that fixed, token pasting is still involved; I think the missing pieces could be added for this case, since the number's bit pattern is already correct. But in the end, I think the problem will be the specifications themselves, as this requires either
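To make that concrete, here is a minimal sketch of what deferred range checking would allow; the version pragma, extension enable, and variable name are illustrative additions, not part of the report or the spec:

#version 450 core
#extension GL_ARB_gpu_shader_int64 : enable

// If the preprocessor treats 0x12345678912 as an opaque preprocessing
// number, the paste below produces the single token 0x12345678912ul.
#define UINT64_C(x) x##ul

// The GLSL grammar would then receive one fully formed 64-bit literal,
// which is in range for uint64_t.
const uint64_t value = UINT64_C(0x12345678912);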
This would be more consistent with what C/C++ specify?
Yes.
I'd like to reuse a C header file in GLSL that makes use of the UINT64_C() macro. To make this work in GLSL, I'm enabling the GL_ARB_gpu_shader_int64 extension and defining UINT64_C() as follows:
#define UINT64_C(x) x##ul
Unfortunately, when the macro argument is a 64-bit value, the compiler generates the "hexadecimal literal too big" error before the macro is expanded. E.g.:
#define BLAH UINT64_C(0x12345678912);
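A fuller reproduction might look like the sketch below; the version pragma and the main() scaffolding are my own additions for context. The error is reported against the bare literal while it is being lexed, before the ## paste appends the suffix:

#version 450 core
#extension GL_ARB_gpu_shader_int64 : enable

#define UINT64_C(x) x##ul
#define BLAH UINT64_C(0x12345678912)

void main()
{
    // "hexadecimal literal too big" is raised on 0x12345678912 itself,
    // even though the pasted token 0x12345678912ul would be a valid
    // 64-bit literal.
    uint64_t v = BLAH;
}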
Is this a bug or is this the intended behaviour?
Thanks,
Dae