Currently we generate code to convert floating point values to integers
using constructor-style casts in HLSL, static_cast in MSL, and
OpConvertFToS/OpConvertFToU instructions in SPV. Unfortunately the
behaviour of these operations is undefined when the original value is
outside of the range of the target type.
This patch avoids undefined behaviour by first clamping the value to
be inside the target type's range, then performing the cast.
Additionally, we specifically clamp to the minimum and maximum values
that are exactly representable in both the original and the target
type, as per the WGSL spec[1]. Note that these may not be the same as
the minimum and maximum values of the target type.
We additionally must ensure we clamp in the same manner for
conversions during const evaluation. Lastly, although not part of the
WGSL spec, we do the same for casting from F64 and/or to I64 or U64.
[1] https://www.w3.org/TR/WGSL/#floating-point-conversion
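The clamp-then-cast pattern described above can be sketched as follows (hypothetical Rust, not naga's actual generated code). For f32 to i32, the upper clamp bound is the largest f32 not exceeding i32::MAX, namely 2147483520.0, while i32::MIN is itself exactly representable in f32:

```rust
// Largest f32 that is <= i32::MAX and exactly representable in f32
// (f32 values near 2^31 are spaced 128 apart, so this is 2^31 - 128):
const MAX_F32_IN_I32: f32 = 2147483520.0;
// i32::MIN (-2^31) is a power of two, so it is exactly representable in f32:
const MIN_F32_IN_I32: f32 = -2147483648.0;

// Hypothetical helper, not naga's API: clamp to the range representable
// in both types, then cast.
fn f32_to_i32_clamped(x: f32) -> i32 {
    x.clamp(MIN_F32_IN_I32, MAX_F32_IN_I32) as i32
}
```

Values outside the representable range clamp to the bounds, so the subsequent cast is always in range.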
With the only caveat that device creation will now panic if the `wgsl` feature is not enabled, `InstanceFlags::VALIDATION_INDIRECT_CALL` is set and the device supports `DownlevelFlags::INDIRECT_EXECUTION`.
This avoids a panic due to f16::to_u32()/f16::to_u64() returning None
when the value of the f16 is <= -1.0. The correct behaviour when
converting from a floating point to an integer when the value is out
of range is to clamp to the nearest value that is representable in
both the source and destination type. ie zero for negative numbers.
Require that the `level` operand of an `ImageQuery::Size` expression
is `i32` or `u32`, per spec.
Without this fix, the following WGSL:
@group(0) @binding(0)
var t: texture_1d<f32>;

fn f() -> u32 {
    return textureDimensions(t, false);
}
produces the following invalid HLSL:
Texture1D<float4> t : register(t0);

uint NagaMipDimensions1D(Texture1D<float4> tex, uint mip_level)
{
    uint4 ret;
    tex.GetDimensions(mip_level, ret.x, ret.y);
    return ret.x;
}

uint f()
{
    return NagaMipDimensions1D(t, false);
}
Express the most negative value of an integer type as the second-most negative value minus 1.
The most negative value of an integer type is not directly expressible
in WGSL, as it relies on applying the unary negation operator to a
value which is one larger than the largest value representable by the
type.
To avoid this issue for i32, we negate the required value as an
AbstractInt before converting to i32. AbstractInt, being 64 bits, is
capable of representing the maximum i32 value + 1.
However, for i64 this is not the case. Instead, this patch expresses
the minimum i64 value as the second-most negative i64 value
minus 1, ie `-9223372036854775807li - 1li`, thereby avoiding the
issue.
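The same limitation exists in Rust itself, which makes for a convenient illustration: the literal `9223372036854775808` overflows i64 before the unary minus is applied, so i64::MIN must likewise be written as the second-most negative value minus 1.

```rust
// `let bad: i64 = -9223372036854775808;` fails to compile: the literal
// exceeds i64::MAX before negation. The workaround mirrors the one in
// the patch:
fn min_i64() -> i64 {
    -9223372036854775807i64 - 1
}
```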
Add support for `naga::ir::MathFunction::Cross` to
`naga::proc::constant_evaluator`.
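A scalar sketch of the cross product the constant evaluator can now compute at compile time (illustrative only, not naga's actual code):

```rust
// Cross product of two 3-component vectors:
// result[i] = a[(i+1)%3] * b[(i+2)%3] - a[(i+2)%3] * b[(i+1)%3]
fn cross(a: [f64; 3], b: [f64; 3]) -> [f64; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}
```

As the updated test input exercises, the cross product of the x and y unit vectors is the z unit vector.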
In the tests:
- Change `naga/tests/in/wgsl/cross.wgsl` to use more interesting
argument values. Rather than passing the same vector twice, which
yields a cross product of zero, pass in the x and y unit vectors,
whose cross product is the z unit vector. Update snapshot output.
- Replace `validation::bad_cross_builtin_args` with a new test,
`builtin_cross_product_args`, that is written more in the style of
the other tests in this module, and does not depend on the WGSL
front end. Because this PR changes the behavior of the constant
evaluator, this test stopped behaving as expected.
- In `wgsl_errors::check`, move a `panic!` out of a closure so that
the `#[track_caller]` attribute works properly.
[naga spv-out msl-out hlsl-out] Make infinite loop workaround count down instead of up
To avoid generating code containing infinite loops, and therefore
incurring the wrath of undefined behaviour, we insert a counter into
each loop that will break after 2^64 iterations. This was previously
implemented as two u32 variables counting up from zero.
We have been informed that this construct can cause certain Intel
drivers to hang. Instead, we must count down from u32::MAX. Counting
down is more fun, anyway.
Co-authored-by: Erich Gubler <erichdongubler@gmail.com>
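The two-counter countdown can be sketched in Rust (hypothetical code, not naga's emitted shader code), scaled down to u8 counters so it can actually run to completion; with two u8 counters the guard fires after 2^16 - 1 iterations, just as the real two-u32 version fires after roughly 2^64:

```rust
// Simulate the injected loop guard with two u8 counters counting down
// from u8::MAX, returning how many iterations ran before the break.
fn bounded_iterations() -> u64 {
    let (mut hi, mut lo) = (u8::MAX, u8::MAX);
    let mut iterations: u64 = 0;
    loop {
        // The injected guard: decrement, and break once both counters
        // have reached zero.
        if lo == 0 {
            if hi == 0 {
                break;
            }
            hi -= 1;
            lo = u8::MAX;
        } else {
            lo -= 1;
        }
        iterations += 1;
        // ... the original loop body would go here ...
    }
    iterations
}
```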
And from either AbstractFloat or AbstractInt to bool.
AbstractFloat to integer was presumably not implemented because *automatic*
conversion between those types is not allowed. However, explicit
conversions (for example `i32(1.0)`) are allowed, and are implemented
during const evaluation using the same code path in
ConstantEvaluator::cast().
The integer conversion constructors treat AbstractFloats as any other
floating point type, meaning we must follow the scalar floating point
to integral conversion algorithm [1]. This specifies:
To convert a floating point scalar value X to an integer scalar
type T:
* If X is a NaN, the result is an indeterminate value in T.
* If X is exactly representable in the target type T, then the
result is that value.
* Otherwise, the result is the value in T closest to truncate(X)
and also exactly representable in the original floating point
type.
Fortunately a Rust cast satisfies all of these conditions apart from the
requirement that the result be exactly representable in the original
floating point type. However, as i32::MIN, i32::MAX, u32::MIN, and
u32::MAX are all representable by f64 (the underlying type of
AbstractFloat), this is not an issue.
For i64 and u64 a Rust cast will not meet that requirement, but as
these types are not present in the WGSL spec we are free to ignore
that.
[1] https://www.w3.org/TR/WGSL/#floating-point-conversion
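The relevant Rust cast behaviour can be illustrated directly (Rust `as` casts from float to int have saturated since 1.45): they truncate toward zero, saturate at the target's bounds, and map NaN to zero, which satisfies the algorithm above whenever the target's bounds are representable in the source type.

```rust
fn cast_properties() {
    assert_eq!(1e300f64 as i32, i32::MAX); // saturates at the upper bound
    assert_eq!(-1e300f64 as i32, i32::MIN); // saturates at the lower bound
    assert_eq!(f64::NAN as i32, 0); // NaN maps to zero, an allowed indeterminate value
    assert_eq!((-1.9f64) as i32, -1); // truncation toward zero
    assert_eq!(i32::MIN as f64 as i32, i32::MIN); // i32 bounds round-trip exactly through f64
}
```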
A vector constructor with multiple arguments is represented in IR as a
Compose expression. These can be nested, for example due to code such
as this: `vec4(1, vec3(2, vec2(3, 4)))`. The total number of
components of each argument must match the vector size, meaning
`vec4(1, 2, 3, 4, 5)` is invalid. The validation pass already catches
such occurrences. However, validation runs after constant evaluation,
meaning this can cause issues if the expression is evaluated as part
of a const expression.
When applying an operation to a vector compose during const
evaluation, we typically "flatten" the vector using the function
`proc::flatten_compose()`. This silently truncates the list of
components to the size implied by the type, meaning invalid code is
accepted.
This patch validates that the total number of components in a Compose
expression, once flattened, matches the size implied by the type. We
do so once as each expression is registered to avoid needing to handle
this each time the expression is used in a const expression.
For nested compose expressions, the inner expressions will be registered
before the outer ones. This means we can trust that the size implied
by the inner expression's type is correct, as it will have already
been validated, and can therefore avoid recursing through each nested
expression when registering the outer expression.
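The shape of the check can be sketched as follows (hypothetical helper names, not naga's actual API): sum the component counts of the arguments and compare against the size implied by the result type.

```rust
// Hypothetical sketch of the component-count check. Each entry in
// `arg_sizes` is the (already validated) component count of one
// Compose argument; `expected` is the size implied by the result type.
fn validate_compose(arg_sizes: &[u32], expected: u32) -> Result<(), String> {
    let total: u32 = arg_sizes.iter().sum();
    if total == expected {
        Ok(())
    } else {
        Err(format!("expected {expected} components, found {total}"))
    }
}

// vec4(1, vec3(2, vec2(3, 4))) -> argument sizes [1, 3], total 4: ok.
// vec4(1, 2, 3, 4, 5)          -> argument sizes [1; 5], total 5: error.
```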
Add a new method to the `naga::common::wgsl::TryToWgsl` trait,
`to_wgsl_for_diagnostics`, which always returns a `String`, falling
back to the `Debug` representation of the type if necessary.
Provide a custom implementation for `Scalar` that shows abstract types
in a reasonable way.
Provide default implementations for the `write_non_wgsl_inner` and
`write_non_wgsl_scalar` methods of `naga::common::wgsl::TypeContext`.
We will be implementing this trait for various types in the WGSL front
end and the validator, and almost all of these applications will need
basically identical implementations of these methods.
Add new default methods, `type_to_string`, `type_inner_to_string`, and
`resolution_to_string`, to the `naga::common::wgsl::TypeContext`
trait, for generating WGSL source strings from these types.
In `naga::common::wgsl::types`, move the type parameter `<W: Write>`
from the `TypeContext` trait itself to the individual methods of the
trait. It is very unlikely that Naga would ever need to implement
`TypeContext` for only one particular kind of output stream.
The motivation for this change is that the old parameterization makes
it awkward to provide utility methods for generating `String`s on the
trait, which we do in subsequent commits. In order to write to a
`String`, such utility methods need `Self` to implement
`TypeContext<String>`, so you can add bounds to the methods like this:
    fn type_to_string(...) -> String
    where Self: TypeContext<String>
    {
        ... self.write(..., &mut string_buf)?;
    }
That will compile, but if you try to actually call it, Rust gets
confused. Remember, the above is not a method on
`TypeContext<String>`, it's a method on `TypeContext<W>` that uses
`TypeContext<String>` internally. So when you write
`ctx.type_to_string(...)`, Rust needs to decide whether `ctx`
implements `TypeContext<W>` for some completely unspecified `W`, and
asks for type annotations.
You could supply type annotations, but this would be basically
supplying some never-used type that implements `core::fmt::Write`.
Instead of
ctx.type_to_string(handle)
you'd have to write
TypeContext::<String>::type_to_string(ctx, handle)
which is dumb.
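The new shape avoids the problem entirely: with the type parameter on the method rather than the trait, `type_to_string` needs no annotations at the call site. A minimal sketch (hypothetical names, not naga's actual trait):

```rust
use core::fmt::Write;

// The type parameter lives on the writing method, not the trait itself.
trait TypeContext {
    fn write_type<W: Write>(&self, out: &mut W);

    // Default utility method: write into a String and return it.
    fn type_to_string(&self) -> String {
        let mut buf = String::new();
        self.write_type(&mut buf);
        buf
    }
}

struct Demo;

impl TypeContext for Demo {
    fn write_type<W: Write>(&self, out: &mut W) {
        let _ = write!(out, "vec4<f32>");
    }
}
```

Calling `Demo.type_to_string()` compiles and infers everything, since there is only one `TypeContext` implementation in play.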
(I don't *think* this explanation belongs in the code, since it's an
explanation of a design *not* used, replaced by a design that's pretty
natural --- so I'll leave it here.)