`textureSampleBaseClampToEdge()` is valid when called on a
`texture_external` texture, but all other sample operations are
invalid. Previously, validation succeeded for these operations on an
external texture whenever the same operation would have been allowed
on a `texture_2d<f32>`. We must therefore explicitly check for
`ImageClass::External` and disallow the invalid sample operations.
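A hedged sketch of the shape of this check; the enum mirrors naga's
`ImageClass` (variant payloads omitted), and the function signature is
invented for illustration rather than taken from naga's validator:

```rust
// Simplified stand-in for naga's `ImageClass` (payloads omitted).
enum ImageClass {
    Sampled,
    Depth,
    Storage,
    External,
}

// Reject any sample operation on an external texture other than the
// one lowered from textureSampleBaseClampToEdge().
fn validate_sample_op(class: &ImageClass, is_base_clamp_to_edge: bool) -> Result<(), String> {
    if matches!(class, ImageClass::External) && !is_base_clamp_to_edge {
        return Err(
            "only textureSampleBaseClampToEdge() may sample a texture_external".into(),
        );
    }
    Ok(())
}
```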
It makes sense for a function to require `FnMut + Send`, or `Fn + Send +
Sync`, but not `Fn + Send`: without `Sync` the callee cannot share the
closure across threads anyway, so requiring `Fn` rather than `FnMut`
restricts the caller without gaining the callee anything.
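A short sketch of the two combinations that do make sense; the helper
functions are invented for illustration:

```rust
use std::sync::Arc;
use std::thread;

// `FnMut + Send`: the callee calls the closure from one thread at a
// time, so callers may pass closures that capture mutable state.
fn run_on_worker<F: FnMut() + Send + 'static>(mut f: F) {
    thread::spawn(move || f()).join().unwrap();
}

// `Fn + Send + Sync`: the callee shares the closure between threads,
// which requires calling through a shared reference (`Fn`) and
// cross-thread sharing (`Sync`).
fn run_on_workers<F: Fn() + Send + Sync + 'static>(f: F) {
    let f = Arc::new(f);
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let f = Arc::clone(&f);
            thread::spawn(move || f())
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}

// `Fn + Send` (without `Sync`) would force callers to give up mutable
// captures while still not letting the callee share the closure.
```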
P010 is a 4:2:0 chroma subsampled planar format, similar to NV12. Each
component uses 16 bits of storage, of which only the high 10 bits are
used. On DX12 this maps to `DXGI_FORMAT_P010`, and on Vulkan to
`VK_FORMAT_G10X6_B10X6R10X6_2PLANE_420_UNORM_3PACK16`.
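A minimal sketch of creating such a texture, assuming the format
surfaces as `wgpu::TextureFormat::P010` alongside the existing
`TextureFormat::NV12` (the variant name is this sketch's assumption);
4:2:0 subsampling requires even width and height:

```rust
fn create_p010_texture(device: &wgpu::Device) -> wgpu::Texture {
    device.create_texture(&wgpu::TextureDescriptor {
        label: Some("p010 video frame"),
        size: wgpu::Extent3d {
            width: 1920,  // must be even for 4:2:0 subsampling
            height: 1080, // must be even for 4:2:0 subsampling
            depth_or_array_layers: 1,
        },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format: wgpu::TextureFormat::P010, // hypothetical variant name
        usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
        view_formats: &[],
    })
}
```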
The existing "nv12" gpu test module has been renamed to
"planar_texture", and a new test P010_TEXTURE_CREATION_SAMPLING has
been added similar to the existing NV12_TEXTURE_CREATION_SAMPLING. The
remaining tests in this module have been converted to validation tests,
and now test both NV12 and P010 formats.
These tests cover the external texture binding resource. They ensure
the WGSL functions `textureDimensions()`, `textureLoad()`, and
`textureSampleBaseClampToEdge()` work as expected for both
`TextureView`s and `ExternalTexture`s bound to external texture
resource bindings.
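A hedged sketch (not the actual test code) of a WGSL shader exercising
all three functions against an external texture binding, embedded the
way wgpu tests typically embed shaders:

```rust
fn external_texture_shader(device: &wgpu::Device) -> wgpu::ShaderModule {
    device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("external texture sampling"),
        source: wgpu::ShaderSource::Wgsl(
            r#"
            @group(0) @binding(0) var tex: texture_external;
            @group(0) @binding(1) var samp: sampler;

            @fragment
            fn fs_main(@builtin(position) pos: vec4f) -> @location(0) vec4f {
                // Dimensions, a direct texel load, and a clamped sample.
                let dims = textureDimensions(tex);
                let texel = textureLoad(tex, vec2u(pos.xy));
                let sampled = textureSampleBaseClampToEdge(tex, samp, pos.xy / vec2f(dims));
                return texel + sampled;
            }
            "#
            .into(),
        ),
    })
}
```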
For external textures, they ensure that multiplanar YUV formats work
as expected, including color space transformation, and that the
provided sample and load transforms correctly handle cropping,
flipping, and rotation.
These existing tests cover `Queue::copy_external_image_to_texture()`,
which, despite the similar name, is unrelated to the external texture
binding resource.
The following patch will add tests for the latter, so we take this
opportunity to rename the former's tests to help avoid confusion.
This implements the DX12 HAL part of external texture support, which is
the final piece of the puzzle for external textures on DirectX 12.
When creating a pipeline layout, the HAL is responsible for mapping a
single `BindingType::ExternalTexture` bind group entry to multiple
descriptor ranges in the root signature it creates: 3 SRVs (one for
each texture plane) and a CBV for the parameters buffer. We must also
expose these additional bindings to the Naga backend via the
`external_texture_binding_map`. Lastly, when creating a bind group we
write the descriptors for each of these bindings to the heap, as
sketched below.
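A hedged sketch of that expansion using the `windows` crate's
Direct3D12 types; the helper function and register assignment are
invented for illustration and are not wgpu-hal's actual code:

```rust
use windows::Win32::Graphics::Direct3D12::{
    D3D12_DESCRIPTOR_RANGE, D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND,
    D3D12_DESCRIPTOR_RANGE_TYPE_CBV, D3D12_DESCRIPTOR_RANGE_TYPE_SRV,
};

// Expand one external texture bind group entry into the descriptor
// ranges it occupies in the root signature.
fn external_texture_ranges(base_register: u32, space: u32) -> Vec<D3D12_DESCRIPTOR_RANGE> {
    let mut ranges = Vec::new();
    // One SRV per texture plane (t registers).
    for plane in 0..3 {
        ranges.push(D3D12_DESCRIPTOR_RANGE {
            RangeType: D3D12_DESCRIPTOR_RANGE_TYPE_SRV,
            NumDescriptors: 1,
            BaseShaderRegister: base_register + plane,
            RegisterSpace: space,
            OffsetInDescriptorsFromTableStart: D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND,
        });
    }
    // One CBV for the parameters buffer (b register).
    ranges.push(D3D12_DESCRIPTOR_RANGE {
        RangeType: D3D12_DESCRIPTOR_RANGE_TYPE_CBV,
        NumDescriptors: 1,
        BaseShaderRegister: base_register,
        RegisterSpace: space,
        OffsetInDescriptorsFromTableStart: D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND,
    });
    ranges
}
```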
And with that, we can finally enable the `EXTERNAL_TEXTURE` feature
for the dx12 backend.
The WebGPU spec for `createBindGroup` [states][spec-ref] (emphasis mine):
> Device timeline initialization steps:
>
> …
>
> 2. If any of the following conditions are unsatisfied generate a
> validation error, invalidate _bindGroup_ and return.
>
> …
>
> For each `GPUBindGroupEntry` _bindingDescriptor_ in
> _descriptor_.`entries`:
>
> - …
>
> - If the defined binding member for _layoutBinding_ is:
>
> - …
>
> - `buffer`
>
> - …
>
> - If _layoutBinding_.`buffer`.`type` is
>
> - …
>
> - `"storage"` or `"read-only-storage"`
>
> - …
>
> - effective buffer binding size(_bufferBinding_) _is a multiple of 4_.
[spec-ref]: https://www.w3.org/TR/webgpu/#dom-gpudevice-createbindgroup
We were not implementing this check on the effective buffer binding
size. This change adds the check that it is a multiple of 4; CTS
tests, including
`webgpu:api,validation,createBindGroup:buffer,effective_buffer_binding_size:*`,
confirm that this is now implemented as intended.
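A hedged sketch of the added check (illustrative names, not wgpu's
actual internals): the rule applies only to `"storage"` and
`"read-only-storage"` bindings.

```rust
// For storage and read-only-storage bindings, the effective buffer
// binding size must be a multiple of 4, per the spec step quoted above.
fn check_effective_binding_size(
    effective_size: u64,
    is_storage_binding: bool,
) -> Result<(), String> {
    if is_storage_binding && effective_size % 4 != 0 {
        return Err(format!(
            "effective buffer binding size {effective_size} is not a multiple of 4"
        ));
    }
    Ok(())
}
```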
This adds several fields to `ExternalTextureDescriptor`, specifying
how to handle color space conversion for an external texture. These
fields consist of transfer functions for the source and destination
color spaces, and a matrix for converting between gamuts. This allows
`ImageSample` and `ImageLoad` operations on external textures to
return values in a desired destination color space rather than the
source color space of the underlying planes.
These fields are plumbed through to the `ExternalTextureParams`
uniform buffer from which they are exposed to the shader. Following
conversion from YUV to RGB after sampling/loading from the external
texture planes, the shader uses them to gamma decode to linear RGB in
the source color space, convert from source to destination gamut, then
finally gamma encode to non-linear RGB in the destination color space.
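An illustrative CPU-side sketch of that conversion order (the actual
conversion runs in the generated shader code; all names here are
invented):

```rust
// Convert one non-linear RGB value from the source color space to
// non-linear RGB in the destination color space.
fn convert_color(
    rgb_src: [f32; 3],
    src_decode: impl Fn(f32) -> f32, // source transfer function (to linear)
    gamut: [[f32; 3]; 3],            // source-to-destination gamut matrix
    dst_encode: impl Fn(f32) -> f32, // destination transfer function (to non-linear)
) -> [f32; 3] {
    // 1. Gamma decode to linear RGB in the source color space.
    let lin = rgb_src.map(&src_decode);
    // 2. Convert from the source gamut to the destination gamut.
    let mut out = [0.0f32; 3];
    for i in 0..3 {
        out[i] = gamut[i][0] * lin[0] + gamut[i][1] * lin[1] + gamut[i][2] * lin[2];
    }
    // 3. Gamma encode to non-linear RGB in the destination color space.
    out.map(dst_encode)
}
```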