# Bindings
A key responsibility of any GPU API is to enable the application to
set up data so that it can be accessed by shaders. In luma.gl this is done by
binding attribute buffers, uniform buffers, textures and samplers to the shaders.
The terminology can be a little confusing. To make it easy to cross-reference other code and
documentation, luma.gl attempts to roughly follow WebGPU / WGSL conventions. The following terms
are used (the sketch after the list illustrates the difference between layouts and bindings):

- **layouts** - metadata describing a shader's various connection points
- **attribute layout** - metadata describing a shader's attributes
- **attribute buffers** - actual values (GPU buffers) for attributes
- **binding layout** - metadata describing a shader's bindings
- **bindings** - actual values (uniform buffers, textures, samplers) for bindings
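
To illustrate the distinction, here is a rough, hypothetical sketch (not exact luma.gl calls;
`uniformBuffer` and `texture` are assumed to have been created elsewhere): a layout entry is pure
metadata, while the corresponding binding supplies the actual GPU resource for that name.

```typescript
// Layout: metadata only - declares what the shader expects at each location.
const bindingLayout = [
  {name: 'projectionUniforms', location: 0, type: 'uniforms'},
  {name: 'texture', location: 2, type: 'texture'}
];

// Bindings: the actual values (GPU resources) supplied for those names.
// `uniformBuffer` and `texture` are assumed to be created elsewhere.
const bindings = {
  projectionUniforms: uniformBuffer,
  texture: texture
};
```
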
## ShaderLayout
Shader code (whether in WGSL or GLSL) contains declarations of attributes,
uniform blocks, samplers etc.
Collectively, these declarations define the data that must be bound before the
shader can execute on the GPU. Since binding is performed from the CPU, a certain
amount of metadata is needed in JavaScript to describe what data a specific
shader (or pair of shaders) expects.

luma.gl defines the `ShaderLayout` type to collect this description of a shader (or pair of shaders).
A `ShaderLayout` is required when creating a `RenderPipeline` or `ComputePipeline`.
Shaders expose numeric binding locations; in applications, however, named bindings tend to be more
convenient, so a `ShaderLayout` associates a name with each location.

Note: a `ShaderLayout` can be written by hand (by reading the shader code), or generated
automatically, e.g. by parsing the shader source or by using the WebGL program introspection APIs.
```typescript
const shaderLayout: ShaderLayout = {
  attributes: [
    {name: 'instancePositions', location: 0, format: 'float32x2', stepMode: 'instance'},
    {name: 'instanceVelocities', location: 1, format: 'float32x2', stepMode: 'instance'},
    {name: 'vertexPositions', location: 2, format: 'float32x2', stepMode: 'vertex'}
  ],
  bindings: [
    {name: 'projectionUniforms', location: 0, type: 'uniforms'},
    {name: 'textureSampler', location: 1, type: 'sampler'},
    {name: 'texture', location: 2, type: 'texture'}
  ]
};

device.createRenderPipeline({
  shaderLayout,
  attributes, // actual attribute buffers, keyed by attribute name
  bindings    // actual bindings (uniform buffers, textures, samplers), keyed by name
});
```
### Attributes

The `attributes` field declares each attribute's name, its shader `location`, its vertex format,
and whether it steps per vertex or per instance:
```typescript
const shaderLayout: ShaderLayout = {
attributes: [
{name: 'instancePositions', location: 0, format: 'float32x2', stepMode: 'instance'},
{name: 'instanceVelocities', location: 1, format: 'float32x2', stepMode: 'instance'},
{name: 'vertexPositions', location: 2, format: 'float32x2', stepMode: 'vertex'}
],
...
};
```
### Buffer Mapping
Buffer mappings are an optional mechanism enabling more sophisticated GPU buffer layouts,
offering control of GPU buffer offsets, strides, interleaving etc.
Note that buffer mappings are static: they must be defined when a pipeline is created, and all
buffers subsequently supplied to that pipeline need to conform to the mapping.
The `bufferMap` field in the example below specifies that the `instancePositions` and
`instanceVelocities` attributes should both be read, interleaved, from a single buffer named `particles`:
```typescript
const shaderLayout: ShaderLayout = {
attributes: [
{name: 'instancePositions', location: 0, format: 'float32x2', stepMode: 'instance'},
{name: 'instanceVelocities', location: 1, format: 'float32x2', stepMode: 'instance'},
{name: 'vertexPositions', location: 2, format: 'float32x2', stepMode: 'vertex'}
],
...
};
device.createRenderPipeline({
shaderLayout,
// We want to use "non-standard" buffers: two attributes interleaved in same buffer
  bufferMap: [
    {
      name: 'particles',
      attributes: [{name: 'instancePositions'}, {name: 'instanceVelocities'}]
    }
  ],
attributes: {},
bindings: {}
});
```
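
As a rough sketch of how such a pipeline might then be fed (the `createBuffer` and `setAttributes`
calls below are assumptions for illustration, not verified luma.gl signatures), a single interleaved
buffer is supplied under the `particles` name declared in the `bufferMap`:

```typescript
// Interleaved per-instance data matching the bufferMap above:
// [x, y, vx, vy] for each particle.
const particleData = new Float32Array([
  0.0, 0.0,  0.1,  0.2,
  1.0, 1.0, -0.1,  0.0
]);

// Assumption: the device can create a buffer directly from typed-array data.
const particleBuffer = device.createBuffer({data: particleData});

// Assumption: a setAttributes-style call accepts buffers keyed by bufferMap name;
// both instancePositions and instanceVelocities are then read from this one buffer.
renderPipeline.setAttributes({particles: particleBuffer});
```
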
## Model usage

The higher-level `Model` class accepts the same attribute layout metadata in its props:

```typescript
new Model(device, {
  attributeLayout: {
    instancePositions: {location: 0, format: 'float32x2', stepMode: 'instance'},
    instanceVelocities: {location: 1, format: 'float32x2', stepMode: 'instance'},
    vertexPositions: {location: 2, format: 'float32x2', stepMode: 'vertex'}
  }
});
```
WGSL vertex shader:
```rust
struct Uniforms {
  modelViewProjectionMatrix : mat4x4<f32>,
}

@group(0) @binding(0) var<uniform> uniforms : Uniforms; // BINDING 0

struct VertexOutput {
  @builtin(position) Position : vec4<f32>,
  @location(0) fragUV : vec2<f32>,
  @location(1) fragPosition : vec4<f32>,
}

@vertex
fn main(
  @location(0) position : vec4<f32>,
  @location(1) uv : vec2<f32>
) -> VertexOutput {
  var output : VertexOutput;
  output.Position = uniforms.modelViewProjectionMatrix * position;
  output.fragUV = uv;
  output.fragPosition = 0.5 * (position + vec4<f32>(1.0, 1.0, 1.0, 1.0));
  return output;
}
```
WGSL fragment shader:
```rust
@group(0) @binding(1) var mySampler : sampler;          // BINDING 1
@group(0) @binding(2) var myTexture : texture_2d<f32>;  // BINDING 2

@fragment
fn main(
  @location(0) fragUV : vec2<f32>,
  @location(1) fragPosition : vec4<f32>
) -> @location(0) vec4<f32> {
  return textureSample(myTexture, mySampler, fragUV) * fragPosition;
}
```
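
Finally, a hypothetical sketch of how these numeric binding points could be supplied by name from
JavaScript (the `setBindings`-style call and the `model`, `uniformBuffer`, `sampler` and `texture`
variables are assumptions for illustration; exact luma.gl APIs may differ), using the names declared
in the `ShaderLayout` earlier on this page:

```typescript
// Assumption: bindings are supplied by name and resolved against the numeric
// binding points declared in the ShaderLayout / WGSL above.
model.setBindings({
  projectionUniforms: uniformBuffer, // -> @group(0) @binding(0)
  textureSampler: sampler,           // -> @group(0) @binding(1)
  texture: texture                   // -> @group(0) @binding(2)
});
```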