
import {DeviceTabs} from '@site/src/react-luma';
import {HelloCubeExample} from '@site';

# Hello Cube

<DeviceTabs />
<HelloCubeExample />

In this tutorial, we'll pull together several of the techniques we've looked at
in the previous tutorials (and add a few new ones) to render a more complex scene:
a rotating 3D cube. We'll use luma.gl's built-in geometry primitives to create
a cube mesh and handle 3D math using [math.gl](https://math.gl/).

**math.gl** can be installed by running `npm i @math.gl/core`.

As always, we'll start with our imports:

```typescript
import {AnimationLoop, Model, CubeGeometry} from '@luma.gl/engine';
import {clear, setParameters} from '@luma.gl/webgl';
import {GL} from '@luma.gl/constants'; // WebGL constants such as GL.LEQUAL, used below
import {Matrix4} from '@math.gl/core';
```

Our shaders are somewhat more involved than those we've seen in previous tutorials:

```typescript
const vs = `\
attribute vec3 positions;
attribute vec2 texCoords;
uniform mat4 uMVP;
varying vec2 vUV;

void main(void) {
  gl_Position = uMVP * vec4(positions, 1.0);
  vUV = texCoords;
}
`;

const fs = `\
precision highp float;
uniform sampler2D uTexture;
uniform vec3 uEyePosition;
varying vec2 vUV;

void main(void) {
  gl_FragColor = texture2D(uTexture, vec2(vUV.x, 1.0 - vUV.y));
}
`;
```

Compared to the shaders we've seen before, the two biggest additions are transforming
the positions to rotate our model and create the 3D perspective effect
(via the `uMVP` matrix), and sampling a texture to color each fragment (via the `texture2D` call).

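To make the `uMVP` matrix a little less abstract, here is a minimal, illustrative sketch of how such a model-view-projection matrix can be composed on the CPU with math.gl. The `near`/`far` planes and the rotation angles below are arbitrary values chosen for the example; in the actual tutorial code the aspect ratio and the rotation are driven by the animation loop:

```typescript
import {Matrix4} from '@math.gl/core';

// Illustrative values only; the tutorial derives `aspect` from the canvas and
// the rotation angles from the animation loop's `tick` counter.
const aspect = 16 / 9;
const rotationX = 0.4; // radians
const rotationY = 0.5; // radians

// Projection: a perspective frustum with a 60° vertical field of view.
const projectionMatrix = new Matrix4().perspective({fovy: Math.PI / 3, aspect, near: 0.1, far: 100});

// View: a "camera" placed 5 units back along the z axis, looking at the origin.
const viewMatrix = new Matrix4().lookAt({eye: [0, 0, 5]});

// Model: rotate the cube around the X and Y axes.
const modelMatrix = new Matrix4().rotateX(rotationX).rotateY(rotationY);

// uMVP = projection * view * model, so the vertex shader can move each vertex
// into clip space with a single matrix multiply: gl_Position = uMVP * position.
const uMVP = projectionMatrix.clone().multiplyRight(viewMatrix).multiplyRight(modelMatrix);
```

The `onRender` method later in this tutorial builds the same product as a single method chain on one reusable `Matrix4` instance, which avoids allocating new matrices every frame.
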
The setup for rendering in 3D involves a few extra steps compared to the triangles we've been drawing so far:

```typescript
onInitialize({device}) {
  setParameters(device, {
    depthTest: true,
    depthFunc: GL.LEQUAL
  });

  const texture = device.createTexture({
    data: 'vis-logo.png'
  });

  const eyePosition = [0, 0, 5];
  const viewMatrix = new Matrix4().lookAt({eye: eyePosition});
  const mvpMatrix = new Matrix4();

  const model = new Model(device, {
    vs,
    fs,
    geometry: new CubeGeometry(),
    uniforms: {
      uTexture: texture
    }
  });

  return {model, viewMatrix, mvpMatrix};
}
```

Some of the new techniques we're leveraging here are:

- Using `setParameters` to set up depth testing and ensure surfaces occlude each other properly. Compared to setting these parameters directly, the `setParameters` function has the advantage of tracking state and preventing redundant WebGL calls.
- Creating a texture using the `device.createTexture` method. For our purposes, this is as simple as passing a URL to the image location (the image used in this tutorial is available [here](https://github.com/visgl/luma.gl/tree/master/examples/api/cubemap/vis-logo.png), but any JPEG or PNG image will do). A rough sketch of what this URL shorthand replaces appears after this list.
- Creating view and MVP matrices using **math.gl**'s `Matrix4` class to store the matrices we'll pass to our shaders to perform the animation and perspective projection.
- Generating attribute data using the `CubeGeometry` class and passing it to our `Model` using the `geometry` property. The geometry will automatically feed vertex position data into the `positions` attribute and texture coordinates (or UV coordinates) into the `texCoords` attribute.

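To make the texture point above concrete, here is a rough sketch of what passing a URL to `device.createTexture` is shorthand for. The helper name `loadImageTexture` is made up for illustration, and the sketch assumes that the `data` option also accepts already-decoded image data (such as an `ImageBitmap`) rather than only a URL string:

```typescript
// Hypothetical helper for illustration: fetch the image ourselves, decode it,
// and hand the pixels to the device instead of letting luma.gl do the loading.
async function loadImageTexture(device, url: string) {
  const response = await fetch(url);
  const image = await createImageBitmap(await response.blob());

  // Assumption: `data` can be a decoded image as well as a URL string.
  return device.createTexture({data: image});
}
```
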
Our `onRender` method is similar to what we've seen before, with the extra step of setting up the transform matrix and passing it as a uniform to the `Model`:

```typescript
onRender({device, aspect, tick, model, mvpMatrix, viewMatrix}) {
  mvpMatrix
    .perspective({fovy: Math.PI / 3, aspect})
    .multiplyRight(viewMatrix)
    .rotateX(tick * 0.01)
    .rotateY(tick * 0.013);

  clear(device, {color: [0, 0, 0, 1]});

  model.setUniforms({uMVP: mvpMatrix}).draw();
}
```

We use `Matrix4`'s matrix operations to create our final transformation matrix, taking advantage of a few additional parameters that are passed to the `onRender` method:

- `aspect` is the aspect ratio of the canvas and is used to set up the perspective projection.
- `tick` is simply a counter that increments each frame. We use it to drive the rotation animation (a quick estimate of the resulting rotation speed follows this list).

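As a rough sanity check on how fast the cube spins, assuming the loop runs at about 60 frames per second (the real rate depends on the display):

```typescript
// Back-of-the-envelope rotation speed, assuming ~60 frames per second.
const FPS = 60;

const degreesPerSecondX = 0.01 * FPS * (180 / Math.PI);  // ≈ 34°/s around X
const degreesPerSecondY = 0.013 * FPS * (180 / Math.PI); // ≈ 45°/s around Y
```
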
If all went well, you should see a rotating cube with the vis.gl logo painted on each side. The full source code is listed below for reference:

```typescript
import {AnimationLoop, Model, CubeGeometry} from '@luma.gl/engine';
import {clear, setParameters} from '@luma.gl/webgl';
import {GL} from '@luma.gl/constants';
import {Matrix4} from '@math.gl/core';

const vs = `\
attribute vec3 positions;
attribute vec2 texCoords;
uniform mat4 uMVP;
varying vec2 vUV;

void main(void) {
  gl_Position = uMVP * vec4(positions, 1.0);
  vUV = texCoords;
}
`;

const fs = `\
precision highp float;
uniform sampler2D uTexture;
uniform vec3 uEyePosition;
varying vec2 vUV;

void main(void) {
  gl_FragColor = texture2D(uTexture, vec2(vUV.x, 1.0 - vUV.y));
}
`;

const loop = new AnimationLoop({
  onInitialize({device}) {
    setParameters(device, {
      depthTest: true,
      depthFunc: GL.LEQUAL
    });

    const texture = device.createTexture({data: 'vis-logo.png'});

    const eyePosition = [0, 0, 5];
    const viewMatrix = new Matrix4().lookAt({eye: eyePosition});
    const mvpMatrix = new Matrix4();

    const model = new Model(device, {
      vs,
      fs,
      geometry: new CubeGeometry(),
      uniforms: {
        uTexture: texture
      }
    });

    return {model, viewMatrix, mvpMatrix};
  },

  onRender({device, aspect, tick, model, mvpMatrix, viewMatrix}) {
    mvpMatrix
      .perspective({fovy: Math.PI / 3, aspect})
      .multiplyRight(viewMatrix)
      .rotateX(tick * 0.01)
      .rotateY(tick * 0.013);

    clear(device, {color: [0, 0, 0, 1]});

    model.setUniforms({uMVP: mvpMatrix}).draw();
  }
});

loop.start();
```