Closes #17448
Closes #13133
This PR adds a new Oxide target for `wasm32-wasip1-threads`:
`@tailwindcss/oxide-wasm32-wasi`. The goal is to enable more
environments to run Oxide, including (but not limited to) StackBlitz.
We're making use of `napi-rs`'s upcoming v3 features to simplify the
setup here, meaning `napi-rs` will configure the WASM target and create
an npm package that works across Node and browser environments.
## macOS AArch64 issues
While setting up an integration test for the new WASM target, I ran into
an issue where FS reads were not terminating on macOS. After some
research, I found this to be a current limitation of the Node.js WASI
implementation; see: https://github.com/nodejs/node/issues/47193
### Windows issues
We also found that the Node.js WASI implementation does not properly support
Windows: https://github.com/nodejs/uvwasi/issues/11
For now, it's probably best for macOS AArch64 users and Windows users
to use the native modules instead.
## Test plan
The `@tailwindcss/oxide-wasm32-wasi` npm package can be built locally
via `pnpm build` and then run with the Oxide API. A usage example can be
taken from the newly added integration test.
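For reference, this is roughly what usage could look like (a sketch assuming the WASM package mirrors the native `@tailwindcss/oxide` `Scanner` API; check the integration test for the exact shape):

```js
// Minimal sketch: scan a directory for candidate class names via the WASM build.
import { Scanner } from '@tailwindcss/oxide-wasm32-wasi'

let scanner = new Scanner({ sources: [{ base: './src', pattern: '**/*' }] })
let candidates = scanner.scan()
console.log(candidates) // candidate class names found in ./src
```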
Furthermore, this was tested to work as a polyfill on StackBlitz:
https://stackblitz.com/edit/vitejs-vite-uks3gt5p
[ci-all]
---------
Co-authored-by: Robin Malfait <malfait.robin@gmail.com>
This PR is a follow-up to:
https://github.com/tailwindlabs/tailwindcss/pull/17433
In that PR we allow scanning CSS files to extract usages of CSS
variables. This is important for `.module.css` files that reference
these variables but aren't part of the same AST as the main CSS file.
This PR also makes sure to watch for changes in those registered CSS
files and re-extract the variables when they change.
This PR took a bit longer than expected because I was trying to make
sure that writing to `./dist/out.css` works without infinite-looping
(e.g. we had issues with this in Tailwind CSS v3 with webpack).
But I couldn't reproduce the issue at all. I did have some code that
tried to detect whether the CSS file contained license headers and skip it
(because then it's very likely an output CSS file), but even without it
the tests were fine.
I set up integration tests with `@tailwindcss/cli` itself and with tools
that use webpack: a test for Next.js and a dedicated webpack test.
Even locally, outside of the tests, I couldn't reproduce an infinite loop
caused by changes to an output CSS file, so I eventually dropped the code
that tries to detect output CSS files.
One thing to keep in mind is that if you change any of your "main" CSS
files, then we will trigger a full rebuild anyway, so this change is
only required for unrelated CSS files (like CSS module files) that use
CSS variables.
## Test plan
1. Added integration tests for the CLI and Next.js
2. Added new dedicated test for webpack
This PR adds a new source detection feature: `@source not "…"`. It can
be used to exclude files specifically from your source configuration
without having to think about creating a rule that matches all but the
requested file:
```css
@import "tailwindcss";
@source not "../src/my-tailwind-js-plugin.js";
```
While working on this feature, we noticed that there are multiple places
where we used different heuristics to scan the file system. These are:
- Auto source detection (i.e. the default configuration or an `@source
"./my-dir"`)
- Custom sources (e.g. `@source "./**/*.bin"` — these contain file
extensions)
- The code to detect updates on the file system
Because of the different heuristics, we were able to construct failing
cases (e.g. when you create a new file in `my-dir` that would normally be
thrown out by auto source detection, it would actually be scanned). We
were also leaving a lot of performance on the table, as the file system
was traversed multiple times for certain problems.
To resolve these issues, we're now unifying all of these systems into
one walker setup based on the `ignore` crate. We also implemented features
like auto source detection and the `not` flag purely as additional
_gitignore_ rules, avoiding the need for a lot of custom decision-making code.
At a high level, this is what happens now:
- We collect all non-negative `@source` rules into a list of _roots_
(the base directory of a rule) and optional _globs_ (the actual
file-matching rules within that root). For custom sources (i.e. ones
with a custom glob), we add an allowlist rule to the gitignore setup
so that we can be sure these files are always included.
- For every negative `@source` rule, we create a respective ignore rule.
- Furthermore, we have a custom filter that ensures files are only re-read
if they have changed since the last time they were read.
So, consider the following setup:
```css
/* packages/web/src/index.css */
@import "tailwindcss";
@source "../../lib/ui/**/*.bin";
@source not "../../lib/ui/expensive.bin";
```
This creates a gitignore file that (simplified) looks like this:
```gitignore
# Auto-source rules
*.{exe,node,bin,…}
*.{css,scss,sass,…}
{node_modules,git}/
# Custom sources can overwrite auto-source rules
!lib/ui/**/*.bin
# Negative rules
lib/ui/expensive.bin
```
We then use this information _on top of your existing `.gitignore`
setup_ to resolve files (i.e. if your `.gitignore` contains a rule like
`dist/`, that line is added _before_ any of the rules laid out in the
example above). This allows our rules to allow-list files that your
`.gitignore` rules would otherwise ignore.
To implement this, we rely on the `ignore` crate, but we had to make
various very specific changes to it, so we decided to fork the crate.
All changes are prefixed with a `// CHANGED:` comment, but here are the
most important ones:
- We added a way to add custom ignore rules that _extend_ (rather than
overwrite) your existing `.gitignore` rules.
- We updated the order in which files are resolved and made it so that
more-specific files can allow-list more generic ignore rules.
- We resolved various issues related to adding more than one base path
to the traversal and ensured it works consistently on Linux, macOS, and
Windows.
## Behavioral changes
1. Any custom glob defined via `@source` now wins over your `.gitignore`
file and the auto-content rules.
   - Resolves #16920
2. The `node_modules` and `.git` folders as well as the `.gitignore`
file are now ignored by default (but can be overridden by an explicit
`@source` rule).
   - Resolves #17318
   - Resolves #15882
3. Source paths into ignored-by-default folders (like `node_modules`)
now also win over your `.gitignore` configuration and the auto-content
rules.
   - Resolves #16669
4. Introduced `@source not "…"` to negate any previous rules.
   - Resolves #17058
5. Negative `content` rules in your legacy JavaScript configuration
(e.g. `content: ['!./src']`) now work with v4.
   - Resolves #15943
6. The order of `@source` definitions matters now, because you can
technically include or negate previous rules. This is similar to your
`.gitignore` file.
7. Rebuilds in watch mode now take the `@source` configuration into
account.
   - Resolves #15684
## Combining with other features
Note that the `not` flag is also already compatible with [`@source
inline(…)`](https://github.com/tailwindlabs/tailwindcss/pull/17147)
added in an earlier commit:
```css
@import "tailwindcss";
@source not inline("container");
```
## Test plan
- We added a bunch of Oxide unit tests to ensure that the right files
are scanned.
- We updated the existing integration tests with new `@source not "…"`
specific examples and updated the existing tests to match the subtle
behavior changes.
- We also added a new special tag `[ci-all]` that, when added to the
description of a PR, causes unit and integration tests to run on
all operating systems.
[ci-all]
---------
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
When Tailwind is loaded in a Node Worker thread, it currently causes a
segmentation fault on Linux when the thread exits. This is due to a
longstanding issue in Rust that affects all native modules:
https://github.com/rust-lang/rust/issues/91979. I reported this years
ago but unfortunately it is still not fixed, and it seems to have gotten
worse in Rust 1.83.0 and later. It looks like Tailwind recently updated
its Rust version, and this issue started appearing when Tailwind is run
in tools like Parcel that use worker threads.
The workaround is to prevent the native module from ever being unloaded.
One way to do that is to always load the native module in the main
thread in addition to workers, but this is hard to enforce.
@Brooooooklyn found another method, which is to use a linker option for
this. I tested this on an Ubuntu system and verified that it fixes the issue.
You can test with the following scripts:
```js
// test.js
const { Worker } = require('worker_threads');
new Worker('./worker.js');
```

```js
// worker.js
require('@tailwindcss/oxide');
```
Without this change, a segmentation fault will occur.
---------
Co-authored-by: Jordan Pittman <jordan@cryptica.me>
This PR improves the integration tests in two ways:
1. Make the integration tests more reliable and thus less flaky
2. Make the integration tests faster (by introducing concurrency)
I tried a lot of different things to make sure that these tests are fast
and stable.
---
The biggest issue we noticed is that some tests are flaky; these are
tests with long-running dev-mode processes where watchers are used
and/or dev servers are created.
To solve this, all the tests that spawn a process look at stdout/stderr
and wait for a message from the process to know whether we can start
making changes.
For example, in the case of an Astro project, you get a `watching for file
changes` message. In the case of a Nuxt project you can wait for a `server
warmed up in` message, and in the case of Next.js there is a `Ready in`
message. These depend on the tools being used, so this is hardcoded per
test instead of being a magically automatic solution.
These messages allow us to wait until all of the necessary initial work is
done and the internal watchers and/or dev servers are set up, so we don't
start making changes to files or requesting CSS stylesheets before the
server(s) are ready.
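As a rough sketch of the idea (the helper name is made up, not the actual test code):

```js
// Minimal sketch: resolve once the spawned process prints a known "ready" message.
// `child` is a Node.js ChildProcess, `pattern` is e.g. /Ready in/ for Next.js.
function waitForReady(child, pattern) {
  return new Promise((resolve) => {
    function onData(chunk) {
      if (pattern.test(chunk.toString())) {
        child.stdout.off('data', onData)
        child.stderr.off('data', onData)
        resolve()
      }
    }
    child.stdout.on('data', onData)
    child.stderr.on('data', onData)
  })
}
```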
---
Another improvement is how we set up the dev servers. Before, we used to
try to get a free port on the system and use a `--port` flag or a
`PORT` environment variable. Instead of doing this (which is slow), we
rely on the process itself to print a URL with a port. Basically all
tools will try to find a free port if the default port is in use. We can
then use the stdout/stderr messages to get the URL and the port to use.
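Roughly, the URL extraction looks something like this (a simplified sketch, not the exact implementation):

```js
// Minimal sketch: strip ANSI escapes from a line of output and pull out the dev server URL.
const ANSI = /\x1b\[[0-9;]*m/g // simplified; only covers color codes

function extractUrl(line) {
  let match = line.replace(ANSI, '').match(/https?:\/\/\S+/)
  return match ? new URL(match[0]) : null
}

// extractUrl('  ➜  Local:   http://localhost:5173/')?.port // → '5173'
```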
Previously, to reduce the number of potential port conflicts, we ran
every test and every file sequentially to basically guarantee that ports
were free. With this new approach, where we rely on the process itself,
I noticed that we don't really run into this issue anymore (I reran the
tests multiple times and they were always stable).
<img width="316" alt="image"
src="https://github.com/user-attachments/assets/b75ddab4-f919-4995-85d0-f212b603e5c2"
/>
Note: these tests run on Linux, Windows, and macOS in this branch just for
testing purposes. Once this is done, we will only run Linux tests on PRs
and run all three operating systems on the `next` branch.
We do make the tests concurrent by default now, which in theory means
that there could be conflicts (which in practice means that a process
has to make a few more attempts to find a free port). To reduce these
conflicts, we split up the integration tests such that the Vite, PostCSS,
CLI, … tests all run in a separate job in the GitHub Actions workflow.
<img width="312" alt="image"
src="https://github.com/user-attachments/assets/fe9a58a1-98eb-4d9b-8845-a7c8a7af5766"
/>
Comparing this branch against the `next` branch, this is what CI looks
like right now:
| `next` | `feat/improve-integration-tests` |
| --- | --- |
| <img width="594" alt="image"
src="https://github.com/user-attachments/assets/540d21eb-ab03-42e8-9f6f-b3a071fc7635"
/> | <img width="672" alt="image"
src="https://github.com/user-attachments/assets/8ef2e891-08a1-464b-9954-4153174ebce7"
/> |
There was also a point in time where I introduced sequential tests such
that all spawned processes still ran after each other, but so far I
haven't run into issues when keeping them concurrent, so I dropped that
code.
Some small changes I made to make things more reliable:
1. When relying on stdout/stderr messages, we split lines on `\n` and
strip all ANSI escapes, which allows us to not worry about special
ANSI characters when finding the URL or a specific message to wait for.
2. Once a test is done, we `child.kill()` the spawned process. If that
doesn't work for whatever reason, we run `child.kill('SIGKILL')` to
force kill the process (see the sketch after this list). This could
technically lead to some memory or files not being cleaned up properly,
but once CI is done, everything is thrown away anyway.
3. As you can see in the screenshots, I used some nicer names for the
workflows.
| `next` | `feat/improve-integration-tests` |
| --- | --- |
| <img width="276" alt="image"
src="https://github.com/user-attachments/assets/e574bb53-e21b-4619-9cdb-515431b255b9"
/> | <img width="179" alt="image"
src="https://github.com/user-attachments/assets/8bc75119-fb91-4500-a1d0-bd09f74c93ad"
/> |
They also look a bit nicer in the PR overview as well:
<img width="929" alt="image"
src="https://github.com/user-attachments/assets/04fc71fc-74b0-4e7c-9047-2aada664efef"
/>
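For point 2 above, the kill-then-force-kill fallback looks roughly like this (a sketch with made-up names, not the actual test helper):

```js
// Minimal sketch: stop a spawned child process, falling back to SIGKILL if it doesn't exit in time.
function stop(child, timeout = 5000) {
  return new Promise((resolve) => {
    child.once('exit', resolve)
    child.kill()
    setTimeout(() => {
      if (child.exitCode === null && child.signalCode === null) {
        child.kill('SIGKILL') // force kill as a last resort
      }
    }, timeout).unref()
  })
}
```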
The very last commit just filters out Windows and macOS tests again for
PRs (but they are still executed on the `next` branch).
---
### Next steps
I think we are in a pretty good state for now, but there are some things
we could do to further improve everything (mainly to make things faster)
that aren't strictly necessary. I also ran into issues while trying them,
so there is more work to do.
1. More splits — instead of having a Vite folder and PostCSS folder, we
can go a step further and have folders for Next.js, Astro, Nuxt, Remix,
…
2. Caching — right now we have to run the build step for every OS on
every "job". We can re-use the work here by introducing a setup job that
the other jobs rely on. @thecrypticace and I tried it already, but were
running into some Bun specific Standalone CLI issues when doing that.
3. Remote caching — we could re-enable remote caching such that the
`build` step can be full turbo (e.g. after a PR is merged into `next` and
we run everything again)
* separate `stable` and `oxide` mode (package.json in this case)
* drop `install` script (we use a workspace now)
* change required engine to 16
* enable OXIDE by default
* ignore generated `oxide` files
* split up package.json scripts into "public" and "private" scripts
Not ideal of course, but this should make it a tiny bit easier to know
which scripts _you_ as a developer / contributor have to run.
* drop `workspaces` from the `stable` engine
* drop `oxide` related build files from the `stable` engine
* drop `oxide` engine specific dependencies from the `stable` engine
* use the `oxide-node-api-shim` for the `stable` engine
* add little script to swap the engines
* drop `oxide:build` from `turbo` config
* configure `ci` for `stable` and `oxide` engines
- rename `nodejs.yml` -> `ci.yml`
- add `ci-stable.yml` (for stable mode and Node 12)
- ensure to use the `stable` engine in the `ci-stable.yml` workflow
- drop `oxide:___` specific scripts
* rename `release-insiders` to `release-insiders-stable`
This way we will be able to remove all files that contain `stable` once
we are ready.
* rename `release-insiders-oxide` to just `release-insiders`
* cleanup insider related workflows
* rename `release` -> `release-stable`
* rename `release-oxide` -> `release`
* change names of release workflows
* drop `oxide-` prefix from jobs
* inline node versions
* do not use `turbo` for the stable build
Can't use it because we don't have a workspace in the stable build.
* re-rename CI workflow
* encode default engine in relevant `package.json` files
* make Node 12 work
* increase `node-version` matrix
* make release workflows explicit (per engine)
* add `Oxide` to workflow name
* add integration tests for the `oxide` engine
* add integration tests for the `stable` engine
* run `oxide` integrations against node `18`
* run `stable` integration tests against node 18
We should test Node 12 for tailwindcss, but the integrations themselves
can run against a newer version. In fact, we always ran them against Node 16.
* use `localhost` instead of `0.0.0.0`
* ensure `webpack-4` works on Node 18
* run release scripts directly
Instead of going via `npm`. It's a bit nicer and quicker!
* drop unused scripts
* sync package-lock.json
* ensure to generate the plugin list before running `jest`
We _could_ use an npm `pretest` script, but then you can't run `jest`
directly anymore (which is required for some tools like VS Code
extensions).
* cleanup npm scripts
* drop pretend comments
* fix typo
* add `build:rust` as a pre-jest run script
* temporarily disable workflows
* add oxide
Our Rust related parts
* use oxide
- Set up the codebase to be able to use the Rust parts based on an
environment variable: `OXIDE=1`.
- Set up some tests that run both the non-Rust and Rust version in the
same test.
- Sort the candidates in a consistent way, to guarantee the order for
now (especially in tests).
- Reflect sorting related changes in tests.
- Ensure tests run in both the Rust and non-Rust version. (Some tests
are explicitly skipped when using the Rust version since we haven't
implemented those features yet. These include: custom prefix,
transformers and extractors).
- `jest`
- `OXIDE=1 jest`
* remove into_par_iter where it doesn't make sense
* cargo fmt
* wip
* enable tracing based on `DEBUG` env
* improve CI for the Oxide build
* sort test output
This happened because the sorting happens in this branch, but changes
happened on the `master` branch.
* add failing tests
I noticed that some of the tests were failing, and while looking at
them, it turned out this happened because the tests were structured like this:
```html
<div
class="
backdrop-filter
backdrop-filter-none
backdrop-blur-lg
backdrop-brightness-50
backdrop-contrast-0
backdrop-grayscale
backdrop-hue-rotate-90
backdrop-invert
backdrop-opacity-75
backdrop-saturate-150
backdrop-sepia
"
></div>
```
This means that the class names themselves eventually end up like this:
`backdrop-filter-none\n` (notice the `\n`).
/cc @thecrypticace
* fix range to include `\n`
* Include only unique values for tests
Really, what we care about most is that the list contains every expected candidate, not necessarily how many times each one shows up, because while many candidates will show up A LOT in a source text, we'll unique them before passing them back to anything that needs them.
* Fix failing tests
* Don’t match empty arbitrary values
* skip tests in oxide mode regarding custom separators in arbitrary variants
* re-enable workflows
* use `@tailwindcss/oxide` dependency
* publish `tailwindcss@oxide`
* drop prepublishOnly
I don't think we actually need this anymore (or even want it, because it
tries to do things in CI that we don't want to happen, i.e. build the
Oxide Rust code, which is already a dependency).
* WIP
* Defer to existing CLI for Oxide
* Include new compiled typescript stuff when publishing
* Move TS to ./src/oxide
* Update scripts
* Clean up tests for TS
* copy `cli` to `oxide/cli`
* make CLI files TypeScript files
* drop --postcss flag
* setup lightningcss
* Remove autoprefixer and cssnano from oxide CLI
* cleanup Rust code a little bit
- Drop commented out code
- Drop 500 fixture templates
* sort test output
* re-add `prepublishOnly` script
* bump SWC dependencies in package-lock.json
* pin `@swc` dependencies
* ensure to install and build oxide
* update all GitHub Workflows to reflect Oxide required changes
* sort `content-resolution` integration tests
* add `Release Insiders — Oxide`
* setup turbo repo + remote caching
* use `npx` to invoke `turbo`
* setup unique/proper package names for integration tests
* add missing `isomorphic-fetch` dependency
* setup integration tests to use `turborepo`
* scope tailwind tasks to root workspace
* re-enable `node_modules` cache for integration tests
* re-enable `node_modules` cache for main CI workflow
* split cache for `main` and `oxide` node_modules
* fix indent
* split install dependencies so that they can be cached individually
* improve GitHub actions caching
* use correct path for oxide node_modules (crates/node)
* ensure that `cargo install` always succeeds
On CI, `cargo install X` will fail if it is already installed.
* figure out integration tests with turbo
* tmp: use `npm` instead of `turbo`
* disable `fail-fast`
This will allow the integration tests to keep running so that the
successful ones are still cached.
* YAML OH YAML, Y U WHITESPACE SENSITIVE
* copy the oxide-ci workflow to release-oxide
* make `oxide-ci` a normal CI workflow
Without publishing
* try to cache cargo and node_modules for the oxide build
* configure turbo to run scripts in the root
* explicitly skip failing test for the Oxide version
* run oxide tests in CI
* only use build script for root package
* sync package-lock.json
* do not cache node_modules for each individual integration
* look for hoisted `.bin`
* use turbo for caching build tailwind css in integration tests
* Robin...
* try to use the local binary first
* skip installing integration test dependencies
Should already be installed due to workspace usage
* Robin...
* drop `output.clean`
* explicitly add `mini-css-extract-plugin`
* drop oxide-ci, this is tested by proxy
* ensure oxide build is used in integration tests
This will ensure the `@tailwindcss/oxide` dependency is available
(whether we use it or not).
* setup Oxide shim in insiders release
* add browserslist dependency
* use `install:all` script name
A script just named `install` will be called automatically when running
`npm install`.
Now that we marked the repo as a `workspace`, `npm install` would run the
install script in all workspaces, which is... not ideal.
* tmp: enable insiders release in PRs
Just to check if everything works before merging. Can be removed once
tested.
* don't cache node_modules?
I feel there is some catch-22 going on here.
We require `npm install` to build the `oxide/crates/node` version.
But we also require `oxide/crates/node` for the `npm install` because of
the dependency: `"@tailwindcss/oxide": "file:oxide/crates/node"`
* try to use `oxide/crates/node` as part of the workspace
* let's think about this
Let's try to cache the `node_modules` and share as much as possible.
However, some dependencies still need to be installed specifically for the OS.
Running `npm install` locally doesn't throw away your `node_modules`,
so if we just cache `node_modules` but also run `npm install`, that
should keep as much as possible and still improve install times since
`node_modules` is already there.
I think.
* ensure generated `index.js` and `index.d.ts` files are considered outputs
* use `npx napi` instead of `napi` directly
* include all `package-lock.json` files
* normalize caching further in all workflows
* drop nested `package-lock.json` files
* `npm uninstall mini-css-extract-plugin && npm install mini-css-extract-plugin --save-dev`
* bump webpack-5 integration tests dependencies
* only release insiders on `master` branch
* tmp: let's figure out release insiders oxide
* fix little typo
* use Node 18 for Oxide Insiders
* syncup package-lock.json
* let's try node 16
Node 18 currently fails in the `Build x86_64-unknown-linux-gnu (OXIDE)`
workflow.
Install Node.js output:
```
Environment details
Warning: /__t/node/18.13.0/x64/bin/node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /__t/node/18.13.0/x64/bin/node)
/__t/node/18.13.0/x64/bin/node: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /__t/node/18.13.0/x64/bin/node)
/__t/node/18.13.0/x64/bin/node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /__t/node/18.13.0/x64/bin/node)
/__t/node/18.13.0/x64/bin/node: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /__t/node/18.13.0/x64/bin/node)
/__t/node/18.13.0/x64/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /__t/node/18.13.0/x64/bin/node)
/__t/node/18.13.0/x64/bin/node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /__t/node/18.13.0/x64/bin/node)
Warning: node: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by node)
node: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by node)
node: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by node)
node: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by node)
node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by node)
node: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by node)
```
* bump some Node versions
* only release oxide insiders on `master` branch
* don't cache `npm`
* bump napi-rs
Co-authored-by: Jordan Pittman <jordan@cryptica.me>
Co-authored-by: Adam Wathan <4323180+adamwathan@users.noreply.github.com>
* remove unnecessary download links
GitHub already shows them in a table right below it.
* detach `npm run style` from `npm run test`
* decouple lint from test in workflows
Which means that we don't need to do the crazy linking in certain
workflows.
* hoist the `CI` environment variable
* create dedicated `lint` job
The `lint` job will run against source files and should not be
dependent on a specific Node version. Instead of running `npm run
style` on every Node version we use, we can and should only run it once.
* remove `prettier-plugin-tailwindcss`
As long as we use older versions of Node/npm where we can't have
ourselves as a dependency, it is a bit of a mess to maintain properly
sorted HTML in tests.
Let's remove it for now until we have a better solution!
* calculate tag for the release
This is based on the name used in the version, e.g. `3.2.0-beta.2`. If
no name can be found in the version, we default to `latest`.
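The idea, sketched (not the exact script):

```js
// Minimal sketch: derive the npm dist-tag from the package version.
// '3.2.0-beta.2' → 'beta', '3.2.0' → 'latest'
function tagFromVersion(version) {
  let match = version.match(/-([a-z]+)/i)
  return match ? match[1] : 'latest'
}
```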
* ignore node_modules caching for now
* Update lockfile
* Tweak formatting
* Refactor content path parsing
* Allow resolving content paths relative to the config file
* Include resolved symlinks as additional content paths
* Update changelog
* Work on suite of tests for content resolution
* reformat integration test list
* Move content resolution tests to integration
* Update future and experimental types
Restrict the GitHub token permissions to just what is required and make them read-only where possible.
Signed-off-by: neilnaveen <42328488+neilnaveen@users.noreply.github.com>
* introduce `public` folder
This can contain all of the `public` functions we want to expose.
This will be a bit nicer, for example, when you want to use
internal/private functions (we use some in the VS Code IntelliSense
plugin).
* use public `resolveConfig` function
* expose resolveConfig in the root
This will use the resolveConfig we expose from the `public` folder. We
can probably generate these as well.
* make `colors` public
* make `default config` public
* make `default theme` public
* make `create plugin` public
* update to public paths
* remove `@tailwindcss/aspect-ratio` from tests
This should be tested in its own repo instead.
* remove `@tailwindcss/aspect-ratio` as a dependency
* drop `Build` step from CI
The build step is not a prerequisite anymore for running the tests. When
we want to release a new (insiders) release, the `prepublishOnly` step
will be executed for us.
Before this change, it would have been executed twice:
- Once before the tests
- Once before the actual release
* improve paths for caching purposes
* add pretest script for generating the plugin list
Now that we can use `SWC`, automatically generating the plugin list
before running the tests is super fast and you don't even have to think
about it anymore!
* setup integration tests
* fix rgb color syntax
* ensure integration tests always exit
If the integration tests fail for any reason, they will run forever
on CI (~2 hours or something). The `--forceExit` flag is not ideal, but it
will prevent long-running processes.
* fix incorrect test
We were never properly waiting for the command to finish.
* handle AbortError properly
In CI, when an AbortController gets aborted, an error is thrown
(AbortError). If we don't catch it properly, it will "leak" and the
test will fail.
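A minimal sketch of the pattern (illustrative, not the exact test code):

```js
// Minimal sketch: ignore the AbortError thrown when a request is aborted on purpose.
async function fetchStyles(url, signal) {
  try {
    return await fetch(url, { signal })
  } catch (err) {
    if (err.name === 'AbortError') return null // aborted on purpose, not a failure
    throw err
  }
}
```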
* improve IO functions
* quit integration tests after 10 seconds
* only test a few integrations
* test all integrations using matrix
This will cancel other builds when one fails. It will also separate the
output per integration, which can be useful, especially now that we are
still figuring things out.
* rename `build` to `test`
* add --verbose flag to receive output in the console
* when reading stdout or stderr, wait a certain amount of time to ensure stability
Debouncing for 200ms means that if another message comes in within those
200ms, we delay the execution of the callback.
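In code, the idea is roughly this (a sketch with made-up names):

```js
// Minimal sketch: debounce output handling so the callback only runs once the
// stream has been quiet for 200ms.
function debounce(fn, ms = 200) {
  let timer
  return (...args) => {
    clearTimeout(timer)
    timer = setTimeout(() => fn(...args), ms)
  }
}

// child.stdout.on('data', debounce((chunk) => handleOutput(chunk.toString())))
```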
* simplify workflow
* use terminal output instead of disk events
* cache node_modules for integrations
* empty commit, to test cache hits