There is a potential corner case where we may end up with the wal2json library in `postgres/wal2json/VERSION/file` but not in `postgres/wal2json/wal2json.so`.
I am not sure exactly how likely this is, but technically it is possible for the download to succeed while `cp "../postgres/wal2json/$VERSION/$FILE_NAME" "$FILE_TO_USE"` fails; on the next attempt, the copy would not be retried.
This fix ensures the copy always happens.
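A minimal sketch of the intended flow, assuming install.sh is shaped roughly like this (`ARCHIVE_FILE` and `download_wal2json` are illustrative names; `VERSION`, `FILE_NAME`, and `FILE_TO_USE` are the variables quoted above):

```bash
# Illustrative sketch only -- not the actual install.sh.
ARCHIVE_FILE="../postgres/wal2json/$VERSION/$FILE_NAME"
FILE_TO_USE="../postgres/wal2json/wal2json.so"

# Download only when the versioned copy is missing ...
if [[ ! -f "$ARCHIVE_FILE" ]]; then
  mkdir -p "$(dirname "$ARCHIVE_FILE")"
  download_wal2json "$ARCHIVE_FILE"  # hypothetical helper standing in for the real download
fi

# ... but copy unconditionally, so a run where the download succeeded and
# the copy failed is repaired on the next attempt.
cp "$ARCHIVE_FILE" "$FILE_TO_USE"
```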
Fixes #1009 by partially reverting #1002. We need to make a 21.6.2 release soon and I didn't have time to dig into why Kafka upgrades were failing, so reverting for safety for now.
This adds a breaking change notice to our changelog regarding custom plugins:
The frontend bundle will be loaded asynchronously. This is a breaking change that can affect custom plugins that access certain globals in the django template. Please see https://forum.sentry.io/t/breaking-frontend-changes-for-custom-plugins/14184 for more information
This PR attempts to update most of the middleware used by Sentry to the latest stable versions.
[As mentioned in the forum](https://forum.sentry.io/t/middleware-version-compatibility/14353/2), I did not update PostgreSQL & ClickHouse due to known issues.
I also:
- changed versions to immutable tags (MAJOR.MINOR.PATCH semver versions when possible); a quick way to review the resolved tags is sketched below.
- changed nginx to the Alpine variant
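For reviewers, one quick local check (not part of the compose files themselves) is to render the merged configuration and eyeball the image references, confirming nothing is left on a mutable tag such as `latest`:

```bash
# Render the merged compose configuration and list every image reference;
# each entry should carry an explicit, pinned tag rather than :latest or no tag.
docker-compose config | grep 'image:'
```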
@billyvg has a breaking change coming up soon so let's use this as an opportunity to add a changelog. This changelog will only capture important announcements (such as this breaking change over at Sentry) and changes in this repo for now.
We will use Change Data Capture to stream WAL updates from postgres into clickhouse so that features like issue search will be able to join event data and metadata (from postgres) through Snuba.
This requires the following:
- A logical replication plugin installed in postgres (https://github.com/getsentry/wal2json)
- A service that streams from the replication log to Kafka (https://github.com/getsentry/cdc)
- Datasets in Snuba.
This PR is preparing postgres to stream updates via the replication log.
The idea is to:
- download the replication log plugin binary during install.sh
- mount a volume with the binary when starting postgres
- provide a new entrypoint for postgres that ensures everything is correctly configured (a rough sketch follows this list).
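A rough sketch of what such an entrypoint could look like; the mount path `/opt/wal2json`, the library directory, and the exact settings are assumptions for illustration rather than the final script:

```bash
#!/usr/bin/env bash
set -e

# Put the mounted plugin where postgres loads extensions from
# (/usr/lib/postgresql/9.6/lib is the library dir on the Debian-based 9.6 image).
cp /opt/wal2json/wal2json.so /usr/lib/postgresql/9.6/lib/wal2json.so

# Hand off to the stock entrypoint with logical replication enabled,
# which is what wal2json needs in order to emit changes.
exec docker-entrypoint.sh postgres \
  -c wal_level=logical \
  -c max_replication_slots=1 \
  -c max_wal_senders=1
```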
There is a difference between how this is set up and how we do the same in the development environment.
In the development environment we download the library from the entrypoint itself and store it in a persistent volume, so we do not have to download it every time.
Unfortunately this does not work here: the development environment uses the postgres:9.6-alpine image, while this repo uses postgres:9.6, which does not come with either wget or curl. I don't think installing one of them in the entrypoint would be a good idea, so the download happens in install.sh. I actually think this way is safer, since we never depend on connectivity for postgres to start properly.
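Since the download now runs on the host from install.sh, it can simply use whichever HTTP client the host provides. A hedged sketch (`ARCHIVE_FILE` as in the earlier snippet, `DOWNLOAD_URL` an illustrative placeholder):

```bash
# Fetch the plugin on the host; the postgres container never needs
# network access for this. DOWNLOAD_URL is an illustrative placeholder.
if command -v wget >/dev/null 2>&1; then
  wget -O "$ARCHIVE_FILE" "$DOWNLOAD_URL"
elif command -v curl >/dev/null 2>&1; then
  curl -L --fail -o "$ARCHIVE_FILE" "$DOWNLOAD_URL"
else
  echo "Neither wget nor curl is available; cannot download wal2json." >&2
  exit 1
fi
```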
Add basic healthchecks to the Zookeeper & Kafka containers so we have a view of container status. These checks are quite basic because I have no deep knowledge of these components; the kind of commands they run is sketched below.
Co-authored-by: Sébastien Pierre <spi@dfakto.com>
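For illustration, the `test` commands behind those healthchecks amount to something like the following (assuming default ports and that `nc` is available in the images; treat this as a sketch, not the exact compose configuration):

```bash
# Zookeeper: the client port accepts connections once the server is up.
nc -z localhost 2181

# Kafka: the broker listener is open once the broker has started.
# A stricter check could list topics, e.g.:
#   kafka-topics --bootstrap-server localhost:9092 --list
nc -z localhost 9092
```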
This is a stop-gap solution to #918 until we figure out the negative DNS caching issue inside `relay`. This may also be due to Docker Compose making some assumptions, optimizations, or restrictions around cross-container access unless containers are explicitly linked via the `depends_on` key.