* feat: add Node.js version 25 to CI workflow
https://nodejs.org/en/blog/release/v25.0.0
* chore: update CI workflow to exclude Node.js version '23'
Remove Node.js version '23' from CI workflow.
These object-mode rows don’t include a `length`. Dead code since 721cf56eb331bd35243c1425095b98cf09adf814 (“Rows are now associative arrays rather than straight arrays.”)?
* Make active query a private prop
* Make client.queryQueue private (with deprecation)
* Deprecate some legacy features
* Update packages/pg/lib/client.js
Co-authored-by: Charmander <~@charmander.me>
---------
Co-authored-by: Brian Carlson <brian.carlson@getcruise.com>
Co-authored-by: Charmander <~@charmander.me>
* feat(pg-connection-string): warn if non-standard ssl options are used
In preparation for v3.0.0, we start warning users to be explicit about
the sslmode they want.
* Update index.js
* fix(pg-protocol): specify number of result column format codes
Fixes a bug when using binary format. We must specify both:
- the number of result column format codes
- the result column format codes
The text format case was working by accident. When using text format, the
intention was to set the format code to 0. Instead, the number of result
column format codes was set to 0. This is valid because it indicates
that all result columns should use the default format (text).
When using binary format, the intention was to set the format code to 1.
Instead, we set the number of result column format codes to 1.
Importantly, we never set a result column format code. This caused an
error: 'insufficient data left in message'.
We now always set the number of result column format codes to '1'. The
value of '1' has special meaning:
> or one, in which case the specified format code is applied to all result columns (if any)
We then set a single column format code based on whether the connection
(or query) is set to binary.
Fixes #3487
* fix(pg): use a Buffer when parsing binary
The call to parseArray was not working as expected because the value was
being sent as a string instead of a Buffer. The binary parsers in
pg-types all assume the incoming value is a Buffer.
* Pass callback to client.end
* Add test for pool.end method
* fix: remove excessive _pulseQueue call
* fix: context problem
* fix: test resolve should be called when the last client is removed
* fix: wait for pool.end()
Because when you don't pass a callback to .end(), it always returns a promise
* fix: handle idle timeout test data race
---------
Co-authored-by: Asadbek Raimov <asadbekraimov642@gmail.com>
* change instanceof to isDate
* use both methods to check for valid Date
* add test for PR 2862
* use only isDate(date) in place of instanceof Date
* Extend compatibility of `isDate` use back to Node 8
* Clean up test
---------
Co-authored-by: Charmander <~@charmander.me>
Reviewed-by: Charmander <~@charmander.me>
* Update docs - start
* Add logo & discord
* Start updating docs for esm style imports
* Update docs with logo & info on pooling
* Update more import statements
---------
Co-authored-by: Brian Carlson <brian.carlson@getcruise.com>
Prior to v2.8.0, the parse function was the default when using import.
When esm compatibility was introduced in v2.8.0, there was no default
specified. This broke existing code that relied on that default.
Fixes #3424
Allows users to change the semantics of `sslmode` to be as close as possible to libpq semantics. The opt-in can be enabled using the `useLibpqCompat` parsing option or the non-standard `uselibpqcompat` query string parameter.
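A minimal sketch of the opt-in, assuming `parse` accepts an options object as described here (the connection strings are illustrative):

```js
const { parse } = require('pg-connection-string')

// Opt in via the parsing option…
const viaOption = parse('postgres://user:pass@host/db?sslmode=require', { useLibpqCompat: true })

// …or via the non-standard query string parameter.
const viaParam = parse('postgres://user:pass@host/db?sslmode=require&uselibpqcompat=true')
```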
---------
Co-authored-by: Charmander <~@charmander.me>
Co-authored-by: Herman J. Radtke III <herman@hermanradtke.com>
This reverts commit dcb4257898d1d8d37110a4364922206dad33f9fe.
The change doesn’t fix the bug it claims to (`finally` always runs) and introduces a resource leak if the `ROLLBACK` query fails. The related bug that a broken client can be returned to the pool remains unaffected either way.
* Added support for SCRAM-SHA-256-PLUS i.e. channel binding
* Requested tweaks to channel binding
* Additional tweaks to channel binding
* Fixed lint complaints
* Update packages/pg/lib/crypto/sasl.js
Co-authored-by: Charmander <~@charmander.me>
* Update packages/pg/lib/crypto/sasl.js
Co-authored-by: Charmander <~@charmander.me>
* Update packages/pg/lib/client.js
Co-authored-by: Charmander <~@charmander.me>
* Tweaks to channel binding
* Now using homegrown certificate signature algorithm identification
* Update ssl.mdx with channel binding changes
* Allow for config object being undefined when assigning enableChannelBinding
* Fixed a test failing on an updated error message
* Removed - from hash names like SHA-256 for legacy crypto (Node 14 and below)
* Removed packageManager key from package.json
* Added some SASL/channel binding unit tests
* Added a unit test for continueSession to check expected SASL session data
* Modify tests: don't require channel binding (which cannot then work) if not using SSL
---------
Co-authored-by: Charmander <~@charmander.me>
It looks like this was removed in d615ebee177ed57c7a7df861b1db675c9e0ebb0f while it still had references to it.
Reviewed-by: Charmander <~@charmander.me>
* fix(devcontainer): upgrade node to version 20
* chore: since Windows and Linux use different default line endings, Git may report a large number of modified files that have no differences aside from their line endings.
* test: Actually test split messages in split message parsing test
* cleanup: Fix spelling in tests
* test: Wait on asynchronous tests
* cleanup: Remove unused parameter from test method `BufferList#getByteLength`
If someone did want this functionality, it would be better to use addition separate from the method anyway.
* cleanup: Remove unused test function `BufferList.concat`
* perf(utils): fast prepareValue
This PR adds a performance improvement to prepareValue for non-object values by skipping unnecessary conditions
* fix: lint
* fix: case of undefined
* fix: review
* Handle bad message ordering - make it catchable. Fixes 3174
* Close client in test
* Mess w/ github action settings
* update ci config
* Remove redundant tests
* Update code to use handle error event
* Add tests for commandComplete message being out of order
* Lint fix
* Fix native tests
* Fix lint again...airport computer not my friend
* Not a native issue
Inserted a reminder to install libpq-dev before running `yarn lerna
bootstrap`.
It's pretty straightforward in light of the error messages, but I cloned
the repo twice and failed both times.
* Remove assert from globals
* Remove Client from globals
* Remove global test function
* Remove MemoryStream from globals
* Require assert in SASL integration tests
* Attempt to use a postgres with ssl?
* Use latest image
* Remove connection tests - they test internals that are better covered by testing the client
* refactor: tighten up cloudflare detection
The previous approach to detecting whether to use Cloudflare's sockets was to check for missing polyfills.
But as we improve the polyfills that Wrangler can provide these checks are no longer valid.
Now we just try to use the Cloudflare API first and fall back to Node.js if it is not available.
* fixup! refactor: tighten up cloudflare detection
When enabling this rule, it's recommended to also *disable* the standard `no-unused-vars` rule. Although `no-unused-vars` is not currently enabled, it seems helpful to explicitly disable it here.
See: https://typescript-eslint.io/rules/no-unused-vars/
Co-authored-by: alxndrsn <alxndrsn>
This feature can be used as follows:
```
client.query({ text: 'SELECT 1', queryMode: 'extended' })
```
This will force the query to be sent with parse/bind/execute even when it has no parameters and disallows multiple statements being executed. This can be useful in scenarios where you want to enforce more security & help prevent sql injection attacks...particularly by library authors.
---------
Co-authored-by: alxndrsn <alxndrsn>
Co-authored-by: Brian Carlson <brian.m.carlson@gmail.com>
I didn't do much to "modernize" the pg-native codebase other than running it through the standard eslint --fix that is applied to the rest of the code. There's some easy opportunities there to update it to es6 and so on...it still uses some pretty antiquated coding styles in places. This PR re-introduces the native tests on node v20, and updates test matrix to drop unsupported versions of node & add in node v22.
buffer-writer was replaced with pg-protocol in 3ff91eaa3222657fd51ea463b8086d134a505404, and packet-reader in 520bd3531990f32c3e00b20020c67f6ac6c70261.
* Cursor: avoid closing connection twice if error received after destroy()
Includes test case from @nathanjcochran
* Resolve lint violations
* revert fix to check tests fail without it
* Re-introduce fix
This reverts commit 5f5d42a071e40f8851035dba182642937dd35664.
---------
Co-authored-by: alxndrsn <alxndrsn>
This allows all matrix builds to run, making it easier to understand if a given
failure is specific to nodejs version(s), or more general.
Co-authored-by: alxndrsn <alxndrsn>
This should highlight when the yarn.lock file is out of sync with package.json.
From the docs:
> If you need reproducible dependencies, which is usually the case with the continuous integration systems, you should pass --frozen-lockfile flag.
> - https://classic.yarnpkg.com/lang/en/docs/cli/install/
* Remove unused travis CI config
* Bump eslint and friends
* Fix lint errors after eslint upgrade
* Remove windows and macos from CI workflow as they are actually running linux
Removes the windows and macos matrix from the CI workflow as they were never actually setting
the OS. Both were running against the "ubuntu-latest" OS. Trying to actually use them would
not work either as neither windows nor macos is supported for service containers. A different
means will be needed to test on those platforms. Until that's done, this removes those from
the matrix as we were simply running the same thing 3x for the same node versions.
Previously, if you attempted to pass an array of `Uint8Array` objects to
a prepared statement, it would render each literal numeric value of that
array.
Since `Uint8Array` (and `TypedArray` types) represent views over raw
bytes, ensure these are serialized to Postgres as a byte representation.
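A short sketch of the fixed behavior inside an async function (the table name and cast are hypothetical):

```js
const { Client } = require('pg')

async function main() {
  const client = new Client()
  await client.connect()
  // Each Uint8Array is now serialized as raw bytes (bytea),
  // not as the numeric values of its elements.
  const chunks = [new Uint8Array([1, 2, 3]), new Uint8Array([4, 5, 6])]
  await client.query('INSERT INTO blobs (data) VALUES ($1::bytea[])', [chunks])
  await client.end()
}

main()
```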
* Fail gracefully when connecting to another DBMS vendor
* Make test more flexible. Adjust error wording to match native better.
---------
Co-authored-by: Brian Carlson <brian.m.carlson@gmail.com>
I am running this package in Electron, where the lines between the Node
and browser environments become a bit blurred, and I noticed that the URL
class being used was the one defined by the browser and not Node. Making
an explicit require ensures the correct class is used.
While creating a test for this would be difficult, I think adding an
eslint rule to stop using globally defined objects and require imports
instead would resolve issues like this in the future
* Add failing test for result rows with the same column names
* Fix handling of duplicate column names in results to ensure last value is populated
Fixes handling of result rows that have the same column name duplicated in the results to ensure
that the last value is the one returned to the user. This was the old behavior but was unintentionally
broken when the pre-built object optimization was added.
* Add property usePrebuiltEmptyResultObjects to Query constructor which generates pre-shaped result rows
* Remove option and test for prebuiltEmptyResultObject
* Remove erroneously added newline
* Move all logic for prebuilding objects to Result
* Move prebuilding to addFields
* Use a clone as clone-base
---------
Co-authored-by: HZ111 / Dev2 <hz111@wielick.nl>
If the connection string is something like:
postgresql://demo:password@/postgres?host=localhost&port=26258
Then the port from the query parameters should be used. Previously, the
parsing function would end up with a null port, and the default port
would end up being used by the connection package.
* fix stack traces of query() to include the async context (#1762)
* rename tests so they are actually run
* conditionally only run async stack trace tests on node 16+
* add stack trace to pg-native
---------
Co-authored-by: Charmander <~@charmander.me>
By implementing package.json `exports` we can avoid processing the Cloudflare
specific code, which contains `import ... from "cloudflare:sockets"`, in bundlers such
as Webpack.
If you are bundling for a Worker environment using Webpack then you need to add the
`workerd` condition and ignore `cloudflare:sockets` imports:
**webpack.config.js**
```js
const webpack = require("webpack")

module.exports = {
  resolve: { conditionNames: ["require", "node", "workerd"] },
  plugins: [
    new webpack.IgnorePlugin({
      resourceRegExp: /^cloudflare:sockets$/,
    }),
  ],
}
```
* Document client.escapeIdentifier and client.escapeLiteral
Per #1978 it seems that these client APIs are undocumented. Added documentation for these functions along with some examples and relevant links.
* Fix typos in new docs
* Migrate escapeIdentifier and escapeLiteral from Client to PG
These are standalone utility functions, they do not need a client instance to function.
Changes made:
- Refactored escapeIdentifier and escapeLiteral from client class to functions in utils
- Update PG to export escapeIdentifier and escapeLiteral
- Migrated tests for Client.escapeIdentifier and Client.escapeLiteral to tests for utils
- Updated documentation, added a "utilities" page where these helpers are discussed
**note** this is a breaking change. Users who used these functions (previously undocumented) on instances of Client, or via Client.prototype, are affected.
* Export escapeIdentifier and escapeLiteral from PG
These are standalone utility functions, they should not depend on a client instance.
Changes made:
- Refactored escapeIdentifier and escapeLiteral from client class to functions in utils
- Re-exported functions on client for backwards compatibility
- Update PG to export escapeIdentifier and escapeLiteral
- Updated tests to validate the newly exported functions from both entry points
- Updated documentation, added a "utilities" page where these helpers are discussed
* Ensure escape functions work via Client.prototype
Updated changes such that escapeIdentifier and escapeLiteral are usable via the client prototype
Updated tests to check for both entry points in client
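A short usage sketch of the two entry points described above:

```js
const { escapeIdentifier, escapeLiteral, Client } = require('pg')

escapeIdentifier('my table') // '"my table"'  – double-quoted identifier
escapeLiteral("O'Reilly")    // "'O''Reilly'" – single-quoted, embedded quotes doubled

// Still available on client instances for backwards compatibility:
const client = new Client()
client.escapeIdentifier('my table') // same result as the standalone function
```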
`result.rowCount` is initialized to `null`, but only set to an `int`-value if the returned command tag consists of more than one word, which is not the case in general.
For example, the `LOCK` command will return a command tag of simply the form `LOCK`, and thus `result.rowCount` will stay `null`.
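For example, inside an async function with a connected `client` (the table name is hypothetical):

```js
const res = await client.query('LOCK TABLE accounts IN SHARE MODE')
console.log(res.command)  // 'LOCK'
console.log(res.rowCount) // null – the LOCK tag carries no row count
```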
* fix: double client.end() hang
fixes https://github.com/brianc/node-postgres/issues/2716
`client.end()` will resolve early if the connection is already dead,
rather than waiting for an "end" event that will never arrive.
* fix: client.end() resolves when socket is fully closed
* Enable SASL tests in GitHub actions CI
* Add SASL test to ensure that client password is a string
* Fix SASL error handling to emit and bubble up errors
* Add informative error when SASL password is empty string
This changeset enables declaring the `stream` config value as a factory
method, providing much more flexible control of the socket connection.
Defining a custom `stream` config value allows the postgres driver to
support a larger variety of environments/setups such as proxy servers
and secure socket connections that are used by cloud providers such as
GCP.
Currently, usage of the `stream` config value is only viable for single
connections given that it's only possible to define a single socket
stream instance per new Client/Pool instance. By adding support to a
factory function, it becomes possible to enable usage of custom socket
streams for connection pools.
For reference, see the `mysql2` driver for MySQL (linked below) for
prior art example of this pattern.
Refs: ba15fe2570/lib/connection.js (L63-L65)
Refs: https://cloud.google.com/sql/docs/postgres/connect-overview
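A hedged sketch of what the factory form enables (the socket setup is illustrative):

```js
const net = require('net')
const { Pool } = require('pg')

// With factory support, every connection the pool opens gets its own
// socket instance instead of sharing one pre-built stream.
const pool = new Pool({
  stream: () => new net.Socket(),
})
```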
Signed-off-by: Ruy Adorno <ruyadorno@google.com>
* Fix error handling test
#2569 introduced a bug in the test. The test never passed, but because travis-ci lovingly broke the integration we had a long time ago, the tests weren't run in CI until I merged. So, this fixes the tests & does a better job cleaning up the query in an errored state.
* Update sponsors
This is the initial port to github actions. Still pending are the SSL and client SSL cert tests which are currently being skipped. But perfect is the enemy of the good here, and having no CI because travis-ci keeps not working is unacceptable.
Based on the suggestion from #2078. This adds ref/unref methods to the
Connection and Client classes and then uses them to allow the process to
exit if all of the connections in the pool are idle. This behavior is
controlled by the allowExitOnIdle flag to the Pool constructor; it defaults
to the old behavior.
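A small sketch of the flag described above:

```js
const { Pool } = require('pg')

// allowExitOnIdle opts in to the new behavior; it defaults to false,
// preserving the old behavior of keeping the process alive.
const pool = new Pool({ allowExitOnIdle: true })

pool.query('SELECT 1').then(({ rows }) => {
  console.log(rows)
  // With every pooled connection idle and unref'd, the process
  // may now exit without an explicit pool.end().
})
```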
* Add similar promise variables to read() and close() as seen in query()
* Add testing for promise specific usage
* Simplify tests as no real callbacks are involved
Removes usage of `done()` since we can end the test when we exit the function
Co-Authored-By: Charmander <~@charmander.me>
* Switch to let over var
Co-authored-by: Charmander <~@charmander.me>
Noticed that options.max is compared against client count directly, but there's a method wrapping it. I can't see any reason to duplicate it? And using _isFull means I can override that for the adaptive pooling idea I'm exploring :)
* Turn Cursor into an ES6 class
* Fix incorrect syntax in Cursor.end()
* Remove extraneous empty line
* Revert es6 change for end()
* Revert back to defining the end() method inside the class
* Use hanging indent to satisfy Prettier
It’s missing `co.wrap`, so it doesn’t actually run (Mocha does nothing with the paused generator). The test group that follows it, “using an ended pool”, covers the same thing anyway.
* pg: Re-export DatabaseError from 'pg-protocol'
Before, users would have to import DatabaseError from 'pg-protocol'. If
there are multiple versions of 'pg-protocol', you might end up using the
wrong one.
Closes #2378
* Update error-handling-tests.js
* Update query-error-handling-tests.js
Co-authored-by: Brian C <brian.m.carlson@gmail.com>
* Adding pg to peerDependencies
Yarn2 requires strict imports, i.e. all project dependencies need to exist in that project's package.json.
* pg version should be locked on the major version
Co-authored-by: Charmander <~@charmander.me>
Co-authored-by: Charmander <~@charmander.me>
* Make tests pass in github codespaces
There were a few tests which didn't specify a host or port, which didn't work well inside the codespaces docker environment. Added host & port where required. Also noticed one test wasn't actually _testing_, it was just `console.log`-ing its output, so I added proper assertions there. Finally set `PGTESTNOSSL: true` in the codespaces environment until I can get the postgres docker container configured w/ SSL...which I will do l8r.
* lint
* Add sha256 SASL helper
* Rename internal createHMAC(...) to hmacSha256(...)
* Add parseAttributePairs(...) helper for SASL
* Tighten arg checks in SASL xorBuffers(...)
* Add SASL nonce check for printable chars
* Add SASL server salt and server signature base64 validation
* Add check for non-empty SASL server nonce
* Rename SASL helper to parseServerFirstMessage(...)
* Add parameter validation to SASL continueSession(...)
* Split out SASL final message parsing into parseServerFinalMessage(...)
* Fix SCRAM tests
Removes custom assert.throws(...) so that the real one from the assert package is used and
fixes the SCRAM tests to reflect the updated error messages and actual checking of errors.
Previously the custom assert.throws(...) was ignoring the error signature validation.
This is fixing a double readyForQuery message being sent from the backend (because we were calling sync after an error, which I already fixed in the main driver). Also closes #2333
Move from 3 loops (prepareValue, check for buffers, write param types, write param values) to a single loop. This speeds up the insert benchmark by around 100 queries per second. Performance improvement depends on number of parameters being bound.
* `password` already has this set, but was a little long considering we only want to override the default of one property
* `ssl.key` was showing up in tracebacks
Replaces __dirname concatenation in pg test scripts so that editors like
VS Code can automatically generate typings and support code navigation (F12).
This switches the internals to use faster protocol parsing & serializing. This results in a significant (30% - 50%) speed up in some common query patterns. There is quite a bit more performance work I need to do, but this takes care of some initial stuff & removes a big fork in the code.
The `readyState` of a newly-created `net.Socket` changed from `'closed'` to `'open'` in Node 14.0.0, so this makes the JS driver work on Node 14.
`Connection` now always calls `connect` on its `stream` when `connect` is called on it.
* Move message writing to typescript lib
* Write more tests, cleanup code to some extent
* Rename package to something more representative of its purpose
* Remove unused code
* Small tweaks based on microbenchmarks
* Rename w/o underscore
* Change from transform stream
* Yeah a thing
* Make tests pass, add new code to travis
* Update 'best' benchmarks and include tsc in pretest script
* Need to add build early so we can create test tables
* logging
* Drop support for EOL versions of node (#2062)
* Drop support for EOL versions of node
* Re-add testing for node@8.x
* Revert changes to .travis.yml
* Update packages/pg-pool/package.json
Co-Authored-By: Charmander <~@charmander.me>
Co-authored-by: Charmander <~@charmander.me>
* Remove password from stringified outputs (#2066)
* Remove password from stringified outputs
There's a security concern where if you're not careful and you include your client or pool instance in console.log or stack traces it might include the database password. To widen the pit of success I'm making that field non-enumerable. You can still get at it...it just won't show up "by accident" when you're logging things now.
The backwards compatibility impact of this is very small, but it is still technically somewhat an API change so...8.0.
* Implement feedback
* Fix more whitespace the autoformatter changed
* Simplify code a bit
* Remove password from stringified outputs (#2070)
* Keep ConnectionParameters’s password property writable
`Client` writes to it when `password` is a function.
* Avoid creating password property on pool options
when it didn’t exist previously.
* Allow password option to be non-enumerable
to avoid breaking uses like `new Pool(existingPool.options)`.
* Make password property definitions consistent
in formatting and configurability.
Co-authored-by: Charmander <~@charmander.me>
* Make `native` non-enumerable (#2065)
* Make `native` non-enumerable
Making it non-enumerable means less spurious "Cannot find module"
errors in your logs when iterating over `pg` objects.
`Object.defineProperty` has been available since Node 0.12.
See https://github.com/brianc/node-postgres/issues/1894#issuecomment-543300178
* Add test for `native` enumeration
Co-authored-by: Gabe Gorelick <gabegorelick@gmail.com>
* Use class-extends to wrap Pool (#1541)
* Use class-extends to wrap Pool
* Minimize diff
* Test `BoundPool` inheritance
Co-authored-by: Charmander <~@charmander.me>
Co-authored-by: Brian C <brian.m.carlson@gmail.com>
* Continue support for creating a pg.Pool from another instance’s options (#2076)
* Add failing test for creating a `BoundPool` from another instance’s settings
* Continue support for creating a pg.Pool from another instance’s options
by dropping the requirement for the `password` property to be enumerable.
* Use user name as default database when user is non-default (#1679)
Not entirely backwards-compatible.
* Make native client password property consistent with others
i.e. configurable.
* Make notice messages not an instance of Error (#2090)
* Make notice messages not an instance of Error
Slight API cleanup to make a notice instance the same shape as it was, but not be an instance of error. This is a backwards incompatible change though I expect the impact to be minimal.
Closes #1982
* skip notice test in travis
* Pin node@13.6 for regression in async iterators
* Check and see if node 13.8 is still borked on async iterator
* Yeah, node still has changed edge case behavior on stream
* Emit notice messages on travis
* Revert "Revert "Support additional tls.connect() options (#1996)" (#2010)" (#2113)
This reverts commit 510a273ce45fb73d0355cf384e97ea695c8a5bcc.
* Fix ssl tests (#2116)
* Convert Query to an ES6 class (#2126)
The last missing `new` deprecation warning for pg 8.
Co-authored-by: Charmander <~@charmander.me>
Co-authored-by: Gabe Gorelick <gabegorelick@gmail.com>
Co-authored-by: Natalie Wolfe <natalie@lifewanted.com>
* Call callback when end called on unconnected client
Closes #2108
* Revert a bit of the change
* Use readyState because pending doesn't exist in node 8.x
* Update packages/pg/lib/client.js
use bring your own promise
Co-Authored-By: Charmander <~@charmander.me>
Co-authored-by: Charmander <~@charmander.me>
This summarizes the common forms, but omits some of the more particular,
and unnecessary forms, such as specifying UNIX domain sockets with
`pg://` URLs.
When an error happens on the socket, a potentially dead socket is kept open indefinitely by calling "connection.end()".
A similar issue is that the socket is kept open until a long-running query finishes, even though the connection was ended.
Since I believe GitHub Sponsors is your preferred way to donate now, maybe you want to totally get rid of the mention of Patreon, or mention that GitHub Sponsors is preferred?
* Remove double-send of ssl request packet
I missed the fact that we are already sending this. Since I don't have good test coverage for ssl [which I am planning on fixing next](https://github.com/brianc/node-postgres/issues/2009) this got missed.
I'm forcing an SSL test on travis. This will break for me locally as I don't have SSL enabled on my local test DB. Something I will also remedy.
* Prevent requeuing a broken client
If a client is not queryable, the pool should prevent requeuing instead
of strictly enforcing errors to be propagated back to it.
* Write tests for change
* Use node 13.6 in travis
Some weird behavior changed w/ async iteration in node 13.7...I'm not sure if this was an unintentional break or not but it definitely diverges in behavior from node 12 and earlier versions in node 13...so for now going to run tests on 13.6 to unblock the tests from running while I track this down.
* Update packages/pg-pool/test/releasing-clients.js
Co-Authored-By: Charmander <~@charmander.me>
* Update .travis.yml
Co-authored-by: Johannes Würbach <johannes.wuerbach@googlemail.com>
Co-authored-by: Charmander <~@charmander.me>
* Fix tests skipped because of missing suffixes
Mocha will happen eventually!
* Skip password tests when they can’t work
Will be made more visible when tests are ported to Mocha.
* Add testing with a user with a password to CI
Should reveal a bug in the password enumerability work, I think.
* Explain new CI matrix entry for password authentication
[ci skip]
* Fix pg-query-stream
There were some subtle behaviors with the stream being implemented incorrectly & not working as expected with async iteration. I've modified the code based on #2050 and comments in #2035 to have better test coverage of async iterables and update the internals significantly to more closely match the readable stream interface.
Note: this is a __breaking__ (semver major) change to this package as the close event behavior is changed slightly, and `highWaterMark` is no longer supported. It shouldn't impact most usage, but breaking regardless.
* Remove a bunch of additional code
* Add test for destroy + error propagation
* Add failing test for destroying unsubmitted stream
* Do not throw an uncatchable error when closing an unused cursor
While working on fixing some timing issues in pg-query-stream, I uncovered an issue where the cursor fires its 'close' callback before it's actually entirely ready to dispatch another query. This change makes the close callback trigger when the connection re-enters `readyForQuery` state.
* Warn when pg.Pool() isn’t called as a constructor
in preparation for #1541. `eval` is a bit awkward, but it’s more accurate than an `instanceof` check and will work on platforms new enough to support pg 8 (i.e. only not Node 4).
* Warn when Query() isn’t called as a constructor
The message is created with a fixed name.
56b7c4168d5aa084b2821279ee88d437a2b7636d, released in pg 2.4.0, was when this became applicable and the type of a `notice` event’s argument changed to an Error.
* Exit with error code when create-test-tables fails
* Add PostgreSQL 10 to CI
and change to Ubuntu 18.04 so building OpenSSL isn’t necessary, and move old PostgreSQL version tests to the latest Node LTS.
* Add Node 13 to CI
* Preserve create-test-tables’s compatibility with PostgreSQL <9.4
* Require latest pg-pool ^2.0.7
to limit variability of next pg’s installations.
* Ignore EPIPE when writing termination message
I don’t know why this wasn’t necessary for tests to pass before…
* Fix disconnection tests for pg-pool 2.0.7
In pg-pool 2.0.7, checked-out clients became responsible for their own 'error' events.
brianc/node-pg-pool#123
* Bump version of pg-cursor
This includes fixes in pg-cursor@2.0.1. I've relaxed semver a touch so I don't have to release a new version here just for patch changes to pg-cursor.
* Pass options to pg-cursor
fixes #55
When code was added to use a random named portal instead of the empty portal to support redshift we didn't update the close messages appropriately. This could result in postgres keeping locks on tables for modification if streaming and table modification were both done within a transaction. This update fixes the issue by always issuing a close command on the named portal when the query is finished.
fixes #56
* Upgrade dependencies
- There was a security vuln with mocha, so upgraded mocha.
- Upgraded versions of node we're going to check in travis
- Switched to yarn from npm
- Removed --no-exit as that's standard mocha behavior now
* Enable postgres on newer version of travis
There are no tests covering cursor.end(). It's also a weird method to have on the cursor as it terminates the connection to postgres internally. I don't recall why I added this method in the first place but it's not correct to close a connection to postgres by calling .end on a cursor...seems like a pretty big footgun. If you have a pooled client and you call `cursor.end` it'll close the client internally and likely confuse the pool. Plus it's just weird to be able to close a connection by calling .end on a query or cursor. So...I'm deprecating that method.
The throwOnRelease() function does not appear to be exposed anywhere,
and it does not appear to make any sense to have it as a standalone function,
as it overcomplicates things and makes the function call non-returning. Inlined it.
* feat: Add dynamic retrieval for client password
Adds option to specify a function for a client password. When the client
is connected, if the value of password is a function then it is invoked
to get the password to use for that connection.
The function must return either a string or a Promise that resolves to
a string. If the function throws or rejects with an error then it will
be bubbled up to the client.
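A brief sketch of the option (the token-fetching helper is hypothetical):

```js
const { Client } = require('pg')

// password may be a function returning a string or a Promise that
// resolves to a string; it is invoked when the client connects.
const client = new Client({
  user: 'app_user',
  password: async () => fetchShortLivedToken(), // hypothetical helper
})
```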
* test: Add testAsync() helper to Suite
Add testAsync() helper function to Suite to simplify running tests that
return a Promise. The test action is executed and if a synchronous error
is thrown then it is immediately considered failed. If the Promise resolves
successfully then the test is considered successful. If the Promise
rejects with an Error then the test is considered failed.
* test: Add tests for dynamic password
* test: Simplify testAsync error handling
* fix: Clean up dynamic password error handling and misc style
* test: Remove extra semicolons
* test: Change testAsync(...) calls to use arrow functions
* fix: Wrap self.password() invocation in an arrow function
* test: Add a comment to testAsync(...)
* Prevent double release with callback
When using the callback instead of client.release, double releasing
a client was possible causing clients to be re-added multiple times.
* Remove idleListener when client is in-use
When a client is in-use, the error handling should be done by the
consumer and not by the pool itself as this otherwise might cause
errors to be handled multiple times.
* Handle verify failures
* Fix deepEqual compare
In node 12 assert.deepEqual against a buffer & array no longer passes even when the values are the same. This makes sense. Updated the test to not use deepEqual in this case.
* Fix typo
* Enable eslint:recommended and remove unused eslint plugins
Enables eslint:recommended by disabling the options that would not pass. Also removes
dependencies for included but unused eslint plugins.
* Convert console.error(...) calls to use %s placeholders
* Enable eslint no-console rule
* Add and enable eslint-node-plugin
* Correct typo
* Enable eslint no-unused-vars
* Added the missing connect_timeout and keepalives_idle config parameters
* Implementation and tests for keepAliveInitialDelayMillis and connectionTimeoutMillis [squashed 4]
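A configuration sketch of the two settings (the values are illustrative):

```js
const { Client } = require('pg')

const client = new Client({
  connectionTimeoutMillis: 5000,      // give up on connect() after 5 seconds
  keepAlive: true,
  keepAliveInitialDelayMillis: 10000, // delay before the first TCP keepalive probe
})
```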
Adds a try/catch block around the prepareValue(...) invocations in query.prepare(...)
to ensure that any that throw an error are caught and bubbled up to the caller.
The documentation states that you can pass custom type processors to
query objects. See:
https://node-postgres.com/features/queries#types
This didn't actually work. This commit adds an initial implementation
of per-query type-parsing. Caveats:
* It does not work with pg-native. That would require a separate pull
request to pg-native, and a rather significant change to how that
library handles query results.
* Per-query types do not "inherit" from types assigned to the Client,
ala TypeOverrides.
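A sketch of per-query type parsing on the JS driver, assuming a connected `client` inside an async function (OID 20 is int8; the table name is hypothetical):

```js
const types = require('pg-types')

const result = await client.query({
  text: 'SELECT COUNT(*) AS n FROM users',
  types: {
    // Parse int8 as a Number for this query only; defer to the
    // default parsers for every other type.
    getTypeParser: (oid, format) =>
      oid === 20 ? (val) => parseInt(val, 10) : types.getTypeParser(oid, format),
  },
})
```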
* Test queued checkout after a connection failure
Co-authored-by: Johannes Würbach <johannes.wuerbach@googlemail.com>
* Fix queued checkout after a connection failure
Co-authored-by: Johannes Würbach <johannes.wuerbach@googlemail.com>
* Add read_timeout to connection settings
* Fix uncaught error issue
* Fix lint
* Fix "queryCallback is not a function"
* Added test and fixed error returning
* Added query timeout to native client
* Added test for timeout not reached
* Ensure error is the correct one
Correct test name
* Removed dubious check
* Added new test
* Improved test
Fixes the deprecation warning for using `new Buffer`.
The change is semver major in buffer-writer since we dropped support for node < 4.x, but otherwise it's a non-breaking change. Since node-postgres already requires node >= 4.x it's fine.
* Add tests for query callbacks after connection-level errors
* Ensure callbacks are executed for all queued queries after connection-level errors
Separates socket errors from error messages, sends socket errors to all queries in the queue, marks clients as unusable after socket errors.
This is not very pleasant but should maintain backwards compatibility…?
* Always call `handleError` asynchronously
This doesn’t match the original behaviour of the type errors, but it’s correct.
* Fix return value of `Client.prototype.query` in immediate error cases
* Mark clients with closed connections as unusable consistently
* Add tests for error event when connecting Client
* Ensure the promise and callback versions of Client#connect always have the same behaviour
* Give same error to queued queries as to active query when ending
and do so in the native Client as well.
* Restore original ordering between queued query callbacks and 'end' event
Defaults to built-in `tls.checkServerIdentity` method in the event one is not passed into `pgConfig.ssl`
Found breaking in v9.4.2 vs v9.4.1 a la 49054717b4ec0c6d477f04c2becd1f9680b2d13a
cc @tobio @brianc
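A hedged sketch of overriding the default (the file name and hostname are illustrative):

```js
const fs = require('fs')
const tls = require('tls')
const { Client } = require('pg')

const client = new Client({
  ssl: {
    ca: fs.readFileSync('server-ca.pem', 'utf8'),
    // Omit this property to get the built-in tls.checkServerIdentity;
    // supply one to pin an expected hostname instead.
    checkServerIdentity: (host, cert) =>
      tls.checkServerIdentity('db.example.com', cert),
  },
})
```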
Existing log code was outputting 'connecting new client' twice and saying 'new client connected', creating a false impression when an error (like a timeout) was present.
This happened to work before because `Query.portalName` was undefined,
but in order to be able to set the portal explicitly it should be using
`Query.portal`.
This commit adds some finer grained detail to handling the postmaster's
response to SSL negotiation packets, by accounting for the possibility
of an 'E' byte being sent back, and emitting an appropriate error.
In the naive case, the postmaster will respond with either 'S' (proceed
with an SSL connection) or 'N' (SSL is not supported). However, the
current if statement doesn't account for an 'E' byte being returned
by the postmaster, where an error is encountered (perhaps unable to
fork due to being out of memory).
By adding this case, we can prevent confusing error messages when SSL is
enforced and the postmaster returns an error after successful SSL
connections.
This also brings the connection handling further in line with
libpq, where 'E' is handled similarly as of this commit:
a49fbaaf8d
Given that there are no longer pre-7.0 databases out in the wild, I
believe this is a safe change to make, and should not break backwards
compatibility (unless matching on error message content).
* Replace if statement with switch, to catch 'S', 'E' and 'N' bytes
returned by the postmaster
* Return an Error for non 'S' or 'N' cases
* Expand and restructure unit tests for SSL negotiation packets
Support for the `noAssert` argument is dropped in the upcoming
Node.js v10.x. All input is then validated no matter whether this
is set to true or not. Just remove the argument because of that.
PostgreSQL 10 reports its version as only `major.minor`, so it can’t be parsed with semver. The `server_version_num` setting is a major version followed by a four-digit minor version since version 10, and was a three-digit major version followed by a two-digit minor version before that.
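A sketch of interpreting `server_version_num` under that scheme:

```js
function formatServerVersion(num) {
  if (num >= 100000) {
    // v10+: major followed by a four-digit minor, e.g. 100004 -> "10.4"
    return `${Math.floor(num / 10000)}.${num % 10000}`
  }
  // pre-10: three-digit major, two-digit minor, e.g. 90605 -> "9.6.5"
  return `${Math.floor(num / 10000)}.${Math.floor(num / 100) % 100}.${num % 100}`
}
```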
Changes the Makefile lint target to use the local copy of
eslint that would be installed in node_modules rather than
a global copy that may not be installed.
Centralize logic for md5 hashing of passwords for authentication. Adds
a new function postgresMd5PasswordHash(user, password, salt) to utils
and updates client.js and tests to use it.
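The scheme being centralized is PostgreSQL's classic md5 authentication; a sketch of the helper's contract:

```js
const crypto = require('crypto')

const md5 = (data) => crypto.createHash('md5').update(data, 'utf-8').digest('hex')

// 'md5' + md5(md5(password + user) + salt), where salt is the 4-byte
// salt from the backend's AuthenticationMD5Password message.
function postgresMd5PasswordHash(user, password, salt) {
  const inner = md5(password + user)
  const outer = md5(Buffer.concat([Buffer.from(inner), salt]))
  return 'md5' + outer
}
```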
* Add failing test for idle timer continuation after removal
* Clear idle timeout only for removed client
* Copy list of idle clients for modification during iteration
* Add connection & query timeout if all clients are checked out
This addresses [pg#1390](https://github.com/brianc/node-postgres/issues/1390).
Ensure connection timeout applies both for new connections and on an exhausted pool. I also made the library return an error when passing a function as the first param to `pool.query` - previously this threw a sync type error.
* Add pg-cursor to dev deps
Close is used to release a named portal (which isn't used by pg-cursor) or when you're early-terminating a cursor on the unnamed portal. Sending 'close' on a connection which has already sent 'readyForQuery' results in the connection responding with a _second_ 'readyForQuery' which causes a lot of issues within node-postgres as 'readyForQuery' is the signal to indicate the client has gone back into the idle state.
* Work on converting lib to standard
* Finish updating lib
* Finish linting lib
* Format test files
* Add .eslintrc with standard format
* Supply full path to eslint bin
* Move lint command to package.json
* Add eslint as dev dependency
This is a small change and is _kinda_ backwards compatible since the old behavior was to throw an error, but if someone was relying on anything with `.map` working as values it would break them, so it's in a major semver bump.
Clients are not reusable. This changes the client to raise errors whenever you try to reconnect a client that's already been used. They're cheap to create: just instantiate a new one (or use the pool) 😉.
Closes #1352
* Initial work
* Make progress on custom pool
* Make all original tests pass
* Fix test race
* Fix test when DNS is missing
* Test more error conditions
* Add test for byop
* Add BYOP tests for errors
* Add test for idle client error expunging
* Fix typo
* Replace var with const/let
* Remove var usage
* Fix linting
* Work on connection timeout
* Work on error condition tests
* Remove logging
* Add connection timeout
* Add idle timeout
* Test for returning to client to pool after error
fixes #48
* Add idleTimeout support to native client
* Add pg as peer dependency
fixes #45
* Rename properties
* Fix lint
* use strict
* Add draining to pool.end
* Ensure ending pools drain properly
* Remove yarn.lock
* Remove object-assign
* Remove node 8
* Remove closure for waiter construction
* Ensure client.connect is never sync
* Fix lint
* Change to es6 class
* Code cleanup & lint fixes
Before this commit, when someone tried to insert a Buffer into an array,
the library would try to escape it (by calling the `escapeElement` on
it), which would fail because buffers don't have a `replace` method.
This adds deprecations in preparation for `pg@7.0`
- deprecate using event emitters on automatically created results from client.query.
- deprecate query.promise() - it should never have been a public method and it's not documented. I need to do better about using _ prefix on private methods in the future.
- deprecate singleton pool on the `pg` object. `pg.connect`, `pg.end`, and `pg.cancel`.
Removes the default node 6 / PG 9.6 combination and adds it to the
Travis-CI matrix of combinations. That way all combinations are
defined in a single place.
* Add client connectionString tests (#1310)
* Remove redundant tests
* Add client connectionString test
Add test to ensure { connectionString } is respected as an argument to the client constructor
* Add test for connection string property
Also fixed some legacy require statements.
* Normalize native error properties
Map native error properties to the same property names we use for errors from the JS driver.
Fixes #972. Fixes #938
* Remove unsupported Node versions 0.10 and 0.12 from CI
* Replace deprecated Buffer constructor with .from/.alloc
* Remove Promise polyfill
* Make use of Object.assign
* Remove checks for versions of Node earlier than 4
* Remove Buffer#indexOf fallback for Node 0.10
* Going red: using a config object creates two pools when missing some params
It should only create a pool in a consistent way, even if some params
are not provided in the first place.
* Delay the pool name generation to make it consistent between calls
* Don't fallback to empty object as config is already defined
* Fix escaping of libpq connection string properties
Fix handling of libpq connection properties to properly escape single
quotes and backslashes. Previously the values were surrounded in single
quotes which handled whitespace within the property value, but internal
single quotes and backslashes would cause invalid connection strings to
be generated.
* Update expected output in test to be quoted
Update the expected host output in the connection parameter test
to expect it to be surrounded by single quotes.
* Add test for configs with quotes and backslashes
* Improve Readme.md for not so advanced users
1. Add a brief description of the 3 possible ways of executing queries: passing the query to a pool, borrowing a client from a pool or obtaining an exclusive client. Give examples for the 3 of them.
2. Use the examples to teach how to reuse a pool in all of your project. This should be helpful for not so advanced users and prevents mistakes.
3. Open a troubleshooting section.
* Shrink Troubleshooting and Point to Examples
1. Troubleshooting/FAQ section will only contain a reference to the wiki FAQ. I've already moved the content to the wiki.
2. At the end of "Pooling example" point to the wiki example page. Also indicate that there they can find how to use node-postgres with promises and async/await. I've already created that content in the wiki.
On Windows, after a connect attempt has failed, an error event with
ECONNRESET is emitted after the real connect error is propagated to the
connect callback, because the connection is not in ending state
(connection._ending) where ECONNRESET is ignored. This change ends the
connection when connect has failed.
This fixes #746.
* Avoid infinite loop on malformed message
If handling of such messages is deemed unimportant, `indexOf` is still faster (~40%) and cleaner than a manual loop.
Addresses #1048 to an extent.
* Use indexOf fallback for Node ≤0.12
They were disabled by 4cdd7a116bab03c6096ab1af8f0f522b04993768 without comment; it seems that this might have been unintentional?
In any case, they should probably be enabled, updated, or removed.
* Use container-based CI
* Remove unnecessary CI configuration
* Use Node 6/PostgreSQL 9.6 as default test
… rather than testing 0.10 twice with unspecified PostgreSQL.
* Use `precise` for PostgreSQL 9.1
According to https://docs.travis-ci.com/user/database-setup/, 9.1 isn’t supported on trusty.
* Fix Node 0.10 and 0.12 CI builds
These binaries appear to have been built using g++ with flags that clang doesn’t support. Or something.
* Revert "When connection fail, emit the error. (#28)"
This reverts commit 6a7edabc22e36db7386c97ee93f08f957364f37d.
The callback passed to `Pool.prototype.connect` should be responsible for handling connection errors. The `error` event is documented to be:
> Emitted whenever an idle client in the pool encounters an error.
This isn’t the case of an idle client in the pool; it never makes it into the pool.
It also breaks tests on pg’s master because of nonspecific dependencies.
* Don’t create promises when callbacks are provided
It’s incorrect to do so. One consequence is that a rejected promise will be unhandled, which is currently annoying, but also dangerous in the future:
> DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
The way callbacks are used currently also causes #24 (hiding of errors thrown synchronously from the callback). One fix for that would be to call them asynchronously from inside the `new Promise()` executor:
process.nextTick(cb, error);
I don’t think it’s worth implementing, though, since it would still be backwards-incompatible – just less obvious about it.
Also fixes a bug where the `Pool.prototype.connect` callback would be called twice if there was an error.
* Use Node-0.10-compatible `process.nextTick`
* When connection fail, emit the error.
If client connect failed, emit the connection error rather than swallowing it.
* Add test for connection error.
* Pool.query calls cb if connect() fails
Old behavior was that if connect called back with an error, the promise would get rejected but the cb function would never get called.
* Test that Pool.query passes connection errors to callback
* Fixes to standardjs compliance
Setting PostgreSQL 9.5 as the main version to test against.
NOTE: The following settings are required for 9.5 to work:
```
sudo: required
dist: trusty
```
A long-standing bug was that the pure JS client didn't accept or call a callback on `client.end`. This is inconsistent with both the documentation & general node patterns.
This fixes the issue & adds a test. The issue did not exist in the native version of the client.
* fail: "connect" event only on success
Double callback invocation will also cause this to fail.
* avoid double callback: _create
If `client.connect` returns an error, then the callback for `Pool#_create` is only invoked once. Also the `connect` event is only emitted on a successful connection, the client is otherwise rather useless.
* legacy compat; don't use Object.assign
* legacy compat; events.EventEmitter
* Support function-like construction
Remove the necessity of using `new` in front of the `Pool`, i.e. allow it to be used as a regular function. This is following https://github.com/brianc/node-postgres/issues/1077
* add Pool factory test
The promise adapter I had implemented wasn't spec compliant: it didn't accept both `onSuccess` and `onFailure` in the call to `query#then`. This subtly broke yield & async/await because they both rely on `onError` being passed into `Promise#then`. The pool was also not returning the promise after a client was acquired, which broke awaiting on `pool.connect` - this is also fixed now.
* Initial work on removing internal pool
* Port backwards-compatible properties
* Cleanup test execution & makefile cruft
* Attempt to fix flakey error test
* Make Query & NativeQuery implement the promise interface
* Fix test
* Use older node API for checking listener length
* Do not test for promises on node@v0.10.0
When a user provides a knex select statement with undefined options in the where clause, it is not properly handled and gives an ambiguous error message: `Unhandled rejection TypeError: Cannot read property 'toString' of undefined.` This PR will be helpful to users as it will tell them the exact problem.
* Change test matrix in .travis.yml
Add tests for node @ v6. Remove node & postgres test permutations for older versions of node.
* Remove sub-versions
* Remove minor version from node 4
There was some nasty global-ish variable reference updating happening when the native module 'initializes' after its require with `require('pg').native`
This fixes the issue by making sure both `require('pg')` and `require('pg').native` each initialize their own context in isolation and no weird global-ish references are used & subsequently stomped on.
I noticed that query cancellation was not working when connecting through pgbouncer,
even though it worked fine when directly connected. This is because we're appending an
extra null byte, and pgbouncer is strict about the packet length.
(per http://www.postgresql.org/docs/9.1/static/protocol-message-formats.html)
This removes the extraneous byte, which fixes cancellation against pgbouncer.
One of the tests was failing because it was testing that when a stream became readable it never returned a `null` datum on call to `stream.read()`.
In fact, when a readable stream drains it should & does return `null` for calls to `stream.read()` as described [here](https://nodejs.org/api/stream.html#stream_event_readable)
I updated the test to account for this, and they pass now.
Add a matrix configuration to .travis.yml to test permutations of node and PostgreSQL.
Node versions tested against are v0.10, v0.11, v0.12 and io.js v1.
PostgreSQL versions tested against are 9.1, 9.2, 9.3, and 9.4.
When a __prepared statement__ has no body in the query the backend responds with an `emptyQuery` message but never with a `commandComplete` or `errorResponse` message. The client was hanging forever waiting for one of the other two expected messages. The server was hanging forever waiting for the client to respond with a `sync` message. This change has the client send the required `sync` on receipt of an `emptyQuery` message when the query is a prepared statement. Fixes #822
Postgres has a 63 character limit on query names. To avoid potential footgunning of users we'll log to their stderr if they use a longer query name.
For reference: https://github.com/brianc/node-postgres/pull/772
The code was failing with pg-copy-streams, which creates a query object that doesn't have any _result. Make setting the type parser optional: only set it when a _result object is available on the query.
Fixes a case where you connect & immediately disconnect and _sometimes_ your notification subscription starts the reader on the libpq instance after the libpq instance is disconnected. This segfaults the app.
This completes the port from the old native bindings to the new node-pg-native bindings!
Time to build in support for older versions of postgres & start the pull request process.
Adding this would be consistent since you list node-any-db and sql-bricks. I chose a position in a list assuming you move from more node-postgres specific to general and abstract.
Adds a check in the error listener on the client in the pool, to
prevent calling destroy on a client when it is already being
destroyed.
Without this check, if an error occurs during the ending of the
stream, such as a timeout, the client is never removed from
the pool and weird things happen.
Postgres generally does not emit a SELECT tag after a SELECT query, but
it does emit that tag after a CREATE TABLE x AS SELECT query.
Example:
postgres=# create table t as select 1;
SELECT 1
End statements with semicolons, to be consistent with the surrounding
code.
Added a new unit test to ensure environment variables are honored when
parsing a connection string.
Added a TODO to cleanup a test that emits messages using console.log().
Correct a query's syntax. Looks like a good thing to do even though the
syntax doesn't matter in mocked out tests.
Removed a test that tests for SELECT tags; AFAIK, SELECT commands don't
emit a tag.
Improve the code and clarity of unit tests in escape-tests.js. And
removed the related integration tests since it has been demonstrated in
the unit tests that a connection is not needed for escaping the literals
and identifiers.
a fix was provided in 5079c1e0c41f431ac2e02c40ebd875d8fbb34004;
test is modeled on query-error-handling-tests.js;
test both kill query and disconnection on prepared statement execution;
make connection error string message consistent between native and non-native;
disable test server-side kill for native as it hangs;
sync can cause error to be emitted so we catch that;
we also move _ending state before _send is called.
- appears that timestamp queries emit a lot of `rows` with length == 0
- `self.once('end')` is added each of these times
- assertion on listener count shows that more than 10 listeners are applied
Attempt to call a `toPostgres` method on objects passed as query values
before converting them to JSON. This allows custom types to convert
themselves to the appropriate PostgreSQL literal.
This strategy is fully backwards-compatible and uses the same pattern as
the `toJSON` override.
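A sketch of the pattern (the class, `client`, and table are hypothetical):

```js
// A point type that knows how to render itself as a PostgreSQL literal:
class Point {
  constructor(x, y) {
    this.x = x
    this.y = y
  }
  toPostgres() {
    return `(${this.x},${this.y})` // point literal, e.g. (1,2)
  }
}

client.query('INSERT INTO shapes (center) VALUES ($1)', [new Point(1, 2)])
```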
`arrayString` duplicated too much of `prepareValue`'s logic, and so
didn't receive bugfixes for handling dates with timestamps. Defer to
`prepareValue` whenever possible.
This change enforces double-quote escaping of all array elements,
regardless of whether escaping is necessary. This has the side-effect of
properly escaping JSON arrays.
Before the change, it would crash with a very unhelpful error message:
[project path]/node_modules/pg/lib/query.js:92
connection.sync();
^
TypeError: Cannot call method 'sync' of undefined
at Query.handleError ([project path]/node_modules/pg/lib/query.js:92:16)
at Client.connect ([project path]/node_modules/pg/lib/client.js:178:24)
at g (events.js:185:14)
at EventEmitter.emit (events.js:85:17)
at Socket.<anonymous> ([project path]/node_modules/pg/lib/connection.js:60:10)
at Socket.EventEmitter.emit (events.js:85:17)
at TCP.onread (net.js:424:51)
After the change, it reports a much more helpful
error running query [Error: Stream unexpectedly ended during query execution]
pg-cursor no longer returns the empty array 'done' signal to the callback
until the cursor receives a readyForQuery message. This means pg-query-stream
will not emit 'close' or 'end' events until the server is __truly__ ready for
the next query. This fixes some race-conditions where some queries
are triggered off of the `end` event of the query-stream
closes #3
Consider a system where one component is scheduling tasks that yield
streams, and passing them to (unknown) clients for consumption.
It would be useful for the scheduler to know that the query
underlying the stream is completed (so it can continue on to its
next task) without having to wait for the consumer to finish reading
all results.
Allows connecting to a specific database through these ways:
pg.connect('/some/path database', callback);
pg.connect('socket:/some/path?db=database', callback)
pg.connect('socket:/some/path?db=database&encoding=utf8', callback)
Switching the result of all COUNT operations to a string is
a pretty nasty breaking change, and the majority of us aren't
going to be hitting numbers larger than Number.MAX_VALUE
The current connection url handling fails when the password contains
encoded special characters: After the encodeURI, the special
characters from the password are double encoded, and the password is
rejected by postgres.
Proposed fix handles one level of double encoding, and while it
might break compatibility with passwords like "asdfg%77fgh" (which
would've been escaped to asdfg%2577fgh before this patch), I
strongly feel that maintaining backwards compatibility is in this
case less important than following standards and discouraging bad
coding practices.
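A sketch of the encouraged practice (values are illustrative): percent-encode each credential exactly once when building the URL, and let the parser decode it exactly once.

```js
// Encode the password once; no double encoding.
var password = encodeURIComponent('p@ss%word!')
var conString = 'postgres://appuser:' + password + '@localhost/postgres'
```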
All major and minor releases are briefly explained below.
For richer information consult the commit log on github with referenced pull requests.
We do not include break-fix (patch) releases in this file.
## pg@8.16.0
- Add support for [min connection pool size](https://github.com/brianc/node-postgres/pull/3438).
## pg@8.15.0
- Add support for [esm](https://github.com/brianc/node-postgres/pull/3423) importing. CommonJS importing is still also supported.
## pg@8.14.0
- Add support for SCRAM-SHA-256-PLUS, i.e. [channel binding](https://github.com/brianc/node-postgres/pull/3356).
## pg@8.13.0
- Add ability to specify query timeout on [per-query basis](https://github.com/brianc/node-postgres/pull/3074).
## pg@8.12.0
- Add `queryMode` config option to [force use of the extended query protocol](https://github.com/brianc/node-postgres/pull/3214) on queries without any parameters.
## pg@8.10.0
- Emit `release` event when client is returned to [the pool](https://github.com/brianc/node-postgres/pull/2845).
## pg@8.9.0
- Add support for [stream factory](https://github.com/brianc/node-postgres/pull/2898).
- [Better errors](https://github.com/brianc/node-postgres/pull/2901) for SASL authentication.
- [Use native crypto module](https://github.com/brianc/node-postgres/pull/2815) for SASL authentication.
## pg@8.8.0
- Bump minimum required version of [native bindings](https://github.com/brianc/node-postgres/pull/2787).
- Catch previously uncatchable errors thrown in [`pool.query`](https://github.com/brianc/node-postgres/pull/2569).
- Prevent the pool from blocking the event loop if all clients are [idle](https://github.com/brianc/node-postgres/pull/2721) (and `allowExitOnIdle` is enabled).
- Support `lock_timeout` in [client config](https://github.com/brianc/node-postgres/pull/2779).
- Fix errors thrown in callbacks from [interfering with cleanup](https://github.com/brianc/node-postgres/pull/2753).
- Add [ParameterDescription](https://github.com/brianc/node-postgres/pull/2464) support to protocol parsing.
- Fix typescript [typedefs](https://github.com/brianc/node-postgres/pull/2490) with `--isolatedModules`.
### pg-query-stream@4.0.0
- Library has been [converted](https://github.com/brianc/node-postgres/pull/2376) to Typescript. The behavior is identical, but there could be subtle breaking changes due to class names changing or other small inconsistencies introduced by the conversion.
### pg@8.4.0
- Switch to optional peer dependencies & remove [semver](https://github.com/brianc/node-postgres/commit/a02dfac5ad2e2abf0dc3a9817f953938acdc19b1) package which has been a small thorn in the side of a few users.
- Export `DatabaseError` from [pg-protocol](https://github.com/brianc/node-postgres/commit/58258430d52ee446721cc3e6611e26f8bcaa67f5).
- Add support for `sslmode` in the [connection string](https://github.com/brianc/node-postgres/commit/6be3b9022f83efc721596cc41165afaa07bfceb0).
### pg@8.3.0
- Support passing a [string of command line options flags](https://github.com/brianc/node-postgres/pull/2216) via the `{ options: string }` field on client/pool config.
### pg@8.2.0
- Switch internal protocol parser & serializer to [pg-protocol](https://github.com/brianc/node-postgres/tree/master/packages/pg-protocol). The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster. This is the first work to land in an on-going performance improvement initiative I'm working on. Stay tuned as things are set to get much faster still! :rocket:
### pg-cursor@2.2.0
- Switch internal protocol parser & serializer to [pg-protocol](https://github.com/brianc/node-postgres/tree/master/packages/pg-protocol). The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster.
### pg-query-stream@3.1.0
- Switch internal protocol parser & serializer to [pg-protocol](https://github.com/brianc/node-postgres/tree/master/packages/pg-protocol). The change is backwards compatible but results in a significant performance improvement across the board, with some queries as much as 50% faster.
### pg@8.1.0
- Switch to using [monorepo](https://github.com/brianc/node-postgres/tree/master/packages/pg-connection-string) version of `pg-connection-string`. This includes better support for SSL argument parsing from connection strings and ensures continuity of support.
- Add `&ssl=no-verify` option to connection string and `PGSSLMODE=no-verify` environment variable support for the pure JS driver. This is equivalent of passing `{ ssl: { rejectUnauthorized: false } }` to the client/pool constructor. The advantage of having support in connection strings and environment variables is it can be "externally" configured via environment variables and CLI arguments much more easily, and should remove the need to directly edit any application code for [the SSL default changes in 8.0](https://node-postgres.com/announcements#2020-02-25). This should make using `pg@8.x` significantly less difficult on environments like Heroku for example.
### pg-pool@3.2.0
- Same changes to `pg` impact `pg-pool` as they both use the same connection parameter and connection string parsing code for configuring SSL.
### pg@8.0.0
#### note: for detailed release notes please [check here](https://node-postgres.com/announcements#2020-02-25)
- Remove versions of node older than `6 lts` from the test matrix. `pg>=8.0` may still work on older versions but it is no longer officially supported.
- Change default behavior when not specifying `rejectUnauthorized` with the SSL connection parameters. Previously we defaulted to `rejectUnauthorized: false` when it was not specifically included. We now default to `rejectUnauthorized: true`. Manually specify `{ ssl: { rejectUnauthorized: false } }` for old behavior.
- Change [default database](https://github.com/brianc/node-postgres/pull/1679) when not specified to use the `user` config option if available. Previously `process.env.USER` was used.
- Change `pg.Pool` and `pg.Query` to [be](https://github.com/brianc/node-postgres/pull/2126) an [es6 class](https://github.com/brianc/node-postgres/pull/2063).
- Make `pg.native` non enumerable.
- `notice` messages are [no longer instances](https://github.com/brianc/node-postgres/pull/2090) of `Error`.
- Passwords no longer [show up](https://github.com/brianc/node-postgres/pull/2070) when instances of clients or pools are logged.
### pg@7.18.0
- This will likely be the last minor release before pg@8.0.
- This version contains a few bug fixes and adds a deprecation warning for [a pending change in 8.0](https://github.com/brianc/node-postgres/issues/2009#issuecomment-579371651) which will flip the default behavior over SSL from `rejectUnauthorized` from `false` to `true` making things more secure in the general use case.
### pg-query-stream@3.0.0
- [Rewrote stream internals](https://github.com/brianc/node-postgres/pull/2051) to better conform to node stream semantics. This should make pg-query-stream much better at respecting [highWaterMark](https://nodejs.org/api/stream.html#stream_new_stream_readable_options) and getting rid of some edge case bugs when using pg-query-stream as an async iterator. Due to the size and nature of this change (effectively a full re-write) it's safest to bump the semver major here, though almost all tests remain untouched and still passing, which brings us to a breaking change to the API....
- Changed `stream.close` to `stream.destroy` which is the [official](https://nodejs.org/api/stream.html#stream_readable_destroy_error) way to terminate a readable stream. This is a **breaking change** if you rely on the `stream.close` method on pg-query-stream...though should be just a find/replace type operation to upgrade as the semantics remain very similar (not exactly the same, since internals are rewritten, but more in line with how streams are "supposed" to behave).
- Unified the `config.batchSize` and `config.highWaterMark` to both do the same thing: control how many rows are buffered in memory. The `ReadableStream` will manage exactly how many rows are requested from the cursor at a time. This should give better out of the box performance and help with efficient async iteration.
### pg@7.17.0
- Add support for `idle_in_transaction_session_timeout` [option](https://github.com/brianc/node-postgres/pull/2049).
### 7.16.0
- Add optional, opt-in behavior to test new, [faster query pipeline](https://github.com/brianc/node-postgres/pull/2044). This is experimental, and not documented yet. The pipeline changes will grow significantly after the 8.0 release.
### 7.15.0
- Change repository structure to support lerna & future monorepo [development](https://github.com/brianc/node-postgres/pull/2014).
- [Warn about deprecation](https://github.com/brianc/node-postgres/pull/2021) for calling constructors without `new`.
### 7.14.0
- Reverts 7.13.0 as it contained [an accidental breaking change](https://github.com/brianc/node-postgres/pull/2010) for self-signed SSL cert verification. 7.14.0 is identical to 7.12.1.
### 7.13.0
- Add support for [all tls.connect()](https://github.com/brianc/node-postgres/pull/1996) options.
### 7.12.0
- Add support for [async password lookup](https://github.com/brianc/node-postgres/pull/1926).
### 7.11.0
- Add support for [connection_timeout](https://github.com/brianc/node-postgres/pull/1847/files#diff-5391bde944956870128be1136e7bc176R63) and [keepalives_idle](https://github.com/brianc/node-postgres/pull/1847).
### 7.10.0
- Add support for [per-query types](https://github.com/brianc/node-postgres/pull/1825).
### 7.9.0
- Add support for [sasl/scram authentication](https://github.com/brianc/node-postgres/pull/1835).
### 7.8.0
- Add support for passing [secureOptions](https://github.com/brianc/node-postgres/pull/1804) SSL config.
- Upgrade [pg-types](https://github.com/brianc/node-postgres/pull/1806) to 2.0.
### 7.7.0
- Add support for configurable [query timeout](https://github.com/brianc/node-postgres/pull/1760) on a client level.
### 7.6.0
- Add support for ["bring your own promise"](https://github.com/brianc/node-postgres/pull/1518)
### 7.5.0
- Better [error message](https://github.com/brianc/node-postgres/commit/11a4793452d618c53e019416cc886ad38deb1aa7) when passing `null` or `undefined` to `client.query`.
- Better [error handling](https://github.com/brianc/node-postgres/pull/1503) on queued queries.
### 7.4.0
- Add support for [Uint8Array](https://github.com/brianc/node-postgres/pull/1448) values.
### 7.3.0
- Add support for [statement timeout](https://github.com/brianc/node-postgres/pull/1436).
### 7.2.0
- Pinned pg-pool and pg-types to a tighter semver range. This is likely not a noticeable change for you unless you were specifically installing older versions of those libraries for some reason, but making it a minor bump here just in case it could cause any confusion.
### 7.1.0
#### Enhancements
- [You can now supply both a connection string and additional config options to clients.](https://github.com/brianc/node-postgres/pull/1363)
### 7.0.0
#### Breaking Changes
- Drop support for node <`4.x`.
- Remove the `pg.connect`, `pg.end`, and `pg.cancel` singleton methods.
- `Client#connect(callback)` now returns `undefined`. It used to return an event emitter.
- Upgrade [pg-pool](https://github.com/brianc/node-pg-pool) to `2.x`.
- Upgrade [pg-native](https://github.com/brianc/node-pg-native) to `2.x`.
- Standardize error message fields between JS and native driver. The only breaking changes were in the native driver as its field names were brought into alignment with the existing JS driver field names.
- Results from multi-statement text queries such as `SELECT 1; SELECT 2;` are now returned as an array of results instead of a single result whose one rows array contains rows from both queries.
[Please see here for a migration guide](https://node-postgres.com/guides/upgrading)
- Add `Client#connect() => Promise<void>` and `Client#end() => Promise<void>` calls. Promises are now returned from all async methods on clients _if and only if_ no callback was supplied to the method.
- Add `connectionTimeoutMillis` to pg-pool.
### v6.2.0
- Add support for [parsing `replicationStart` messages](https://github.com/brianc/node-postgres/pull/1271/files).
### v6.1.0
- Add optional callback parameter to the pure JavaScript `client.end` method. The native client already supported this.
### v6.0.0
#### Breaking Changes
- Remove `pg.pools`. There is still a reference kept to the pools created & tracked by `pg.connect` but it has been renamed, is considered private, and should not be used. Accessing this API directly was uncommon and was _supposed_ to be private but was incorrectly documented on the wiki. Therefore, it is a breaking change of an (unintentionally) public interface to remove it by renaming it & making it private. Eventually `pg.connect` itself will be deprecated in favor of instantiating pools directly via `new pg.Pool()` so this property should become completely moot at some point. In the mean time...check out the new features...
#### New features
- Replace internal pooling code with [pg-pool](https://github.com/brianc/node-pg-pool). This is the first step in eventually deprecating and removing the singleton `pg.connect`. The pg-pool constructor is exported from node-postgres at `require('pg').Pool`. It provides a backwards compatible interface with `pg.connect` as well as a promise based interface & additional niceties.
You can now create an instance of a pool and don't have to rely on the `pg` singleton for anything:
```js
var pg = require('pg')
var pool = new pg.Pool()
// your friendly neighborhood pool interface, without the singleton
pool.connect(function (err, client, done) {
  // ...
})
```
Promise support & other goodness lives now in [pg-pool](https://github.com/brianc/node-pg-pool).
**Please** read the readme at [pg-pool](https://github.com/brianc/node-pg-pool) for the full api.
- Included support for tcp keep alive. Enable it as follows:
```js
var client = new Client({ keepAlive: true })
```
This should help with backends incorrectly considering idle clients to be dead and prematurely disconnecting them.
### v5.1.0
- Make the query object returned from `client.query` implement the promise interface. This is the first step towards promisifying more of the node-postgres api.
Example:
```js
var client = new Client()
client.connect()
client.query('SELECT $1::text as name', ['brianc']).then(function (res) {
  console.log('hello from', res.rows[0])
  client.end()
})
```
### v5.0.0
#### Breaking Changes
- `require('pg').native` now returns null if the native bindings cannot be found; previously, this threw an exception.
#### New Features
- better error message when passing `undefined` as a query parameter
- support for `defaults.connectionString`
- support for `returnToHead` being passed to [generic pool](https://github.com/coopernurse/node-pool)
### v4.5.0
- Add option to parse JS date objects in query parameters as [UTC](https://github.com/brianc/node-postgres/pull/943)
### v4.4.0
- Warn to `stderr` if a named query exceeds 63 characters, which is the max length supported by postgres.
### v4.3.0
- Unpin `pg-types` semver. Allow it to float against `pg-types@1.x`.
### v4.2.0
- Support for additional error fields in postgres >= 9.3 if available.
### v4.1.0
- Allow type parser overrides on a [per-client basis](https://github.com/brianc/node-postgres/pull/679)
### v4.0.0
- Make [native bindings](https://github.com/brianc/node-pg-native.git) an optional install with `npm install pg-native`
- No longer surround query result callback with `try/catch` block.
- Remove built in COPY IN / COPY OUT support - better implementations provided by [pg-copy-streams](https://github.com/brianc/node-pg-copy-streams.git) and [pg-native](https://github.com/brianc/node-pg-native.git)
### v3.6.0
- Include support for [parsing JSONB](https://github.com/brianc/node-pg-types/pull/13) (supported in postgres 9.4)
### v3.5.0
- Include support for parsing boolean arrays
### v3.4.0
- Include port as connection parameter to [unix sockets](https://github.com/brianc/node-postgres/pull/604)
- Better support for odd [date parsing](https://github.com/brianc/node-pg-types/pull/8)
### v3.2.0
- Add support for parsing [date arrays](https://github.com/brianc/node-pg-types/pull/3)
- Expose array parsers on [pg.types](https://github.com/brianc/node-pg-types/pull/2)
- Allow [pool](https://github.com/brianc/node-postgres/pull/591) to be configured
### v3.1.0
- Add [count of the number of times a client has been checked out from the pool](https://github.com/brianc/node-postgres/pull/556)
- Emit `end` from `pg` object [when a pool is drained](https://github.com/brianc/node-postgres/pull/571)
### v3.0.0
#### Breaking changes
- [Parse the DATE PostgreSQL type as local time](https://github.com/brianc/node-postgres/pull/514)
After [some discussion](https://github.com/brianc/node-postgres/issues/510) it was decided node-postgres was non-compliant in how it was handling DATE results. They were being converted to UTC, but the PostgreSQL documentation specifies they should be returned in the client timezone. This is a breaking change, and if you use the `date` type you might want to examine your code and make sure nothing is impacted.
- [Fix possible numeric precision loss on numeric & int8 arrays](https://github.com/brianc/node-postgres/pull/501)
pg@v2.0 included changes to not convert large integers into their JavaScript number representation because of possibility for numeric precision loss. The same types in arrays were not taken into account. This fix applies the same type of type-coercion rules to arrays of those types, so there will be no more possible numeric loss on an array of very large int8s for example. This is a breaking change because now a return type from a query of `int8[]` will contain _string_ representations
of the integers. Use your favorite JavaScript bignum module to represent them without precision loss, or override the type converter to return the old-style arrays again.
- [Fix to input array of dates being improperly converted to utc](https://github.com/benesch/node-postgres/commit/c41eedc3e01e5527a3d5c242fa1896f02ef0b261#diff-7172adb1fec2457a2700ed29008a8e0aR108)
Single `date` parameters were properly sent to the PostgreSQL server properly in local time, but an input array of dates was being changed into utc dates. This is a violation of what PostgreSQL expects. Small breaking change, but none-the-less something you should check out if you are inserting an array of dates.
- [Query no longer emits `end` event if it ends due to an error](https://github.com/brianc/node-postgres/commit/357b64d70431ec5ca721eb45a63b082c18e6ffa3)
This is a small change to bring the semantics of query more in line with other EventEmitters. The tests all passed after this change, but I suppose it could still be a breaking change in certain use cases. If you are doing clever things with the `end` and `error` events of a query object you might want to check to make sure it's still behaving normally, though it is most likely not an issue.
The long & short of it is now any object you supply in the list of query values will be inspected for a `.toPostgres` method. If the method is present it will be called and its result used as the raw text value sent to PostgreSQL for that value. This allows the same type of custom type coercion on query parameters as was previously afforded to query result values.
If domains are active node-postgres will honor them and do everything it can to ensure all callbacks are properly fired in the active domain. If you have tried to use domains with node-postgres (or many other modules which pool long lived event emitters) you may have run into an issue where the active domain changes before and after a callback. This has been a longstanding footgun within node-postgres and I am happy to get it fixed.
- [Disconnected clients now removed from pool](https://github.com/brianc/node-postgres/pull/543)
Avoids a scenario where your pool could fill up with disconnected & unusable clients.
- [Break type parsing code into separate module](https://github.com/brianc/node-postgres/pull/541)
To provide better documentation and a clearer explanation of how to override the query result parsing system we broke the type converters [into their own module](https://github.com/brianc/node-pg-types). There is still work around removing the 'global-ness' of the type converters so each query or connection can return types differently, but this is a good first step and allows a much more obvious way to return int8 results as JavaScript numbers, for example.
### v2.11.0
- Add support for [application_name](https://github.com/brianc/node-postgres/pull/497)
### v2.10.0
- Add support for [the password file](http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html)
### v2.9.0
- Add better support for [unix domain socket](https://github.com/brianc/node-postgres/pull/487) connections
### v2.8.0
- Add support for parsing JSON[] and UUID[] result types
### v2.7.0
- Use single row mode in native bindings when available [@rpedela]
  - reduces memory consumption when handling row values in 'row' event
- Automatically bind buffer type parameters as binary [@eugeneware]
### v2.6.0
- Respect PGSSLMODE environment variable
### v2.5.0
- Ability to opt-in to int8 parsing via `pg.defaults.parseInt8 = true`
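For example:

```js
var pg = require('pg')
pg.defaults.parseInt8 = true // int8 (bigint) results are now parsed into JS numbers
```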
### v2.4.0
- Use eval in the result set parser to increase performance
### v2.3.0
- Remove built-in support for binary Int64 parsing.
_Due to the low usage & required compiled dependency this will be pushed into a 3rd party add-on_
### v2.2.0
- [Add support for escapeLiteral and escapeIdentifier in both JavaScript and the native bindings](https://github.com/brianc/node-postgres/pull/396)
### v2.1.0
- Add support for SSL connections in JavaScript driver
  - this means you can connect to heroku postgres from your local machine without the native bindings!
- [Add field metadata to result object](https://github.com/brianc/node-postgres/blob/master/test/integration/client/row-description-on-results-tests.js)
- [Add ability for rows to be returned as arrays instead of objects](https://github.com/brianc/node-postgres/blob/master/test/integration/client/results-as-array-tests.js)
### v2.0.0
- Properly handle various PostgreSQL to JavaScript type conversions to avoid data loss:
For more information see https://github.com/brianc/node-postgres/pull/353
If you are unhappy with these changes you can always [override the built in type parsing fairly easily](https://github.com/brianc/node-pg-parse-float).
### v1.3.0
- Make client_encoding configurable and optional
### v1.2.0
- return field metadata on result object: access via result.fields[i].name/dataTypeID
### v1.1.0
- built in support for `JSON` data type for PostgreSQL Server @ v9.2.0 or greater
### v1.0.0
- remove deprecated functionality
- Callback function passed to `pg.connect` now **requires** 3 arguments
Non-blocking PostgreSQL client for Node.js. Pure JavaScript and optional native libpq bindings.
## Monorepo
This repo is a monorepo which contains the core [pg](https://github.com/brianc/node-postgres/tree/master/packages/pg) module as well as a handful of related modules.
## Installation
```
npm install pg
```
## Examples
Connect to a postgres instance, run a query, and disconnect.
```javascript
var pg = require('pg');
//or native libpq bindings
//var pg = require('pg').native

var conString = "postgres://postgres:1234@localhost/postgres";

var client = new pg.Client(conString);
client.connect(function(err) {
  if(err) {
    return console.error('could not connect to postgres', err);
  }
  client.query('SELECT NOW() AS "theTime"', function(err, result) {
    if(err) {
      return console.error('error running query', err);
    }
    console.log(result.rows[0].theTime);
    //output: Tue Jan 15 2013 19:12:47 GMT-600 (CST)
    client.end();
  });
});
```
### Client pooling
Typically you will access the PostgreSQL server through a pool of clients. node-postgres ships with a built in pool to help get you up and running quickly.
```javascript
var pg = require('pg');
var conString = "postgres://postgres:1234@localhost/postgres";

pg.connect(conString, function(err, client, done) {
  if(err) {
    return console.error('error fetching client from pool', err);
  }
  client.query('SELECT $1::int AS numbor', ['1'], function(err, result) {
    //call `done()` to release the client back to the pool
    done();

    if(err) {
      return console.error('error running query', err);
    }
    console.log(result.rows[0].numbor);
    //output: 1
  });
});
```
## Documentation
Each package in this repo should have its own readme more focused on how to develop/contribute. For overall documentation on the project and the related modules managed by this repo please see:
The source repo for the documentation is available for contribution [here](https://github.com/brianc/node-postgres/tree/master/docs).
node-postgres contains a pure JavaScript driver and also exposes JavaScript bindings to libpq. You can use either interface. I personally use the JavaScript bindings as they are quite fast, and I like having everything implemented in JavaScript.
### Features
To use native libpq bindings replace `require('pg')` with `require('pg').native`.
- Pure JavaScript client and native libpq bindings share _the same API_
- Connection pooling
- Extensible JS ↔ PostgreSQL data-type coercion
- Supported PostgreSQL features
- Parameterized queries
- Named statements with query plan caching
- Async notifications with `LISTEN/NOTIFY`
- Bulk import & export with `COPY TO/COPY FROM`
The two share the same interface so __no other code changes should be required__. If you find yourself having to change code other than the require statement when switching from `pg` to `pg.native`, please report an issue.
## Contributing
__I love contributions.__
You are welcome to contribute via pull requests. If you need help getting the tests running locally feel free to email me or gchat me.
I will __happily__ accept your pull request if it:
- _has tests_
- looks reasonable
- does not break backwards compatibility
- satisfies jshint
Information about the testing processes is in the [wiki](https://github.com/brianc/node-postgres/wiki/Testing).
If you need help or have questions about constructing a pull request I'll be glad to help out as well.
## Extras
node-postgres is by design pretty light on abstractions. These are some handy modules we've been using over the years to complete the picture.
The entire list can be found on our [wiki](https://github.com/brianc/node-postgres/wiki/Extras).
## Support
node-postgres is free software. If you encounter a bug with the library please open an issue on the [GitHub repo](https://github.com/brianc/node-postgres). If you have questions unanswered by the documentation please open an issue pointing out how the documentation was unclear & I will do my best to make it better!
When you open an issue please provide:
- version of Node
- version of Postgres
- smallest possible snippet of code to reproduce the problem
Usually I'll pop the code into the repo as a test. Hopefully the test fails. Then I make the test pass. Then everyone's happy!
You can also follow me [@briancarlson](https://twitter.com/briancarlson) if that's your thing. I try to always announce noteworthy changes & developments with node-postgres on Twitter.
## Sponsorship :two_hearts:
node-postgres's continued development has been made possible in part by generous financial support from [the community](https://github.com/brianc/node-postgres/blob/master/SPONSORS.md).
If you or your company are benefiting from node-postgres and would like to help keep the project financially sustainable [please consider supporting](https://github.com/sponsors/brianc) its development.
### Featured sponsor
Special thanks to [medplum](https://medplum.com) for their generous and thoughtful support of node-postgres!
If your change involves breaking backwards compatibility, please point that out in the pull request & we can discuss & plan when and how to release it and what type of documentation or communication it will require.
### Setting up for local development
1. Clone the repo
2. Ensure you have installed libpq-dev in your system.
3. From your workspace root run `yarn` and then `yarn lerna bootstrap`
4. Ensure you have a PostgreSQL instance running with SSL enabled and an empty database for tests
5. Ensure you have the proper environment variables configured for connecting to the instance
6. Run `yarn test` to run all the tests
## Troubleshooting and FAQ
The causes and solutions to common errors can be found among the [Frequently Asked Questions (FAQ)](https://github.com/brianc/node-postgres/wiki/FAQ)
## License
Copyright (c) 2010-2020 Brian Carlson (brian.m.carlson@gmail.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
node-postgres is made possible by the helpful contributors from the community as well as the following generous supporters on [GitHub Sponsors](https://github.com/sponsors/brianc) and [Patreon](https://www.patreon.com/node_postgres).
This is the documentation for node-postgres which is currently hosted at [https://node-postgres.com](https://node-postgres.com).
## Development
To run the documentation locally, you need to have [Node.js](https://nodejs.org) installed. Then, you can clone the repository and install the dependencies:
```bash
cd docs
yarn
```
Once you've installed the deps, you can run the development server:
```bash
yarn dev
```
This will start a local server at [http://localhost:3000](http://localhost:3000) where you can view the documentation and see your changes.
`pg@8.0` is [being released](https://github.com/brianc/node-postgres/pull/2117) which contains a handful of breaking changes.
I will outline each breaking change here and try to give some historical context on them. Most of them are small and subtle and likely won't impact you; **however**, there is one larger breaking change you will likely run into:
---
- Support all `tls.connect` [options](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback) being passed to the client/pool constructor under the `ssl` option.
Previously we whitelisted the parameters passed here and did slight massaging of some of them. The main **breaking** change here is that now if you do this:
```js
const client = new Client({ ssl: true })
```
Now we will use the default ssl options to `tls.connect`, which includes `rejectUnauthorized` being enabled. This means
your connection attempt may fail if you are using a self-signed cert. To use the old behavior you should do this:
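```js
const client = new Client({ ssl: { rejectUnauthorized: false } })
```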
This makes pg a bit more secure "out of the box" while still enabling you to opt in to the old behavior.
---
The rest of the changes are relatively minor & you likely won't need to do anything, but good to be aware nonetheless!
- change default database name
If a database name is not specified, available in the environment at `PGDATABASE`, or available at `pg.defaults`, we used to use the username of the process user as the name of the database. Now we will use the `user` property supplied to the client as the database name, if it exists. What this means is this:
```jsx
new Client({
user: 'foo',
})
```
`pg@7.x` will default the database name to the _process_ user. `pg@8.x` will use the `user` property supplied to the client. If you have not supplied `user` to the client, and it isn't available through any of its existing lookup mechanisms (environment variables, pg.defaults) then it will still use the process user for the database name.
- drop support for versions of node older than 8.0
Node@6.0 has been out of LTS for quite some time now, and I've removed it from our test matrix. `pg@8.0` _may_ still work on older versions of node, but it isn't a goal of the project anymore. Node@8.0 is actually no longer in the LTS support line, but pg will continue to test against and support 8.0 until there is a compelling reason to drop support for it. Any security vulnerability issues which come up I will back-port fixes to the `pg@7.x` line and do a release, but any other fixes or improvements will not be back ported.
- prevent password from being logged accidentally
`pg@8.0` makes the password field on the pool and client non-enumerable. This means when you do `console.log(client)` you won't have your database password printed out unintentionally. You can still do `console.log(client.password)` if you really want to see it!
- make `pg.native` non-enumerable
You can use `pg.native.Client` to access the native client. The first time you access the `pg.native` getter it imports the native bindings...which must be installed. In some cases (such as webpacking the pg code for lambda deployment) the `.native` property would be traversed and trigger an import of the native bindings as a side-effect. Making this property non-enumerable will fix this issue. An easy fix, but it's technically a breaking change in cases where people _are_ relying on this side effect for any reason.
- make `pg.Pool` an es6 class
This makes extending `pg.Pool` possible. Previously it was not a "proper" es6 class and `class MyPool extends pg.Pool` wouldn't work.
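A sketch of what this enables (the subclass here is illustrative):

```js
class LoggingPool extends pg.Pool {
  query(...args) {
    // log the query text, then defer to the normal pool behavior
    console.log('QUERY:', args[0])
    return super.query(...args)
  }
}
```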
- make `Notice` messages _not_ an instance of a JavaScript error
The code path for parsing `notice` and `error` messages from the postgres backend is the same. Previously, a JavaScript `Error` instance was created for _both_ of these message types. Now, only actual `errors` from the postgres backend will be an instance of an `Error`. The _shape_ and _properties_ of the two messages did not change outside of this.
- monorepo
While not technically a breaking change for the module itself, I have begun the process of [consolidating](https://github.com/brianc/node-pg-query-stream) [separate](https://github.com/brianc/node-pg-cursor/) [repos](https://github.com/brianc/node-pg-pool) into the main [repo](https://github.com/brianc/node-postgres) and converted it into a monorepo managed by lerna. This will help me stay on top of issues better (it was hard to bounce between 3-4 separate repos) and coordinate bug fixes and changes between dependent modules.
Thanks for reading that! pg tries to be super pedantic about not breaking backwards-compatibility in non semver major releases....even for seemingly small things. If you ever notice a breaking change on a semver minor/patch release please stop by the [repo](https://github.com/brianc/node-postgres) and open an issue!
_If you find `pg` valuable to you or your business please consider [supporting](http://github.com/sponsors/brianc) its continued development! Big performance improvements, typescript, better docs, query pipelining and more are all in the works!_
## 2019-07-18
### New documentation
After a _very_ long time on my todo list I've ported the docs from my old hand-rolled webapp running on route53 + elb + ec2 + dokku (I know, I went overboard!) to [gatsby](https://www.gatsbyjs.org/) hosted on [netlify](https://www.netlify.com/) which is _so_ much easier to manage. I've released the code at [https://github.com/brianc/node-postgres-docs](https://github.com/brianc/node-postgres-docs) and invite your contributions! Let's make this documentation better together. Any time changes are merged to master on the documentation repo it will automatically deploy.
If you see an error in the docs, big or small, use the "edit on GitHub" button to edit the page & submit a pull request right there. I'll get a new version out ASAP with your changes! If you want to add new pages of documentation open an issue if you need guidance, and I'll help you get started.
I want to extend a special **thank you** to all the [supporters](https://github.com/brianc/node-postgres/blob/master/SPONSORS.md) and [contributors](https://github.com/brianc/node-postgres/graphs/contributors) to the project that have helped keep me going through times of burnout or life "getting in the way." ❤️
It's been quite a journey, and I look forward to continuing it for as long as I can provide value to all y'all. 🤠
## 2017-08-12
### code execution vulnerability
Today [@sehrope](https://github.com/sehrope) found and reported a code execution vulnerability in node-postgres. This affects all versions from `pg@2.x` through `pg@7.1.0`.
I have published a fix on the tip of each major version branch of all affected versions as well as a fix on each minor version branch of `pg@6.x` and `pg@7.x`:
### Fixes
The following versions have been published to npm & contain a patch to fix the vulnerability:
```
pg@2.11.2
pg@3.6.4
pg@4.5.7
pg@5.2.1
pg@6.0.5
pg@6.1.6
pg@6.2.5
pg@6.3.3
pg@6.4.2
pg@7.0.3
pg@7.1.2
```
### Example
To demonstrate the issue & see if you are vulnerable execute the following in node:
```js
import pg from 'pg'
const { Client } = pg
const client = new Client()
client.connect()
const sql = `SELECT 1 AS "\\'/*", 2 AS "\\'*/\n + console.log(process.env)] = null;\n//"`
client.query(sql, (err, res) => {
client.end()
})
```
You will see your environment variables printed to your console. An attacker can use this exploit to execute any arbitrary node code within your process.
### Impact
This vulnerability _likely_ does not impact you if you are connecting to a database you control and not executing user-supplied sql. Still, you should **absolutely** upgrade to the most recent patch version as soon as possible to be safe.
Two attack vectors we quickly thought of:
- 1 - executing unsafe, user-supplied sql which contains a malicious column name like the one above.
- 2 - connecting to an untrusted database and executing a query which returns results where any of the column names are malicious.
### Support
I have created [an issue](https://github.com/brianc/node-postgres/issues/1408) you can use to discuss the vulnerability with me or ask questions, and I have reported this issue [on twitter](https://twitter.com/briancarlson) and directly to Heroku and [nodesecurity.io](https://nodesecurity.io/).
I take security very seriously. If you or your company benefit from node-postgres **[please sponsor my work](https://www.patreon.com/node_postgres)**: this type of issue is one of the many things I am responsible for, and I want to be able to continue to tirelessly provide a world-class PostgreSQL experience in node for years to come.
Every field of the `config` object is entirely optional. A `Client` instance will use [environment variables](/features/connecting#environment-variables) for all missing values.
```ts
type Config = {
  user?: string, // default process.env.PGUSER || process.env.USER
  password?: string or function, // default process.env.PGPASSWORD
  host?: string, // default process.env.PGHOST
  port?: number, // default process.env.PGPORT
  database?: string, // default process.env.PGDATABASE || user
  connectionString?: string, // e.g. postgres://user:password@host:5432/database
  ssl?: any, // passed directly to node.TLSSocket, supports all tls.connect options
  types?: any, // custom type parsers
  statement_timeout?: number, // number of milliseconds before a statement in query will time out, default is no timeout
  query_timeout?: number, // number of milliseconds before a query call will timeout, default is no timeout
  lock_timeout?: number, // number of milliseconds a query is allowed to be in a lock state before it's cancelled due to lock timeout
  application_name?: string, // the name of the application that created this Client instance
  connectionTimeoutMillis?: number, // number of milliseconds to wait for connection, default is no timeout
  keepAliveInitialDelayMillis?: number, // set the initial delay before the first keepalive probe is sent on an idle socket
  idle_in_transaction_session_timeout?: number, // number of milliseconds before terminating any session with an open idle transaction, default is no timeout
  client_encoding?: string, // specifies the character set encoding that the database uses for sending data to the client
  fallback_application_name?: string, // provide an application name to use if application_name is not set
  options?: string, // command-line options to be sent to the server
}
```
example to create a client with specific connection information:
```js
import { Client } from 'pg'
const client = new Client({
user: 'database-user',
password: 'secretpassword!!',
host: 'my.database-server.com',
port: 5334,
database: 'database-name',
})
```
## client.connect
```js
import { Client } from 'pg'
const client = new Client()
await client.connect()
```
## client.query
### QueryConfig
You can pass an object to `client.query` with the signature of:
```ts
type QueryConfig = {
  // the raw query text
  text: string;
  // an array of query parameters
  values?: Array<any>;
  // name of the query - used for prepared statements
  name?: string;
  // by default rows come out as a key/value pair for each row
  // pass the string 'array' here to receive rows as an array of values
  rowMode?: string;
}
```
If you pass a `name` parameter to the `client.query` method, the client will create a [prepared statement](/features/queries#prepared-statements).
```js
const query = {
  name: 'get-name',
  text: 'SELECT $1::text',
  values: ['brianc'],
  rowMode: 'array',
}
const result = await client.query(query)
console.log(result.rows) // [ [ 'brianc' ] ]
await client.end()
```
**client.query with a `Submittable`**
If you pass an object to `client.query` and the object has a `.submit` function on it, the client will pass its PostgreSQL server connection to the object and delegate query dispatching to the supplied object. This is an advanced feature mostly intended for library authors. It is incidentally also currently how the callback and promise based queries above are handled internally, but this is subject to change. It is also how [pg-cursor](https://github.com/brianc/node-pg-cursor) and [pg-query-stream](https://github.com/brianc/node-pg-query-stream) work.
```js
import { Query } from 'pg'
const query = new Query('select $1::text as name', ['brianc'])
const result = client.query(query)
assert(query === result) // true
query.on('row', (row) => {
  console.log('row!', row) // { name: 'brianc' }
})
query.on('end', () => {
  console.log('query done')
})
query.on('error', (err) => {
  console.error(err.stack)
})
```
---
## client.end
Disconnects the client from the PostgreSQL server.
```js
await client.end()
console.log('client has disconnected')
```
## events
### error
```ts
client.on('error', (err: Error) => void) => void
```
When the client is in the process of connecting, dispatching a query, or disconnecting it will catch and forward errors from the PostgreSQL server to the respective `client.connect`, `client.query`, or `client.end` promise; however, the client maintains a long-lived connection to the PostgreSQL back-end and due to network partitions, back-end crashes, fail-overs, etc the client can (and over a long enough time period _will_) eventually be disconnected while it is idle. To handle this you may want to attach an error listener to a client to catch errors. Here's a contrived example:
```js
const client = new pg.Client()
client.connect()
client.on('error', (err) => {
  console.error('something bad has happened!', err.stack)
})
// walk over to server, unplug network cable
// process output: 'something bad has happened!' followed by stacktrace :P
```
### end
```ts
client.on('end') => void
```
When the client disconnects from the PostgreSQL server it will emit an end event once.
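For example:

```js
client.on('end', () => {
  console.log('client has disconnected')
})
```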
A cursor can be used to efficiently read through large result sets without loading the entire result-set into memory ahead of time. It's useful to simulate a 'streaming' style read of data, or exit early from a large result set. The cursor is passed to `client.query` and is dispatched internally in a way very similar to how normal queries are sent, but the API it presents for consuming the result set is different.
Read `rowCount` rows from the cursor instance. The callback will be called when the rows are available, loaded into memory, parsed, and converted to JavaScript types.
If the cursor has read to the end of the result set, all subsequent calls to `cursor#read` will return a 0-length array of rows; calling `read` on a cursor that has already read to the end is safe and simply returns no rows.
Here is an example of reading to the end of a cursor:
```js
import { Pool } from 'pg'
import Cursor from 'pg-cursor'
const pool = new Pool()
const client = await pool.connect()
const cursor = client.query(new Cursor('select * from generate_series(0, 5)'))
let rows = await cursor.read(100)
assert(rows.length == 6)
rows = await cursor.read(100)
assert(rows.length == 0)
```
## close
### `cursor.close() => Promise<void>`
Used to close the cursor early. If you want to stop reading from the cursor before you get all of the rows returned, call this.
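For example, reusing the client and `Cursor` from the example above to stop early (a sketch):

```js
const cursor = client.query(new Cursor('select * from generate_series(0, 1000)'))
const rows = await cursor.read(10) // read only the first batch
await cursor.close() // stop here; the remaining rows are never fetched
```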
The pool is initially created empty and will create new clients lazily as they are needed. Every field of the `config` object is entirely optional. The config passed to the pool is also passed to every client instance within the pool when the pool creates that client.
```ts
type Config = {
  // all valid client config options are also valid here
  // in addition here are the pool specific configuration parameters:

  // number of milliseconds to wait before timing out when connecting a new client
  // by default this is 0 which means no timeout
  connectionTimeoutMillis?: number

  // number of milliseconds a client must sit idle in the pool and not be checked out
  // before it is disconnected from the backend and discarded
  // default is 10000 (10 seconds) - set to 0 to disable auto-disconnection of idle clients
  idleTimeoutMillis?: number

  // maximum number of clients the pool should contain
  // by default this is set to 10. There is some nuance to setting the maximum size of your pool.
  // see https://node-postgres.com/guides/pool-sizing for more information
  max?: number

  // minimum number of clients the pool should hold on to and _not_ destroy with the idleTimeoutMillis
  // this can be useful if you get very bursty traffic and want to keep a few clients around.
  // note: currently the pool will not automatically create and connect new clients up to the min; it
  // will only refrain from evicting and closing idle clients once the pool size reaches the min count.
  // the default is 0 which disables this behavior.
  min?: number

  // Default behavior is the pool will keep clients open & connected to the backend
  // until idleTimeoutMillis expire for each client and node will maintain a ref
  // to the socket on the client, keeping the event loop alive until all clients are closed
  // after being idle or the pool is manually shutdown with `pool.end()`.
  //
  // Setting `allowExitOnIdle: true` in the config will allow the node event loop to exit
  // as soon as all clients in the pool are idle, even if their socket is still open
  // to the postgres server. This can be handy in scripts & tests
  // where you don't want to wait for your clients to go idle before your process exits.
  allowExitOnIdle?: boolean

  // Sets a max overall life for the connection.
  // A value of 60 would evict connections that have been around for over 60 seconds,
  // regardless of whether they are idle. It's useful to force rotation of connection pools through
  // middleware so that you can rotate the underlying servers. The default is disabled (value of zero)
  maxLifetimeSeconds?: number
}
```
example to create a new pool with configuration:
```js
import { Pool } from 'pg'
const pool = new Pool({
host: 'localhost',
user: 'database-user',
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 2000,
maxLifetimeSeconds: 60
})
```
## pool.query
Often we only need to run a single query on the database, so as a convenience the pool has a method to run a query on the first available idle client and return its result.
```js
import { Pool } from 'pg'
const pool = new Pool()
const result = await pool.query('SELECT $1::text as name', ['brianc'])
console.log(result.rows[0].name) // brianc
```
Notice in the example above there is no need to check out or release a client. The pool is doing the acquiring and releasing internally. I find `pool.query` to be a handy shortcut in many situations and I use it exclusively unless I need a transaction.
<Alert>
<div>
Do <strong>not</strong> use <code>pool.query</code> if you are using a transaction.
</div>
The pool will dispatch every query passed to pool.query on the first available idle client. Transactions within PostgreSQL
are scoped to a single client and so dispatching individual queries within a single transaction across multiple, random
clients will cause big problems in your app and not work. For more info please read <a href="/features/transactions">
transactions
</a>.
</Alert>
## pool.connect
`pool.connect() => Promise<pg.Client>`
Acquires a client from the pool.
- If there are idle clients in the pool one will be returned to the callback on `process.nextTick`.
- If the pool is not full but all current clients are checked out a new client will be created & returned to this callback.
- If the pool is 'full' and all clients are currently checked out, requests will wait in a FIFO queue until a client becomes available by being released back to the pool.
```js
import { Pool } from 'pg'
const pool = new Pool()
const client = await pool.connect()
await client.query('SELECT NOW()')
client.release()
```
### releasing clients
`client.release(destroy?: boolean) => void`
Client instances returned from `pool.connect` will have a `release` method which will release them from the pool.
The `release` method on an acquired client returns it back to the pool. If you pass a truthy value in the `destroy` parameter, instead of releasing the client to the pool, the pool will be instructed to disconnect and destroy this client, leaving a space within itself for a new client.
```js
import { Pool } from 'pg'
const pool = new Pool()
// check out a single client
const client = await pool.connect()
// release the client
client.release()
```
```js
import { Pool } from 'pg'
const pool = new Pool()
assert(pool.totalCount === 0)
assert(pool.idleCount === 0)
const client = await pool.connect()
await client.query('SELECT NOW()')
assert(pool.totalCount === 1)
assert(pool.idleCount === 0)
// tell the pool to destroy this client
await client.release(true)
assert(pool.idleCount === 0)
assert(pool.totalCount === 0)
```
<Alert>
<div>
You <strong>must</strong> release a client when you are finished with it.
</div>
If you forget to release the client then your application will quickly exhaust available, idle clients in the pool and
all further calls to <code>pool.connect</code> will timeout with an error or hang indefinitely if you have <code>
connectionTimeoutMillis
</code> configured to 0.
</Alert>
## pool.end
Calling `pool.end` will drain the pool of all active clients, disconnect them, and shut down any internal timers in the pool. It is common to call this at the end of a script using the pool or when your process is attempting to shut down cleanly.
```js
// again both promises and callbacks are supported:
import { Pool } from 'pg'
const pool = new Pool()
await pool.end()
```
## properties
`pool.totalCount: number`
The total number of clients existing within the pool.
`pool.idleCount: number`
The number of clients which are not checked out but are currently idle in the pool.
`pool.waitingCount: number`
The number of queued requests waiting on a client when all clients are checked out. It can be helpful to monitor this number to see if you need to adjust the size of the pool.
## events
`Pool` instances are also instances of [`EventEmitter`](https://nodejs.org/api/events.html).
Whenever the pool establishes a new client connection to the PostgreSQL backend it will emit the `connect` event with the newly connected client. This presents an opportunity for you to run setup commands on a client.
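For example, here's a minimal sketch of a `connect` listener (the `SET DATESTYLE` command is just an illustrative setup command):
```js
import pg from 'pg'
const { Pool } = pg
const pool = new Pool()

pool.on('connect', (client) => {
  // run any one-time setup you need on each newly connected client
  client.query('SET DATESTYLE = iso, mdy')
})
```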
When a client is sitting idly in the pool it can still emit errors because it is connected to a live backend.
If the backend goes down or a network partition is encountered all the idle, connected clients in your application will emit an error _through_ the pool's error event emitter.
The error listener is passed the error as the first argument and the client upon which the error occurred as the 2nd argument. The client will be automatically terminated and removed from the pool; it is only passed to the error handler in case you want to inspect it.
<Alert>
<div>You probably want to add an event listener to the pool to catch background errors!</div>
Just like other event emitters, if a pool emits an <code>error</code> event and no listeners are added, node will emit an
uncaught error and potentially crash your node process.
</Alert>
The `pg.Result` shape is returned for every successful query.
<div className="alert alert-info">note: you cannot instantiate this directly</div>
## properties
### `result.rows: Array<any>`
Every result will have a rows array. If no rows are returned the array will be empty. Otherwise the array will contain one item for each row returned from the query. By default node-postgres creates a map from the name to value of each column, giving you a json-like object back for each row.
### `result.fields: Array<FieldInfo>`
Every result will have a fields array. This array contains the `name` and `dataTypeID` of each field in the result. The fields are ordered the same as the columns in the result, which you can see clearly when using `rowMode: 'array'` for the query:
```js
import pg from 'pg'
const { Pool } = pg
const pool = new Pool()
const client = await pool.connect()
const result = await client.query({
  rowMode: 'array',
  text: 'SELECT 1 as one, 2 as two;',
})
console.log(result.fields[0].name) // one
console.log(result.fields[1].name) // two
console.log(result.rows) // [ [ 1, 2 ] ]
await client.end()
```
### `result.command: string`
The command type last executed: `INSERT` `UPDATE` `CREATE` `SELECT` etc.
### `result.rowCount: int | null`
The number of rows processed by the last command. Can be `null` for commands that never affect rows, such as `LOCK`. More specifically, some commands, including `LOCK`, only return a command tag of the form `COMMAND`, without a `[ROWS]` field to parse; for such commands `rowCount` will be `null`.
_note: this does not reflect the number of rows **returned** from a query. e.g. an update statement could update many rows (a high `result.rowCount` value) but `result.rows.length` would be zero. To check for an empty query response on a `SELECT` query use `result.rows.length === 0`_.
[@sehrope](https://github.com/brianc/node-postgres/issues/2182#issuecomment-620553915) has a good explanation:
The `rowCount` is populated from the command tag supplied by the PostgreSQL server. It's generally of the form: `COMMAND [OID] [ROWS]`
For DML commands (INSERT, UPDATE, etc), it reflects how many rows the server modified to process the command. For SELECT or COPY commands it reflects how many rows were retrieved or copied. More info on the specifics here: https://www.postgresql.org/docs/current/protocol-message-formats.html (search for CommandComplete for the message type)
The note in the docs about the difference is because that value is controlled by the server. It's possible for a non-standard server (ex: PostgreSQL fork) or a server version in the future to provide different information in some situations so it'd be best not to rely on it to assume that the rows array length matches the `rowCount`. It's fine to use it for DML counts though.
<Alert>
**Note**: When using an identifier that is the result of this function in an operation like `CREATE TABLE ${escapeIdentifier(identifier)}`, the table that is created will be CASE SENSITIVE. If you use any capital letters in the escaped identifier, you must always refer to the created table with the identifier quoted, like `SELECT * FROM "MyCaseSensitiveTable"`; queries like `SELECT * FROM MyCaseSensitiveTable` will fail with a "relation does not exist" error because unquoted identifiers are folded to lower case.
</Alert>
### pg.escapeLiteral
<Alert>
**Note**: Instead of manually escaping SQL literals, it is recommended to use parameterized queries. Refer to [parameterized queries](/features/queries#parameterized-query) and the [client.query](/apis/client#clientquery) API for more information.
</Alert>
Escapes a string as a [SQL literal](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS).
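A quick sketch of usage (the `name` value here is just an example input with an embedded quote):
```js
import pg from 'pg'
const { escapeLiteral } = pg

const name = "O'Reilly" // note the embedded quote
const text = `SELECT * FROM users WHERE name = ${escapeLiteral(name)}`
// text: SELECT * FROM users WHERE name = 'O''Reilly'
```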
`async` / `await` is the preferred way to write async code these days with node, but callbacks are supported in the `pg` module and the `pg-pool` module. To use them, pass a callback function as the last argument to the following methods & it will be called and a promise will not be returned:
```js
const { Pool, Client } = require('pg')
// pool
const pool = new Pool()
// run a query on an available client
pool.query('SELECT NOW()', (err, res) => {
  console.log(err, res)
})
// check out a client to do something more complex like a transaction
pool.connect((err, client, release) => {
  client.query('SELECT NOW()', (err, res) => {
    release()
    console.log(err, res)
  })
})
```
node-postgres uses the same [environment variables](https://www.postgresql.org/docs/9.1/static/libpq-envars.html) as libpq and psql to connect to a PostgreSQL server. Both individual clients & pools will use these environment variables. Here's a tiny program connecting node.js to the PostgreSQL server:
```js
import pg from 'pg'
const { Pool, Client } = pg
// pools will use environment variables
// for connection information
const pool = new Pool()
// you can also use async/await
const res = await pool.query('SELECT NOW()')
await pool.end()
// clients will also use environment variables
// for connection information
const client = new Client()
await client.connect()
const res2 = await client.query('SELECT NOW()')
await client.end()
```
To run the above program and specify which database to connect to we can invoke it like so:
```sh
$ PGUSER=dbuser \
PGPASSWORD=secretpassword \
PGHOST=database.server.com \
PGPORT=3211 \
PGDATABASE=mydb \
node script.js
```
This allows us to write our programs without having to specify connection information in the program and lets us reuse them to connect to different databases without having to modify the code.
The default values for the environment variables used are:
```
PGUSER=process.env.USER
PGPASSWORD=null
PGHOST=localhost
PGPORT=5432
PGDATABASE=process.env.USER
```
## Programmatic
node-postgres also supports configuring a pool or client programmatically with connection information. Here's our same script from above modified to use programmatic (hard-coded in this case) values. This can be useful if your application already has a way to manage config values or you don't want to use environment variables.
```js
import pg from 'pg'
const { Pool, Client } = pg
const pool = new Pool({
  user: 'dbuser',
  password: 'secretpassword',
  host: 'database.server.com',
  port: 3211,
  database: 'mydb',
})
console.log(await pool.query('SELECT NOW()'))
const client = new Client({
  user: 'dbuser',
  password: 'secretpassword',
  host: 'database.server.com',
  port: 3211,
  database: 'mydb',
})
await client.connect()
console.log(await client.query('SELECT NOW()'))
await client.end()
```
Many cloud providers include alternative methods for connecting to database instances using short-lived authentication tokens. node-postgres supports dynamic passwords via a callback function, either synchronous or asynchronous. The callback function must resolve to a string.
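A minimal sketch, assuming a hypothetical `fetchAuthToken` helper that retrieves a short-lived token from your cloud provider:
```js
import pg from 'pg'
const { Pool } = pg

// hypothetical helper: ask your cloud provider for a short-lived token
async function fetchAuthToken() {
  return 'a-short-lived-token'
}

const pool = new Pool({
  host: 'database.server.com',
  database: 'mydb',
  user: 'dbuser',
  // called whenever the pool opens a new connection; must resolve to a string
  password: fetchAuthToken,
})
```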
Connections to unix sockets can also be made. This can be useful on distros like Ubuntu, where authentication is managed via the socket connection instead of a password.
```js
import pg from 'pg'
const { Client } = pg
const client = new Client({
  user: 'username',
  password: 'password',
  host: '/cloudsql/myproject:zone:mydb',
  database: 'database_name',
})
```
## Connection URI
You can initialize both a pool and a client with a connection string URI as well. This is common in environments like Heroku where the database connection string is supplied to your application dyno through an environment variable. Connection string parsing brought to you by [pg-connection-string](https://github.com/brianc/node-postgres/tree/master/packages/pg-connection-string).
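For example, with a made-up connection string:
```js
import pg from 'pg'
const { Pool, Client } = pg
const connectionString = 'postgresql://dbuser:secretpassword@database.server.com:3211/mydb'

const pool = new Pool({ connectionString })
const client = new Client({ connectionString })
```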
As of v8.15.x node-postgres supports the __ECMAScript Module__ (ESM) format. This means you can use `import` statements (e.g. `import pg from 'pg'`) instead of `require`.
CommonJS modules are still supported. The ESM format is an opt-in feature and will not affect existing codebases that use CommonJS.
The docs have been changed to show ESM usage, but in a CommonJS context you can still use the same code, you just need to change the import format.
### CommonJS Usage
If you're using CommonJS, you can use the following code to import the `pg` module:
```js
const pg = require('pg')
const { Client } = pg
// etc...
```
### ESM Usage
If you're using ESM, you can use the following code to import the `pg` module:
```js
import { Client } from 'pg'
// etc...
```
Previously if you were using ESM you would have to use the following code:
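```js
import pg from 'pg'
const { Client } = pg
// etc...
```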
Native bindings between node.js & [libpq](https://www.postgresql.org/docs/9.5/static/libpq.html) are provided by the [node-pg-native](https://github.com/brianc/node-pg-native) package. node-postgres can consume this package & use the native bindings to access the PostgreSQL server while giving you the same interface that is used with the JavaScript version of the library.
To use the native bindings first you'll need to install them:
```sh
$ npm install pg pg-native
```
Once `pg-native` is installed instead of requiring a `Client` or `Pool` constructor from `pg` you do the following:
```js
import pg from 'pg'
const { native } = pg
const { Client, Pool } = native
```
When you access the `.native` property on `'pg'` it will automatically require the `pg-native` package and wrap it in the same API.
<div class='alert alert-warning'>
Care has been taken to normalize between the two, but there might still be edge cases where things behave subtly differently due to the nature of using libpq over handling the binary protocol directly in JavaScript, so it's recommended you choose either the JavaScript driver or the native bindings for both development and production. For what it's worth: I use the pure JavaScript driver because the JavaScript driver is more portable (doesn't need a compiler), and the pure JavaScript driver is <em>plenty</em> fast.
</div>
Some of the modules using advanced features of PostgreSQL such as [pg-query-stream](https://github.com/brianc/node-pg-query-stream), [pg-cursor](https://github.com/brianc/node-pg-cursor), and [pg-copy-streams](https://github.com/brianc/node-pg-copy-streams) need to operate directly on the binary stream and are therefore incompatible with the native bindings.
If you're working on a web application or other software which makes frequent queries you'll want to use a connection pool.
The easiest and by far most common way to use node-postgres is through a connection pool.
## Why?
- Connecting a new client to the PostgreSQL server requires a handshake which can take 20-30 milliseconds. During this time passwords are negotiated, SSL may be established, and configuration information is shared with the client & server. Incurring this cost _every time_ we want to execute a query would substantially slow down our application.
- The PostgreSQL server can only handle a [limited number of clients at a time](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). Depending on the available memory of your PostgreSQL server you may even crash the server if you connect an unbounded number of clients. _note: I have crashed a large production PostgreSQL server instance in RDS by opening new clients and never disconnecting them in a python application long ago. It was not fun._
- PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other. No good!
### Good news
node-postgres ships with built-in connection pooling via the [pg-pool](/apis/pool) module.
## Examples
The client pool allows you to have a reusable pool of clients you can check out, use, and return. You generally want a limited number of these in your application and usually just 1. Creating an unbounded number of pools defeats the purpose of pooling at all.
### Checkout, use, and return
```js
import pg from 'pg'
const { Pool } = pg
const pool = new Pool()
// the pool will emit an error on behalf of any idle clients
// it contains if a backend error or network partition happens
pool.on('error', (err, client) => {
  console.error('Unexpected error on idle client', err)
  process.exit(-1)
})
const client = await pool.connect()
const res = await client.query('SELECT * FROM users WHERE id = $1', [1])
console.log(res.rows[0])
client.release()
```
<Alert>
<div>
You must <b>always</b> return the client to the pool if you successfully check it out, regardless of whether or not
there was an error with the queries you ran on the client.
</div>
If you don't release the client your application will leak them and eventually your pool will be empty forever and all
future requests to check out a client from the pool will wait forever.
</Alert>
### Single query
If you don't need a transaction or you just need to run a single query, the pool has a convenience method to run a query on any available client in the pool. This is the preferred way to query with node-postgres if you can as it removes the risk of leaking a client.
```js
import pg from 'pg'
const { Pool } = pg
const pool = new Pool()
const res = await pool.query('SELECT * FROM users WHERE id = $1', [1])
console.log('user:', res.rows[0])
```
### Shutdown
To shut down a pool call `pool.end()` on the pool. This will wait for all checked-out clients to be returned and then shut down all the clients and the pool timers.
```js
import pg from 'pg'
const { Pool } = pg
const pool = new Pool()
console.log('starting async query')
const result = await pool.query('SELECT NOW()')
console.log('async query finished')
console.log('starting callback query')
pool.query('SELECT NOW()', (err, res) => {
console.log('callback query finished')
})
console.log('calling end')
await pool.end()
console.log('pool has drained')
```
The output of the above will be:
```
starting async query
async query finished
starting callback query
calling end
callback query finished
pool has drained
```
<Info>
The pool will return errors when attempting to check out a client after you've called pool.end() on the pool.
</Info>
For the sake of brevity I am using the `client.query` method instead of the `pool.query` method - both methods support the same API. In fact, `pool.query` delegates directly to `client.query` internally.
## Text only
If your query has no parameters you do not need to pass them to the query method:
```js
await client.query('SELECT NOW() as now')
```
## Parameterized query
If you are passing parameters to your queries you will want to avoid string concatenating parameters into the query text directly. This can (and often does) lead to sql injection vulnerabilities. node-postgres supports parameterized queries, passing your query text _unaltered_ as well as your parameters to the PostgreSQL server where the parameters are safely substituted into the query with battle-tested parameter substitution code within the server itself.
```js
const text = 'INSERT INTO users(name, email) VALUES($1, $2) RETURNING *'
const values = ['brianc', 'brian.m.carlson@gmail.com']
const res = await client.query(text, values)
console.log(res.rows[0])
```
<div class='message is-warning'>
PostgreSQL does not support parameters for identifiers. If you need dynamic database, schema, table, or column names (e.g. in DDL statements), use the [pg-format](https://www.npmjs.com/package/pg-format) package to escape these values so you do not end up with SQL injection!
</div>
Parameters passed as the second argument to `query()` will be converted to raw data types using the following rules:
**null and undefined**
If parameterizing `null` and `undefined` then both will be converted to `null`.
**Date**
Custom conversion to a UTC date string.
**Buffer**
Buffer instances are unchanged.
**Array**
Converted to a string that describes a Postgres array. Each array item is recursively converted using the rules described here.
**Object**
If a parameterized value has the method `toPostgres` then it will be called and its return value will be used in the query.
The signature of `toPostgres` is the following:
```
toPostgres (prepareValue: (value) => any): any
```
The `prepareValue` function provided can be used to convert nested types to raw data types suitable for the database.
Otherwise if no `toPostgres` method is defined then `JSON.stringify` is called on the parameterized value.
**Everything else**
All other parameterized values will be converted by calling `value.toString` on the value.
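As a sketch of the `toPostgres` hook described above (the `Point` class here is hypothetical):
```js
class Point {
  constructor(x, y) {
    this.x = x
    this.y = y
  }
  // node-postgres calls this instead of JSON.stringify when serializing the parameter;
  // prepareValue can be used to convert any nested values to raw data types
  toPostgres(prepareValue) {
    return `(${this.x},${this.y})`
  }
}

// the point will be sent to the server as the string '(1,2)'
await client.query('INSERT INTO shapes(center) VALUES($1)', [new Point(1, 2)])
```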
## Query config object
`pool.query` and `client.query` both support taking a config object as an argument instead of taking a string and optional array of parameters. The same example above could also be performed like so:
```js
const query = {
  text: 'INSERT INTO users(name, email) VALUES($1, $2) RETURNING *',
  values: ['brianc', 'brian.m.carlson@gmail.com'],
}
const res = await client.query(query)
console.log(res.rows[0])
```
The query config object allows for a few more advanced scenarios:
### Prepared statements
PostgreSQL has the concept of a [prepared statement](https://www.postgresql.org/docs/9.3/static/sql-prepare.html). node-postgres supports this by supplying a `name` parameter to the query config object. If you supply a `name` parameter the query execution plan will be cached on the PostgreSQL server on a **per connection basis**. This means if you use two different connections each will have to parse & plan the query once. node-postgres handles this transparently for you: a client only requests a query to be parsed the first time that particular client has seen that query name:
```js
const query = {
  // give the query a unique name
  name: 'fetch-user',
  text: 'SELECT * FROM users WHERE id = $1',
  values: [1],
}
const res = await client.query(query)
console.log(res.rows[0])
```
In the above example the first time the client sees a query with the name `'fetch-user'` it will send a 'parse' request to the PostgreSQL server & execute the query as normal. The second time, it will skip the 'parse' request and send the _name_ of the query to the PostgreSQL server.
<div class='message is-warning'>
<div class='message-body'>
Be careful not to fall into the trap of premature optimization. Most of your queries will likely not benefit much, if at all, from using prepared statements. This is a somewhat "power user" feature of PostgreSQL that is best used when you know how to use it - namely with very complex queries with lots of joins and advanced operations like union and switch statements. I rarely use this feature in my own apps unless writing complex aggregate queries for reports and I know the reports are going to be executed very frequently.
</div>
</div>
### Row mode
By default node-postgres reads rows and collects them into JavaScript objects with the keys matching the column names and the values matching the corresponding row value for each column. If you do not need or do not want this behavior you can pass `rowMode: 'array'` to a query object. This will inform the result parser to bypass collecting rows into a JavaScript object, and instead will return each row as an array of values.
```js
const query = {
  text: 'SELECT $1::text as first_name, $2::text as last_name',
  values: ['Brian', 'Carlson'],
  rowMode: 'array',
}
const res = await client.query(query)
console.log(res.fields.map((field) => field.name)) // ['first_name', 'last_name']
console.log(res.rows[0]) // ['Brian', 'Carlson']
```
### Types
You can pass in a custom set of type parsers to use when parsing the results of a particular query. The `types` property must conform to the [Types](/apis/types) API. Here is an example in which every value is returned as a string:
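```js
const query = {
  text: 'SELECT * from some_table',
  types: {
    // return every value as the raw string sent by the server
    getTypeParser: () => (val) => val,
  },
}
```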
node-postgres supports TLS/SSL connections to your PostgreSQL server as long as the server is configured to support it. When instantiating a pool or a client you can provide an `ssl` property on the config object and it will be passed to the constructor for the [node TLSSocket](https://nodejs.org/api/tls.html#tls_class_tls_tlssocket).
## Self-signed cert
Here's an example of a configuration you can use to connect a client or a pool to a PostgreSQL server.
```js
const config = {
  database: 'database-name',
  host: 'host-or-ip',
  // this object will be passed to the TLSSocket constructor
  ssl: {
    rejectUnauthorized: false,
  },
}
```
If you plan to use a combination of a database connection string from the environment and SSL settings in the config object directly, then you must avoid including any of `sslcert`, `sslkey`, `sslrootcert`, or `sslmode` in the connection string. If any of these options are used, the `ssl` object is replaced and any additional options provided there will be lost.
To execute a transaction with node-postgres you simply execute `BEGIN / COMMIT / ROLLBACK` queries yourself through a client. Because node-postgres strives to be low level and un-opinionated, it doesn't provide any higher level abstractions specifically around transactions.
<Alert>
You <strong>must</strong> use the <em>same</em> client instance for all statements within a transaction. PostgreSQL
isolates a transaction to individual clients. This means if you initialize or use transactions with the{' '}
<span className="code">pool.query</span> method you <strong>will</strong> have problems. Do not use transactions with
the <span className="code">pool.query</span> method.
</Alert>
## Examples
```js
import { Pool } from 'pg'
const pool = new Pool()
const client = await pool.connect()
try {
  await client.query('BEGIN')
  const queryText = 'INSERT INTO users(name) VALUES($1) RETURNING id'
  const res = await client.query(queryText, ['brianc'])
  const insertPhotoText = 'INSERT INTO photos(user_id, photo_url) VALUES ($1, $2)'
  const insertPhotoValues = [res.rows[0].id, 's3.bucket.foo']
  await client.query(insertPhotoText, insertPhotoValues)
  await client.query('COMMIT')
} catch (e) {
  await client.query('ROLLBACK')
  throw e
} finally {
  client.release()
}
```
PostgreSQL has a rich system of supported [data types](https://www.postgresql.org/docs/current/datatype.html). node-postgres does its best to support the most common data types out of the box and supplies an extensible type parser to allow for custom type serialization and parsing.
## strings by default
node-postgres will convert a database type to a JavaScript string if it doesn't have a registered type parser for the database type. Furthermore, you can send any type to the PostgreSQL server as a string and node-postgres will pass it through without modifying it in any way. To circumvent the type parsing completely do something like the following.
```js
const queryText = 'SELECT int_col::text, date_col::text, json_col::text FROM my_table'
const result = await client.query(queryText)
console.log(result.rows[0]) // will contain the unparsed string value of each column
```
## type parsing examples
### uuid + json / jsonb
There is no data type in JavaScript for a uuid/guid, so node-postgres converts a uuid to a string. JavaScript has great support for JSON, and node-postgres converts json/jsonb values directly into JavaScript objects via [`JSON.parse`](https://github.com/brianc/node-pg-types/blob/master/lib/textParsers.js#L193). Likewise, when you send an object to the PostgreSQL server as a query parameter, node-postgres will call [`JSON.stringify`](https://github.com/brianc/node-postgres/blob/e5f0e5d36a91a72dda93c74388ac890fa42b3be0/lib/utils.js#L47) on your outbound value, automatically converting it to json for the server.
```js
const newUser = { email: 'brian.m.carlson@gmail.com' }
await client.query('INSERT INTO users(data) VALUES($1)', [newUser])
const { rows } = await client.query('SELECT * FROM users')
console.log(rows)
/*
output:
[{
id: 'd70195fd-608e-42dc-b0f5-eee975a621e9',
data: { email: 'brian.m.carlson@gmail.com' }
}]
*/
```
### date / timestamp / timestamptz
node-postgres will convert instances of JavaScript date objects into the expected input value for your PostgreSQL server. Likewise, when reading a `date`, `timestamp`, or `timestamptz` column value back into JavaScript, node-postgres will parse the value into an instance of a JavaScript `Date` object.
node-postgres converts `DATE` and `TIMESTAMP` columns into the **local** time of the node process set at `process.env.TZ`.
_note: I generally use `TIMESTAMPTZ` when storing dates; otherwise, inserting a time from a process in one timezone and reading it out in a process in another timezone can cause unexpected differences in the time._
<Alert>
<div class="message-body">
Although PostgreSQL supports microseconds in dates, JavaScript only supports dates to the millisecond precision.
Keep this in mind when you send dates to and from PostgreSQL from node: your microseconds will be truncated when
converting to a JavaScript date object even if they exist in the database. If you need to preserve them, I recommend
using a custom type parser.
</div>
</Alert>
My preferred way to use node-postgres (and all async code in node.js) is with `async/await`. I find it makes reasoning about control-flow easier and allows me to write more concise and maintainable code.
This is how I typically structure express web-applications with node-postgres to use `async/await`:
```
- app.js
- index.js
- routes/
  - index.js
  - photos.js
  - user.js
- db/
  - index.js <-- this is where I put data access code
```
That's the same structure I used in the [project structure](/guides/project-structure) example.
My `db/index.js` file usually starts out like this:
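```js
import { Pool } from 'pg'
const pool = new Pool()

export const query = (text, params) => {
  return pool.query(text, params)
}
```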
Then I will install [express-promise-router](https://www.npmjs.com/package/express-promise-router) and use it to define my routes. Here is my `routes/user.js` file:
```js
import Router from 'express-promise-router'
import * as db from '../db/index.js'
// create a new express-promise-router
// this has the same API as the normal express router except
// it allows you to use async functions as route handlers
const router = new Router()
router.get('/:id', async (req, res) => {
  const { id } = req.params
  const { rows } = await db.query('SELECT * FROM users WHERE id = $1', [id])
  res.send(rows[0])
})
// export our router to be mounted by the parent application
export default router
```
Then in my `routes/index.js` file I'll have something like this which mounts each individual router into the main application:
```js
// ./routes/index.js
import users from './user.js'
import photos from './photos.js'
const mountRoutes = (app) => {
  app.use('/users', users)
  app.use('/photos', photos)
  // etc..
}
export default mountRoutes
```
And finally in my `app.js` file where I bootstrap express I will have my `routes/index.js` file mount all my routes. The routes know they're using async functions but because of express-promise-router the main express app doesn't know and doesn't care!
```js
// ./app.js
import express from 'express'
import mountRoutes from './routes/index.js'
const app = express()
mountRoutes(app)
// ... more express setup stuff can follow
```
Now you've got `async/await`, node-postgres, and express all working together!
If you're using a [pool](/apis/pool) in an application with multiple instances of your service running (common in most cloud/container environments currently), you'll need to think a bit about the `max` parameter of your pool across all services and all _instances_ of all services which are connecting to your Postgres server.
This can get pretty complex depending on your cloud environment. Further nuance is introduced with things like pg-bouncer, RDS connection proxies, etc., which will do some forms of connection pooling and connection multiplexing. So, it's definitely worth thinking about. Let's run through a few setups. While certainly not exhaustive, these examples hopefully prompt you into thinking about what's right for your setup.
## Simple apps, dev mode, fixed instance counts, etc.
If your app isn't running in a k8s style env with containers scaling automatically or lambdas or cloud functions etc., you can do some "napkin math" for the `max` pool config you can use. Let's assume your Postgres instance is configured to have a maximum of 200 connections at any one time. You know your service is going to run on 4 instances. You can set the `max` pool size to 50, but if all your services are saturated waiting on database connections, you won't be able to connect to the database from any mgmt tools or scale up your services without changing config/code to adjust the max size.
In this situation, I'd probably set the `max` to 20 or 25. This lets you have plenty of headroom for scaling more instances and realistically, if your app is starved for db connections, you probably want to take a look at your queries and make them execute faster, or cache, or something else to reduce the load on the database. I worked on a more reporting-heavy application with limited users, but each running 5-6 queries at a time which all took 100-200 milliseconds to run. In that situation, I upped the `max` to 50. Typically, though, I don't bother setting it to anything other than the default of `10` as that's usually _fine_.
## Auto-scaling, cloud-functions, multi-tenancy, etc.
If the number of instances of your services which connect to your database is more dynamic and based on things like load, auto-scaling containers, or running in cloud-functions, you need to be a bit more thoughtful about what your max might be. Often in these environments, there will be another database pooling proxy in front of the database like pg-bouncer or the RDS-proxy, etc. I'm not sure how all these function exactly, and they all have some trade-offs, but let's assume you're not using a proxy. Then I'd be pretty cautious about how large you set any individual pool. If you're running an application under pretty serious load where you need dynamic scaling or lots of lambdas spinning up and sending queries, your queries are likely fast and you should be fine setting the `max` to a low value like 10 -- or just leave it alone, since `10` is the default.
## pg-bouncer, RDS-proxy, etc.
I'm not sure of all the pooling services for Postgres. I haven't used any myself. Throughout the years of working on `pg`, I've addressed issues caused by various proxies behaving differently than an actual Postgres backend. There are also gotchas with things like transactions. On the other hand, plenty of people run these with much success. In this situation, I would just recommend using some small but reasonable `max` value like the default value of `10` as it can still be helpful to keep a few TCP sockets from your services to the Postgres proxy open.
## Conclusion, tl;dr
It's a bit of a complicated topic and doesn't have much impact on things until you need to start scaling. At that point, your number of connections _still_ probably won't be your scaling bottleneck. It's worth thinking about a bit, but mostly I'd just leave the pool size to the default of `10` until you run into troubles: hopefully you never do!
Whenever I am writing a project & using node-postgres I like to create a file within it and make all interactions with the database go through this file. This serves a few purposes:
- Allows my project to adjust to any changes to the node-postgres API without having to trace down all the places I directly use node-postgres in my application.
- Allows me to have a single place to put logging and diagnostics around my database.
- Allows me to make custom extensions to my database access code & share it throughout the project.
- Allows a single place to bootstrap & configure the database.
## example
The location doesn't really matter - I've found it usually ends up being somewhat app specific and in line with whatever folder structure conventions you're using. For this example I'll use an express app structured like so:
```
- app.js
- index.js
- routes/
  - index.js
  - photos.js
  - user.js
- db/
  - index.js <-- this is where I put data access code
```
Typically I'll start out my `db/index.js` file like so:
```js
import { Pool } from 'pg'
const pool = new Pool()
export const query = (text, params) => {
  return pool.query(text, params)
}
```
That's it. But now everywhere else in my application instead of requiring `pg` directly, I'll require this file. Here's an example of a route within `routes/user.js`:
```js
// notice here I'm requiring my database adapter file
// and not requiring node-postgres directly
import * as db from '../db/index.js'
app.get('/:id', async (req, res, next) => {
const result = await db.query('SELECT * FROM users WHERE id = $1', [req.params.id])
res.send(result.rows[0])
})
// ... many other routes in this file
```
Imagine we have lots of routes scattered throughout many files under our `routes/` directory. We now want to go back and log every single query that's executed, how long it took, and the number of rows it returned. If we had required node-postgres directly in every route file we'd have to go edit every single route - that would take forever & be really error prone! But thankfully we put our data access into `db/index.js`. Let's go add some logging:
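```js
export const query = async (text, params) => {
  const start = Date.now()
  const res = await pool.query(text, params)
  const duration = Date.now() - start
  console.log('executed query', { text, duration, rows: res.rowCount })
  return res
}
```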
That was pretty quick! And now all of our queries everywhere in our application are being logged.
_note: I didn't log the query parameters. Depending on your application you might be storing encrypted passwords or other sensitive information in your database. If you log your query parameters you might accidentally log sensitive information. Every app is different though so do what suits you best!_
Now what if we need to check out a client from the pool to run several queries in a row in a transaction? We can add another method to our `db/index.js` file when we need to do this:
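```js
export const getClient = () => {
  return pool.connect()
}
```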
Okay. Great - the simplest thing that could possibly work. It seems like one of our routes that checks out a client to run a transaction is forgetting to call `release` in some situations! Oh no! We are leaking a client & have hundreds of these routes to go audit. Good thing we have all our client access going through this single file. Let's add some deeper diagnostic information here to help us track down where the client leak is happening.
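Here's a sketch of what that diagnostic wrapper might look like (the 5 second timeout and messages are illustrative):
```js
export const getClient = async () => {
  const client = await pool.connect()
  const query = client.query
  const release = client.release
  // set a timeout of 5 seconds, after which we will log this client's last query
  const timeout = setTimeout(() => {
    console.error('A client has been checked out for more than 5 seconds!')
    console.error(`The last executed query on this client was: ${client.lastQuery}`)
  }, 5000)
  // monkey patch the query method to keep track of the last query executed
  client.query = (...args) => {
    client.lastQuery = args
    return query.apply(client, args)
  }
  client.release = () => {
    // clear our timeout
    clearTimeout(timeout)
    // set the methods back to their old un-monkey-patched versions
    client.query = query
    client.release = release
    return release.apply(client)
  }
  return client
}
```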
node-postgres at 8.0 introduces a breaking change to ssl-verified connections. If you connect with ssl and use
```js
const client = new Client({ ssl: true })
```
and the server's SSL certificate is self-signed, connections will fail as of node-postgres 8.0. To keep the existing behavior, modify the invocation to
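```js
const client = new Client({ ssl: { rejectUnauthorized: false } })
```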
The rest of the changes are relatively minor and unlikely to cause issues; see [the announcement](/announcements#2020-02-25) for full details.
# Upgrading to 7.0
node-postgres at 7.0 introduces somewhat significant breaking changes to the public API.
## node version support
Starting with `pg@7.0` the earliest version of node supported will be `node@4.x LTS`. Support for `node@0.12.x` and `node@0.10.x` is dropped, and the module won't work, as it relies on new es6 features not available in older versions of node.
## pg singleton
In the past there was a singleton pool manager attached to the root `pg` object in the package. This singleton could be used to provision connection pools automatically by calling `pg.connect`. This API caused a lot of confusion for users. It also introduced an opaque module-managed singleton which was difficult to reason about and debug, error-prone, and inflexible. Starting in pg@6.0 the methods' documentation was removed, and starting in pg@6.3 the methods were deprecated with a warning message.
If your application still relies on these they will be _gone_ in `pg@7.0`. In order to migrate you can do the following:
```js
// old way, deprecated in 6.3.0:
// connection using global singleton
pg.connect(function (err, client, done) {
  client.query(/* etc, etc */)
  done()
})
// singleton pool shutdown
pg.end()
// ------------------
// new way, available since 6.0.0:
// create a pool
const pool = new pg.Pool()
// connection using created pool
pool.connect(function (err, client, done) {
  client.query(/* etc, etc */)
  done()
})
// pool shutdown
pool.end()
```
node-postgres ships with a built-in pool object provided by [pg-pool](https://github.com/brianc/node-pg-pool) which is already used internally by the `pg.connect` and `pg.end` methods. Migrating to a user-managed pool (or set of pools) allows you to more directly control their setup and life-cycle.
## client.query(...).on
Before `pg@7.0` the `client.query` method would _always_ return an instance of a query. The query instance was an event emitter, accepted a callback, and was also a promise. A few problems...
- too many flow control options on a single object was confusing
- event emitter `.on('error')` does not mix well with promise `.catch`
- the `row` event was a common source of errors: it looks like a stream but has no support for back-pressure, misleading users into trying to pipe results or handling them in the event emitter for a desired performance gain.
- error handling with a `.done` and `.error` emitter pair for every query is cumbersome and returning the emitter from `client.query` indicated this sort of pattern may be encouraged: it is not.
Starting with `pg@7.0` the return value of `client.query` will depend on what you pass to the method. I think this aligns more with how most node libraries handle the callback/promise combo, and I hope it will make the "just works" :tm: feeling better while reducing surface area and surprises around event emitter / callback combos.
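### client.query with a callback
```js
const query = client.query('SELECT NOW()', (err, res) => {
  /* etc, etc */
})
assert(query === undefined) // true
```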
If you pass a callback, `client.query` will return `undefined`. This limits flow control to the callback, which is in line with almost all of node's core APIs.
### client.query without a callback
```js
const query = client.query('SELECT NOW()')
assert(query instanceof Promise) // true
assert(query.on === undefined) // true
query.then((res) => { /* etc, etc */ })
```
If you do **not** pass a callback `client.query` will return an instance of a `Promise`. This will **not** be a query instance and will not be an event emitter. This is in line with how most promise-based APIs work in node.
### client.query(Submittable)
`client.query` has always accepted any object that has a `.submit` method on it. In this scenario the client calls `.submit` on the object, delegating execution responsibility to it. In this situation the client also **returns the instance it was passed**. This is how [pg-cursor](https://github.com/brianc/node-pg-cursor) and [pg-query-stream](https://github.com/brianc/node-pg-query-stream) work. So, if you need the event emitter functionality on your queries for some reason, it is still possible because `Query` is an instance of `Submittable`:
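For example, a sketch using `Query` directly (assuming an already-connected `client`):
```js
import pg from 'pg'
const { Query } = pg

const query = client.query(new Query('SELECT NOW()'))
query.on('row', (row) => {
  console.log('row!', row)
})
query.on('end', () => {
  console.log('query done')
})
query.on('error', (err) => {
  console.error(err.stack)
})
```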
`Query` is considered a public, documented part of the API of node-postgres and this form will be supported indefinitely.
_note: I have been building apps with node-postgres for almost 7 years. In that time I have never used the event emitter API as the primary way to execute queries. I used to use callbacks and now I use async/await. If you need to stream results I highly recommend you use [pg-cursor](https://github.com/brianc/node-pg-cursor) or [pg-query-stream](https://github.com/brianc/node-pg-query-stream) and **not** the query object as an event emitter._
node-postgres is a collection of node.js modules for interfacing with your PostgreSQL database. It has support for callbacks, promises, async/await, connection pooling, prepared statements, cursors, streaming results, C/C++ bindings, rich type parsing, and more! Just like PostgreSQL itself there are a lot of features: this documentation aims to get you up and running quickly and in the right direction. It also tries to provide guides for more advanced & edge-case topics allowing you to tap into the full power of PostgreSQL from node.js.
## Install
```bash
$ npm install pg
```
## Supporters
node-postgres continued development and support is made possible by the many [supporters](https://github.com/brianc/node-postgres/blob/master/SPONSORS.md).
Special thanks to [Medplum](https://www.medplum.com/) for sponsoring node-postgres for a whole year!
If you or your company would like to sponsor node-postgres stop by [GitHub Sponsors](https://github.com/sponsors/brianc) and sign up or feel free to [email me](mailto:brian@pecanware.com) if you want to add your logo to the documentation or discuss higher tiers of sponsorship!
# Version compatibility
node-postgres strives to be compatible with all recent LTS versions of node & the most recent "stable" version. At the time of this writing node-postgres is compatible with node 18.x, 20.x, 22.x, and 24.x.
## Getting started
The simplest possible way to connect, query, and disconnect is with async/await:
```js
import { Client } from 'pg'
const client = new Client()
await client.connect()
const res = await client.query('SELECT $1::text as message', ['Hello world!'])
console.log(res.rows[0].message) // Hello world!
await client.end()
```
### Error Handling
For the sake of simplicity, these docs will assume that the methods are successful. In real life use, make sure to properly handle errors thrown in the methods. A `try/catch` block is a great way to do so:
```ts
import { Client } from 'pg'
const client = new Client()
await client.connect()
try {
  const res = await client.query('SELECT $1::text as message', ['Hello world!'])
  console.log(res.rows[0].message) // Hello world!
} catch (err) {
  console.error(err)
} finally {
  await client.end()
}
```
### Pooling
In most applications you'll want to use a [connection pool](/features/pooling) to manage your connections. This is a more advanced topic, but here's a simple example of how to use it:
```js
import { Pool } from 'pg'
const pool = new Pool()
const res = await pool.query('SELECT $1::text as message', ['Hello world!'])
console.log(res.rows[0].message) // Hello world!
```
Our real-world apps are almost always more complicated than that, and I urge you to read on!
`pg-cloudflare` makes it easier to take an existing package that relies on `tls` and `net`, and make it work in environments where only `connect()` is supported, such as Cloudflare Workers.
`pg-cloudflare` wraps `connect()`, the [TCP Socket API](https://github.com/wintercg/proposal-sockets-api) proposed within WinterCG, and implemented in [Cloudflare Workers](https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/), and exposes an interface with methods similar to what the `net` and `tls` modules in Node.js expose. (ex: `net.connect(path[, options][, callback])`). This minimizes the number of changes needed in order to make an existing package work across JavaScript runtimes.
## Installation
```
npm i --save-dev pg-cloudflare
```
The package uses conditional exports to support bundlers that don't know about
`cloudflare:sockets`, so the consumer code by default imports an empty file. To
enable the package, resolve to the `cloudflare` condition in your bundler's
config. For example:
- `webpack.config.js`
```js
export default {
  ...,
  resolve: { conditionNames: [..., "workerd"] },
  plugins: [
    // ignore cloudflare:sockets imports
    new webpack.IgnorePlugin({
      resourceRegExp: /^cloudflare:sockets$/,
    }),
  ],
}
```
- `vite.config.js`
> [!NOTE]
> If you are using the [Cloudflare Vite plugin](https://www.npmjs.com/package/@cloudflare/vite-plugin) then the following configuration is not necessary.
The concrete examples can be found in `packages/pg-bundler-test`.
## How to use conditionally, in non-Node.js environments
As implemented in `pg` [here](https://github.com/brianc/node-postgres/commit/07553428e9c0eacf761a5d4541a3300ff7859578#diff-34588ad868ebcb232660aba7ee6a99d1e02f4bc93f73497d2688c3f074e60533R5-R13), a typical use case might look as follows, where in a Node.js environment the `net` module is used, while in a non-Node.js environment, where `net` is unavailable, `pg-cloudflare` is used instead, providing an equivalent interface:
```js
module.exports.getStream = function getStream(ssl = false) {
  const net = require('net')
  if (typeof net.Socket === 'function') {
    // Node.js environment
    return new net.Socket()
  } else {
    // non-Node.js environment; use the WinterCG-compatible socket
    const { CloudflareSocket } = require('pg-cloudflare')
    return new CloudflareSocket(ssl)
  }
}
```
## Node.js implementation of the Socket API proposal
If you're looking for a way to rely on `connect()` as the interface you use to interact with raw sockets, but need this interface to be available in a Node.js environment, [`@arrowood.dev/socket`](https://github.com/Ethan-Arrowood/socket) provides a Node.js implementation of the Socket API.
### license
The MIT License (MIT)
Copyright (c) 2023 Brian M. Carlson
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.