mirror of
https://github.com/arthurfiorette/axios-cache-interceptor.git
synced 2026-01-25 14:08:29 +00:00
Fix memory leak in axios.waiting and improve storage handling (#1140)
* Initial plan

* Fix `axios.waiting` memory leak when `maxEntries` is set
  - Added cleanup of waiting map entries in the response interceptor when the cache state is not 'loading'
  - This prevents memory leaks when entries are evicted from storage due to the `maxEntries` limit
  - Added comprehensive tests to verify the fix
  - Updated an existing test to reflect the correct behavior

* Remove pnpm-workspace.yaml build artifact and add it to gitignore

* Refactor: extract a `resolveWaiting` helper to reduce code duplication
  - Created a `resolveWaiting` helper function to centralize the logic for resolving and cleaning up waiting entries
  - Simplified a test error message to avoid redundancy
  - This improves maintainability by avoiding duplication

* Implement timeout-based cleanup for waiting entries
  - Added a timeout mechanism to automatically clean up waiting entries after the axios timeout period
  - Used the axios config timeout, defaulting to 30 seconds when not configured
  - This prevents memory leaks when entries are evicted from storage before the response completes
  - Updated tests to use short timeouts and wait for cleanup to complete
  - Note: this implements the approach requested by @arthurfiorette to use the axios timeout instead of immediately resolving/rejecting waiting entries when the cache state is not loading

* Address PR feedback: add configurable `waitingTimeout` and improve the implementation
  - Added a `waitingTimeout` property to `CacheProperties` for a configurable timeout
  - Extracted the timeout logic to a `waiting-timeout.ts` utility function
  - Stored the request ID outside the timeout callback to allow earlier GC
  - Added `unref()` to the timeout in Node.js to prevent keeping the process alive
  - Fixed the timeout logic to handle the axios default timeout of 0: a timeout is only created when explicitly configured or when `config.timeout > 0`
  - Changes address comments 2598940999, 2598944866, and 2598947753

* Simplify configuration: remove the `waitingTimeout` property, use `config.timeout`
  - Removed `waitingTimeout` from `CacheProperties` (too much configuration)
  - Moved the timeout logic into the `createWaitingTimeout` utility function
  - Used `config.timeout` or `axios.defaults.timeout` for the timeout value
  - Only add the finally handler if a timeout was actually created
  - Updated tests to use `axios.defaults.timeout` instead
  - Addresses comments 2599033372, 2599040440, and 2599040616

* Code cleanup: simplify a comment and remove an unnecessary variable
  - Simplified the comment on line 165 to explain why (prevent memory leaks) instead of what
  - Removed the unnecessary `requestId` variable, passing `config.id` directly to `createWaitingTimeout`

* Remove a redundant condition check in waiting-timeout
  - The condition `timeout <= 0` is redundant since the logic above (lines 30-35) already ensures that timeout is either a positive number or undefined
  - Addresses comment 2599097978

* changes

* Fix maxEntries eviction logic, web storage quota handling, and update docs

  Code improvements:
  - Fixed the memory storage `maxEntries` logic: it now properly evicts when at capacity
  - Fixed the web storage `isDomQuotaExceededError` to work with jsdom's `DOMException`
  - Added a duck-typing check for `DOMException` to handle cross-context scenarios

  Documentation updates:
  - Added `maxStaleAge` parameter documentation for both memory and web storage
  - Documented that memory storage now uses a `Map` internally
  - Updated default values in code examples (cleanupInterval: 5 min, maxEntries: 1024)
  - Added an options section for web storage explaining all parameters

  Test fixes:
  - All 150 tests now passing
  - Fixed web storage quota tests that were failing due to the `DOMException` instanceof check

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: arthurfiorette <47537704+arthurfiorette@users.noreply.github.com>
Co-authored-by: Arthur Fiorette <me@arthur.place>
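The timeout-based cleanup described in the message above can be sketched roughly as follows. This is an illustrative reconstruction based only on the PR description: the `Waiting` shape and the `createWaitingTimeout` name mirror the commit notes, not the published API.

```typescript
// Hypothetical sketch: names and shapes are assumptions from the PR notes.
type Waiting = Map<string, { resolve: () => void }>;

function createWaitingTimeout(
  waiting: Waiting,
  id: string,
  timeout: number | undefined
): (() => void) | undefined {
  // Axios uses `timeout: 0` to mean "no timeout", so only schedule cleanup
  // when a positive timeout is actually configured.
  if (!timeout || timeout <= 0) return undefined;

  const timer = setTimeout(() => {
    // If the entry was evicted from storage, nothing else will ever delete
    // this waiting entry, so remove it to avoid a leak.
    waiting.delete(id);
  }, timeout);

  // In Node.js, unref() keeps this timer from holding the process open.
  if (typeof (timer as any).unref === 'function') (timer as any).unref();

  // Return a canceller so the normal response path can drop the timer early.
  return () => clearTimeout(timer);
}
```

The canceller mirrors the "only add the finally handler if a timeout was actually created" point: when `createWaitingTimeout` returns `undefined`, no cleanup hook is registered.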
This commit is contained in:
parent
6bda1baae0
commit
5a8cc53cdd
````diff
@@ -34,7 +34,11 @@ clone both ways, on `set()` and on `get()`. _Just like
 others._
 
 For long running processes, you can avoid memory leaks by playing with the
-`cleanupInterval` option. And can reduce memory usage with `maxEntries`.
+`cleanupInterval` option. And can reduce memory usage with `maxEntries`. The `maxStaleAge`
+parameter helps prevent stale entries from accumulating indefinitely.
+
+The storage uses a JavaScript `Map` internally for efficient key-value lookups and
+iteration.
 
 ```ts
 import Axios from 'axios';
@@ -44,8 +48,9 @@ setupCache(axios, {
   // You don't need to do that, as it is the default option.
   storage: buildMemoryStorage(
     /* cloneData default=*/ false,
-    /* cleanupInterval default=*/ false,
-    /* maxEntries default=*/ false
+    /* cleanupInterval default=*/ 5 * 60 * 1000,
+    /* maxEntries default=*/ 1024,
+    /* maxStaleAge default=*/ 60 * 60 * 1000
   )
 });
 ```
@@ -57,11 +62,16 @@ Options:
   before saving value in storage using `set()`. Disabled is default
 
 - **cleanupInterval**: The interval in milliseconds to run a setInterval job of cleaning
-  old entries. If false, the job will not be created. Disabled is default
+  old entries. If false, the job will not be created. 5 minutes (300_000) is default
 
 - **maxEntries**: The maximum number of entries to keep in the storage. It's hard to
   determine the size of the entries, so a smart FIFO order is used to determine eviction.
-  If false, no check will be done and memory usage may grow unbounded. Disabled is default
+  If false, no check will be done and memory usage may grow unbounded. 1024 is default
+
+- **maxStaleAge**: The maximum age in milliseconds a stale entry can stay in the storage
+  before being removed. This prevents stale-able entries (those with ETag or Last-Modified
+  headers) from staying indefinitely and causing memory leaks. 1 hour (3_600_000) is
+  default
 
 ## Web Storage API
 
@@ -97,15 +107,27 @@ setupCache(axios, { // [!code focus:5]
 import Axios from 'axios';
 import { setupCache, buildWebStorage } from 'axios-cache-interceptor';
 
-const myStorage = new Storage(); // [!code focus:5]
+const myStorage = new Storage(); // [!code focus:8]
 
 setupCache(axios, {
-  storage: buildWebStorage(myStorage)
+  storage: buildWebStorage(
+    myStorage,
+    'axios-cache:', // prefix
+    60 * 60 * 1000 // maxStaleAge (1 hour default)
+  )
 });
 ```
 
 :::
 
+Options:
+
+- **storage**: The Storage instance to use (e.g., `localStorage`, `sessionStorage`)
+- **prefix**: The prefix to add to all keys to avoid collisions. Default is `'axios-cache-'`
+- **maxStaleAge**: The maximum age in milliseconds a stale entry can stay in the storage
+  before being removed. Prevents memory leaks from stale-able entries. Default is 1 hour
+  (3_600_000)
+
 ### Browser quota
 
 From `v0.9.0` onwards, web storage is able to detect and evict older entries if the
@@ -192,9 +214,8 @@ const redisStorage = buildStorage({
       (value.state === 'cached' && !canStale(value))
         ?
           value.createdAt + value.ttl!
-        : // otherwise, we can't determine when it should expire, so we keep
-          // it indefinitely.
-          undefined
+        : // otherwise, we can't determine when it should expire, so we keep it up to an hour.
+          Date.now() + 60 * 60 * 1000
     });
   },
 
````
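The "smart FIFO order" the memory storage options refer to can be illustrated with a small standalone comparator. This is a hedged sketch: the state names match the library's storage states, but the `sortForEviction` helper below is invented for illustration.

```typescript
// Illustrative only: `sortForEviction` is not part of the library's API.
type State = 'empty' | 'must-revalidate' | 'stale' | 'cached' | 'loading';
interface Entry { state: State; createdAt: number }

// Lower number = evicted sooner; loading entries are kept the longest.
const order: Record<State, number> = {
  empty: 0, 'must-revalidate': 1, stale: 2, cached: 3, loading: 4
};

// Evict lowest-priority states first; within the same state, oldest first (FIFO).
function sortForEviction(entries: [string, Entry][]): [string, Entry][] {
  return [...entries].sort(([, a], [, b]) => {
    const d = order[a.state] - order[b.state];
    return d !== 0 ? d : a.createdAt - b.createdAt;
  });
}
```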
```diff
@@ -58,7 +58,8 @@
   "dependencies": {
     "cache-parser": "1.2.6",
     "fast-defer": "1.1.9",
-    "object-code": "1.3.4"
+    "object-code": "1.3.4",
+    "try": "^1.0.2"
   },
   "devDependencies": {
     "@arthurfiorette/biomejs-config": "2.0.1",
```
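The new `try` dependency above is what the web storage rewrite uses for quota-error handling without nested try/catch blocks. Its `Result.try(fn, ...args)` helper can be approximated as follows; this is a sketch of the shape assumed from its usage in this diff, not the package's actual source.

```typescript
// Assumed shape: { ok: true, value } on success, { ok: false, error } on throw.
type Result<T> = { ok: true; value: T } | { ok: false; error: unknown };

function tryCall<A extends unknown[], T>(fn: (...args: A) => T, ...args: A): Result<T> {
  try {
    return { ok: true, value: fn(...args) };
  } catch (error) {
    return { ok: false, error };
  }
}
```

Returning a discriminated union instead of throwing lets the caller check `result.ok` once per save attempt and rethrow only non-quota errors.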
pnpm-lock.yaml (generated, 8 lines changed)
```diff
@@ -17,6 +17,9 @@ importers:
       object-code:
         specifier: 1.3.4
         version: 1.3.4
+      try:
+        specifier: ^1.0.2
+        version: 1.0.2
     devDependencies:
       '@arthurfiorette/biomejs-config':
         specifier: 2.0.1
@@ -3589,6 +3592,9 @@ packages:
   trough@2.2.0:
     resolution: {integrity: sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw==}
 
+  try@1.0.2:
+    resolution: {integrity: sha512-K30PGZQcpGNlwQaKPfKL4CkKY1LzRrxapVgtayJhpMNi6/w9Cr3Hze2m+DMUQnn12MOhm2piAoAJltfcvL1EOw==}
+
   tslib@2.8.1:
     resolution: {integrity: sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==}
 
@@ -7816,6 +7822,8 @@ snapshots:
 
   trough@2.2.0: {}
 
+  try@1.0.2: {}
+
   tslib@2.8.1: {}
 
   typed-array-buffer@1.0.3:
```
pnpm-workspace.yaml (new file, 4 lines)
```yaml
ignoredBuiltDependencies:
  - '@swc/core'
  - esbuild
  - oxc-resolver
```
```diff
@@ -113,6 +113,10 @@ export function defaultResponseInterceptor(axios: AxiosCacheInstance): ResponseI
     });
   }
 
+  // On limited storage scenarios, it's possible the request was evicted while waiting
+  // for the response; in this case, state will be 'empty' again instead of loading.
+  // https://github.com/arthurfiorette/axios-cache-interceptor/issues/833
+  axios.waiting.delete(response.id);
   return response;
 }
 
```
```diff
@@ -52,6 +52,56 @@ export function isExpired(value: CachedStorageValue | StaleStorageValue): boolean
   return value.ttl !== undefined && value.createdAt + value.ttl <= Date.now();
 }
 
+/**
+ * Defines which storage states are evicted first when cleaning up the storage.
+ */
+const StateEvictionOrder: Record<StorageValue['state'], number> = {
+  empty: 0,
+  'must-revalidate': 1,
+  stale: 2,
+  cached: 3,
+  // loading states usually don't have any data and are the most important ones
+  // to keep around
+  loading: 4
+};
+
+/**
+ * A comparator function that sorts storage entries by their eviction priority
+ * and, within the same group, by older first.
+ */
+export function storageEntriesSorter(
+  [, a]: [string, StorageValue],
+  [, b]: [string, StorageValue]
+): number {
+  const stateDiff = StateEvictionOrder[a.state] - StateEvictionOrder[b.state];
+  if (stateDiff !== 0) return stateDiff;
+  return (a.createdAt || 0) - (b.createdAt || 0);
+}
+
+/**
+ * Returns true if the storage entry can be removed according to its state and the
+ * provided maxStaleAge.
+ */
+export function canRemoveStorageEntry(value: StorageValue, maxStaleAge: number): boolean {
+  switch (value.state) {
+    case 'loading':
+      return false;
+
+    case 'empty':
+    case 'must-revalidate':
+      return true;
+
+    case 'cached':
+      return isExpired(value) && !canStale(value);
+
+    case 'stale':
+      if (maxStaleAge !== undefined && value.ttl !== undefined) {
+        return Date.now() > value.createdAt + value.ttl + maxStaleAge;
+      }
+      return false;
+  }
+}
+
 export interface BuildStorage extends Omit<AxiosStorage, 'get'> {
   /**
    * Returns the value for the given key. This method does not have to make checks for
@@ -74,7 +124,7 @@ export interface BuildStorage extends Omit<AxiosStorage, 'get'> {
  * The exported `buildStorage` function abstracts the storage interface and requires a
  * super simple object to build the storage.
  *
- * **Note**: You can only create an custom storage with this function.
+ * **Note**: You can only create custom storages with this function.
  *
  * @example
  *
```
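The `maxStaleAge` arithmetic in the stale case above is worth checking in isolation: a stale entry becomes removable only once `createdAt + ttl + maxStaleAge` has elapsed. A minimal re-derivation of just that rule (the `staleEntryRemovable` helper is hypothetical, not the library's export):

```typescript
// Hypothetical helper re-implementing only the 'stale' branch shown in the diff.
interface StaleEntry { state: 'stale'; createdAt: number; ttl?: number }

function staleEntryRemovable(value: StaleEntry, maxStaleAge: number, now: number): boolean {
  // Without a ttl there is no reference point, so the entry is kept.
  if (value.ttl === undefined) return false;
  // Removable once the entry has been stale for longer than maxStaleAge.
  return now > value.createdAt + value.ttl + maxStaleAge;
}
```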
```diff
@@ -1,4 +1,4 @@
-import { buildStorage, canStale, isExpired } from './build.js';
+import { buildStorage, canRemoveStorageEntry, storageEntriesSorter } from './build.js';
 import type { AxiosStorage, StorageValue } from './types.js';
 
 /* c8 ignore start */
@@ -41,100 +41,89 @@ const clone: <T>(value: T) => T =
  * default
  * @param {number | false} cleanupInterval The interval in milliseconds to run a
  * setInterval job of cleaning old entries. If false, the job will not be created.
- * Disabled is default
+ * 5 minutes (300_000) is default
  * @param {number | false} maxEntries The maximum number of entries to keep in the
  * storage. It's hard to determine the size of the entries, so a smart FIFO order is used
  * to determine eviction. If false, no check will be done and memory usage may grow
- * unbounded. Disabled is default
+ * unbounded. 1024 is default
+ * @param {number} maxStaleAge The maximum age in milliseconds a stale entry can stay
+ * in the storage before being removed. Otherwise, stale-able entries would stay
+ * indefinitely, eventually causing a memory leak. 1 hour (3_600_000) is default
  */
 export function buildMemoryStorage(
   cloneData: boolean | 'double' = false,
-  cleanupInterval: number | false = false,
-  maxEntries: number | false = false
+  cleanupInterval: number | false = 5 * 60 * 1000,
+  maxEntries: number | false = 1024,
+  maxStaleAge: number = 60 * 60 * 1000
 ) {
+  function sortedEntries() {
+    return Array.from(storage.data.entries()).sort(storageEntriesSorter);
+  }
+
   const storage = buildStorage({
     set: (key, value) => {
-      if (maxEntries) {
-        let keys = Object.keys(storage.data);
+      // More entries than allowed, evict oldest ones
+      if (maxEntries && storage.data.size >= maxEntries) {
+        storage.cleanup();
 
-        // Tries to cleanup first
-        if (keys.length >= maxEntries) {
-          storage.cleanup();
+        // After cleanup, if still at or over capacity, manually evict entries
+        if (storage.data.size >= maxEntries) {
+          for (const [key] of sortedEntries()) {
+            storage.data.delete(key);
 
-          // Recalculates the keys
-          keys = Object.keys(storage.data);
-
-          // Keeps deleting until there's space
-          while (keys.length >= maxEntries) {
-            // There's always at least one key here, otherwise it would not be
-            // in the loop.
-            delete storage.data[keys.shift()!];
+            if (storage.data.size < maxEntries) {
+              break;
+            }
           }
         }
       }
 
       // Clone the value before storing to prevent future mutations
       // from affecting cached data.
-      storage.data[key] = cloneData === 'double' ? clone(value) : value;
+      storage.data.set(key, cloneData === 'double' ? clone(value) : value);
     },
 
     remove: (key) => {
-      delete storage.data[key];
+      storage.data.delete(key);
     },
 
     find: (key) => {
-      const value = storage.data[key];
-
+      const value = storage.data.get(key);
       return cloneData && value !== undefined ? clone(value) : value;
     },
 
     clear: () => {
-      storage.data = Object.create(null);
+      storage.data.clear();
     }
   }) as MemoryStorage;
 
-  storage.data = Object.create(null) as Record<string, StorageValue>;
+  storage.data = new Map();
 
   storage.cleanup = () => {
-    const keys = Object.keys(storage.data);
-
-    let i = -1;
-    let value: StorageValue;
-    let key: string;
-
-    // Looping forward, as older entries are more likely to be expired
-    // than newer ones.
-    while (++i < keys.length) {
-      key = keys[i]!;
-      value = storage.data[key]!;
-
-      if (value.state === 'empty') {
-        storage.remove(key);
-        continue;
-      }
-
-      // If the value is expired and can't be stale, remove it
-      if (value.state === 'cached' && isExpired(value) && !canStale(value)) {
-        // this storage returns void.
-        storage.remove(key);
+    for (const [key, value] of sortedEntries()) {
+      if (canRemoveStorageEntry(value, maxStaleAge)) {
+        storage.data.delete(key);
       }
     }
   };
 
   if (cleanupInterval) {
     storage.cleaner = setInterval(storage.cleanup, cleanupInterval);
 
     // Attempt to unref the interval to not block Node.js from exiting
     if (typeof storage.cleaner === 'object' && 'unref' in storage.cleaner) {
       storage.cleaner.unref();
     }
   }
 
   return storage;
 }
 
 export interface MemoryStorage extends AxiosStorage {
-  data: Record<string, StorageValue>;
+  data: Map<string, StorageValue>;
   /** The job responsible to cleaning old entries */
   cleaner: ReturnType<typeof setInterval>;
   /** Tries to remove any invalid entry from the memory */
```
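The capacity check in the new `set()` above boils down to: when the `Map` is at or over `maxEntries`, delete entries in eviction order until there is room for one more. Extracted as a generic sketch (the `evictUntilSpace` name is invented for illustration):

```typescript
// Generic sketch of the eviction loop; `sorted` yields entries in
// eviction-priority order (lowest priority first).
function evictUntilSpace<K, V>(
  data: Map<K, V>,
  maxEntries: number,
  sorted: () => [K, V][]
): void {
  if (data.size < maxEntries) return;
  for (const [key] of sorted()) {
    data.delete(key);
    // Stop as soon as one insertion slot is free.
    if (data.size < maxEntries) break;
  }
}
```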
```diff
@@ -1,4 +1,5 @@
-import { buildStorage, canStale, isExpired } from './build.js';
+import { Result } from 'try';
+import { buildStorage, canRemoveStorageEntry } from './build.js';
 import type { StorageValue } from './types.js';
 
 /**
@@ -20,8 +21,19 @@ import type { StorageValue } from './types.js';
  * @param storage The type of web storage to use. localStorage or sessionStorage.
  * @param prefix The prefix to index the storage. Useful to prevent collision between
  * multiple places using the same storage.
+ * @param {number} maxStaleAge The maximum age in milliseconds a stale entry can stay
+ * in the storage before being removed. Otherwise, stale-able entries would stay
+ * indefinitely, eventually causing a memory leak. 1 hour (3_600_000) is default
  */
-export function buildWebStorage(storage: Storage, prefix = 'axios-cache-') {
+export function buildWebStorage(
+  storage: Storage,
+  prefix = 'axios-cache-',
+  maxStaleAge: number = 60 * 60 * 1000
+) {
+  function save(key: string, value: StorageValue) {
+    storage.setItem(prefix + key, JSON.stringify(value));
+  }
+
   return buildStorage({
     clear: () => {
       for (const key in storage) {
@@ -41,49 +53,81 @@ export function buildWebStorage(storage: Storage, prefix = 'axios-cache-') {
     },
 
     set: (key, value) => {
-      const save = () => storage.setItem(prefix + key, JSON.stringify(value));
+      const result = Result.try(save, key, value);
 
-      try {
-        return save();
-      } catch {
-        const allValues: [string, StorageValue][] = Object.entries(
-          storage as Record<string, string>
-        )
-          .filter((item) => item[0].startsWith(prefix))
-          .map((item) => [item[0], JSON.parse(item[1]) as StorageValue]);
+      if (result.ok) {
+        return;
+      }
 
-        // Remove all expired values
-        for (const value of allValues) {
-          if (value[1].state === 'cached' && isExpired(value[1]) && !canStale(value[1])) {
-            storage.removeItem(value[0]);
-          }
-        }
+      // we cannot hide non quota errors
+      if (!isDomQuotaExceededError(result.error)) {
+        throw result.error;
+      }
 
-        // Try save again after removing expired values
-        try {
-          return save();
-        } catch {
-          // Storage still full, try removing the oldest value until it can be saved
+      const allValues: [string, StorageValue][] = Object.entries(storage as Record<string, string>)
+        .filter(([key]) => key.startsWith(prefix))
+        .map(([key, value]) => [key, JSON.parse(value) as StorageValue]);
 
-          // Descending sort by createdAt
-          const sortedItems = allValues.sort(
-            (a, b) => (a[1].createdAt || 0) - (b[1].createdAt || 0)
-          );
+      // Remove all expired values
+      for (const [key, value] of allValues) {
+        if (canRemoveStorageEntry(value, maxStaleAge)) {
+          storage.removeItem(key);
+        }
+      }
 
-          for (const item of sortedItems) {
-            storage.removeItem(item[0]);
+      // Try save again after removing expired values
+      const retry = Result.try(save, key, value);
 
-            try {
-              return save();
-            } catch {
-              // This key didn't free all the required space
-            }
-          }
-        }
+      if (retry.ok) {
+        return;
+      }
 
-        // Clear the cache for the specified key
-        storage.removeItem(prefix + key);
+      // we cannot hide non quota errors
+      if (!isDomQuotaExceededError(retry.error)) {
+        throw retry.error;
+      }
+
+      // Storage still full, try removing the oldest value until it can be saved
+
+      const descItems = allValues.sort((a, b) => (a[1].createdAt || 0) - (b[1].createdAt || 0));
+
+      // Keep looping until all items are removed or the save works
+      for (const item of descItems) {
+        storage.removeItem(item[0]);
+
+        const lastTry = Result.try(save, key, value);
+
+        if (lastTry.ok) {
+          return;
+        }
+
+        // we cannot hide non quota errors
+        if (!isDomQuotaExceededError(lastTry.error)) {
+          throw lastTry.error;
+        }
       }
+
+      // Could not save even after removing all items, just ignore since it's
+      // a storage quota issue.
     }
   });
 }
+
+function isDomQuotaExceededError(error: unknown): boolean {
+  // Check if it's a DOMException by duck-typing (works across different DOMException implementations)
+  const isDOMException =
+    error instanceof DOMException ||
+    (typeof error === 'object' &&
+      error !== null &&
+      'name' in error &&
+      error.constructor?.name === 'DOMException');
+
+  return (
+    isDOMException &&
+    // https://stackoverflow.com/a/23375082
+    'name' in (error as any) &&
+    ((error as any).name === 'QuotaExceededError' ||
+      (error as any).name === 'NS_ERROR_DOM_QUOTA_REACHED' ||
+      (error as any).name === 'QUOTA_EXCEEDED_ERR')
+  );
+}
```
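The duck-typing in `isDomQuotaExceededError` above exists because a `DOMException` created in another realm (e.g. jsdom during tests) fails an `instanceof DOMException` check. A standalone approximation of the same idea:

```typescript
// Standalone sketch: detects storage-quota errors across realms by name,
// guarding for environments where DOMException may not exist globally.
function isQuotaError(error: unknown): boolean {
  const isDomException =
    (typeof DOMException !== 'undefined' && error instanceof DOMException) ||
    (typeof error === 'object' &&
      error !== null &&
      'name' in error &&
      (error as Error).constructor?.name === 'DOMException');

  return (
    isDomException &&
    // Browser-specific names for the same quota condition
    ['QuotaExceededError', 'NS_ERROR_DOM_QUOTA_REACHED', 'QUOTA_EXCEEDED_ERR'].includes(
      (error as Error).name
    )
  );
}
```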
```diff
@@ -225,10 +225,10 @@ describe('Request Interceptor', () => {
     });
 
     // Expects that the cache is empty (deleted above) and
-    // it still has a waiting entry.
+    // it has no waiting entries (prevent memory leaks).
     const { state } = await axios.storage.get(ID);
     assert.equal(state, 'empty');
-    assert.ok(axios.waiting.get(ID));
+    assert.equal(axios.waiting.get(ID), undefined);
 
     // This line should throw an error if this bug isn't fixed.
     await axios.get('url', { id: ID });
```
test/interceptors/waiting-memory-leak.test.ts (new file, 87 lines)
```ts
import assert from 'node:assert';
import { describe, it } from 'node:test';
import { buildMemoryStorage } from '../../src/storage/memory.js';
import { mockAxios } from '../mocks/axios.js';

describe('Waiting Memory Leak', () => {
  it('should clean up waiting map when entry is evicted from storage due to maxEntries', async () => {
    // Create storage with maxEntries=2 to force eviction
    const storage = buildMemoryStorage(false, false, 2);
    const axios = mockAxios({ storage });
    // Set a timeout on the axios instance for testing the cleanup mechanism
    axios.defaults.timeout = 100;

    // Make 3 concurrent requests to different URLs
    // The first request should be evicted when the third one starts
    const promise1 = axios.get('url1');
    const promise2 = axios.get('url2');
    const promise3 = axios.get('url3');

    // Wait for all requests to complete
    await Promise.all([promise1, promise2, promise3]);

    // Poll until waiting map is empty or timeout
    const startTime = Date.now();
    while (axios.waiting.size > 0 && Date.now() - startTime < 300) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }

    // The waiting map should be empty after timeout cleanup
    assert.equal(axios.waiting.size, 0, 'waiting map should be empty after all requests complete');
  });

  it('should clean up waiting map when loading entry is evicted during concurrent requests', async () => {
    // Create storage with maxEntries=1 to force aggressive eviction
    const storage = buildMemoryStorage(false, false, 1);
    const axios = mockAxios({ storage });
    axios.defaults.timeout = 100;

    // Start two concurrent requests
    const promise1 = axios.get('url1');
    const promise2 = axios.get('url2');

    // Wait for both to complete
    const [result1, result2] = await Promise.all([promise1, promise2]);

    // Verify responses are valid
    assert.ok(result1.data);
    assert.ok(result2.data);

    // Poll until waiting map is empty or timeout
    const startTime = Date.now();
    while (axios.waiting.size > 0 && Date.now() - startTime < 300) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }

    // The waiting map should be empty
    assert.equal(axios.waiting.size, 0, 'waiting map should be empty but has entries');
  });

  it('should handle multiple waves of concurrent requests with maxEntries', async () => {
    const storage = buildMemoryStorage(false, false, 2);
    const axios = mockAxios({ storage });
    axios.defaults.timeout = 100;

    // First wave of requests
    await Promise.all([axios.get('url1'), axios.get('url2'), axios.get('url3')]);

    // Poll until waiting map is empty or timeout
    let startTime = Date.now();
    while (axios.waiting.size > 0 && Date.now() - startTime < 300) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }

    assert.equal(axios.waiting.size, 0, 'waiting map should be empty after first wave');

    // Second wave of requests
    await Promise.all([axios.get('url4'), axios.get('url5'), axios.get('url6')]);

    // Poll until waiting map is empty or timeout
    startTime = Date.now();
    while (axios.waiting.size > 0 && Date.now() - startTime < 300) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }

    assert.equal(axios.waiting.size, 0, 'waiting map should be empty after second wave');
  });
});
```
```diff
@@ -58,10 +58,10 @@ describe('MemoryStorage', () => {
 
     data.data = 'another data';
 
-    assert.notEqual(storage.data.key, null);
-    assert.equal(storage.data.key!.state, 'cached');
-    assert.notEqual(storage.data.key!.data, null);
-    assert.equal(storage.data.key!.data!.data, 'data');
+    assert.notEqual(storage.data.get('key'), null);
+    assert.equal(storage.data.get('key')!.state, 'cached');
+    assert.notEqual(storage.data.get('key')!.data, null);
+    assert.equal(storage.data.get('key')!.data!.data, 'data');
 
     const result = (await storage.get('key')) as CachedStorageValue;
 
@@ -118,12 +118,12 @@ describe('MemoryStorage', () => {
     });
 
     // Ensure that the values are still there
-    assert.equal(storage.data.empty?.state, 'empty');
-    assert.equal(storage.data.stale?.state, 'stale');
-    assert.equal(storage.data.expiredStale?.state, 'stale');
-    assert.equal(storage.data.loading?.state, 'loading');
-    assert.equal(storage.data.cached?.state, 'cached');
-    assert.equal(storage.data.expiredCache?.state, 'cached');
+    assert.equal(storage.data.get('empty')?.state, 'empty');
+    assert.equal(storage.data.get('stale')?.state, 'stale');
+    assert.equal(storage.data.get('expiredStale')?.state, 'stale');
+    assert.equal(storage.data.get('loading')?.state, 'loading');
+    assert.equal(storage.data.get('cached')?.state, 'cached');
+    assert.equal(storage.data.get('expiredCache')?.state, 'cached');
 
     // Waits for the cleanup function to run
     await mockDateNow(600);
@@ -156,10 +156,10 @@ describe('MemoryStorage', () => {
       data: { ...EMPTY_RESPONSE, data: 'data' }
     });
 
-    assert.equal(Object.keys(storage.data).length, 2);
-    assert.ok(storage.data.key);
-    assert.ok(storage.data.key2);
-    assert.equal(storage.data.key3, undefined);
+    assert.equal(storage.data.size, 2);
+    assert.ok(storage.data.get('key'));
+    assert.ok(storage.data.get('key2'));
+    assert.equal(storage.data.get('key3'), undefined);
 
     await storage.set('key3', {
       state: 'cached',
@@ -168,11 +168,11 @@ describe('MemoryStorage', () => {
      data: { ...EMPTY_RESPONSE, data: 'data' }
     });
 
-    assert.equal(Object.keys(storage.data).length, 2);
+    assert.equal(storage.data.size, 2);
 
-    assert.equal(storage.data.key, undefined);
-    assert.ok(storage.data.key2);
-    assert.ok(storage.data.key3);
+    assert.equal(storage.data.get('key'), undefined);
+    assert.ok(storage.data.get('key2'));
+    assert.ok(storage.data.get('key3'));
   });
 
   it('tests maxEntries with cleanup', async () => {
@@ -199,11 +199,11 @@ describe('MemoryStorage', () => {
       data: { ...EMPTY_RESPONSE, data: 'data' }
     });
 
-    assert.equal(Object.keys(storage.data).length, 3);
-    assert.ok(storage.data.exp);
-    assert.ok(storage.data['not exp']);
-    assert.ok(storage.data.exp2);
-    assert.equal(storage.data.key, undefined);
+    assert.equal(storage.data.size, 3);
+    assert.ok(storage.data.get('exp'));
+    assert.ok(storage.data.get('not exp'));
+    assert.ok(storage.data.get('exp2'));
+    assert.equal(storage.data.get('key'), undefined);
 
     await storage.set('key', {
       state: 'cached',
@@ -212,11 +212,11 @@ describe('MemoryStorage', () => {
      data: { ...EMPTY_RESPONSE, data: 'data' }
     });
 
-    assert.equal(Object.keys(storage.data).length, 2);
+    assert.equal(storage.data.size, 2);
 
-    assert.equal(storage.data.exp, undefined);
-    assert.equal(storage.data.exp2, undefined);
-    assert.ok(storage.data['not exp']);
-    assert.ok(storage.data.key);
+    assert.equal(storage.data.get('exp'), undefined);
+    assert.equal(storage.data.get('exp2'), undefined);
+    assert.ok(storage.data.get('not exp'));
+    assert.ok(storage.data.get('key'));
   });
 });
```