
@uppy/utils,@uppy/xhr-upload: create and use new priority queue#6105

Draft
Murderlon wants to merge 9 commits into main from new-priority-queue

Conversation

@Murderlon (Member)

Closes #4173, kind of.

Problem

Current situation with RateLimitedQueue

  • Track concurrency: keep a running count, only dispatch up to limit, accept Infinity.
  • Priority enqueue: queued handlers sorted by priority; dequeued in that order.
  • Lifecycle bookkeeping: each job gets abort()/done() hooks; abort for running vs queued differs; decrement active count and advance queue via microtasks.
  • Requeue placeholders: supports a shouldBeRequeued marker so callers can hold/retry slots without running immediately.
  • Function wrappers: wrap sync or async fns to enforce the queue and return abortable handles.
  • Cancellation plumbing: provides abortOn(signal) to bind queued/running work to an AbortSignal; abortable promises carry abort().
  • Pausing: pause(duration?) freezes dispatch (optional auto-resume); resume() restarts up to the current limit.
  • Rate limiting/backoff: rateLimit(duration) pauses, drops concurrency, then ramps it back toward the previous upper bound over time.

Case in point: it's some sort of made-up monstrosity of a data structure trying to be too many things. It also has rate limiting and exponential backoff, but that already exists in fetcher too.

It would be better if we had separation of concerns and proven data structures.

Solution

A "dumb" promise-based priority queue that doesn't care at all about what a promise does or whether it needs to be retried. The promise itself determines whether it retries and applies exponential backoff, as a fetcher promise does.

To avoid a breaking change and a huge diff, we still implement this per uploader, starting with xhr-upload, and we keep backwards compatibility in the interface so we can still pass it to companion-client, which needs to share the same queue.
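As an illustration, here is a minimal sketch of such a "dumb" queue. All names are hypothetical — this is not the PR's actual TaskQueue — and a sorted array stands in for the heap to keep the sketch short:

```typescript
type Task<T> = {
  fn: () => Promise<T>
  priority: number
  resolve: (value: T) => void
  reject: (reason: unknown) => void
}

class SimplePriorityQueue {
  #queue: Task<any>[] = []
  #running = 0
  #limit: number

  constructor(limit: number) {
    this.#limit = limit
  }

  /** Enqueue a promise factory; higher `priority` runs first. */
  add<T>(fn: () => Promise<T>, priority = 0): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.#queue.push({ fn, priority, resolve, reject })
      // A real implementation would use a heap; sort() keeps the sketch short.
      // Array.prototype.sort is stable, so equal priorities stay FIFO.
      this.#queue.sort((a, b) => b.priority - a.priority)
      this.#advance()
    })
  }

  #advance(): void {
    while (this.#running < this.#limit && this.#queue.length > 0) {
      const task = this.#queue.shift() as Task<any>
      this.#running++
      // Promise.resolve().then(...) also catches synchronous throws from fn.
      Promise.resolve()
        .then(task.fn)
        .then(task.resolve, task.reject)
        .finally(() => {
          this.#running--
          this.#advance()
        })
    }
  }
}
```

A fetcher-style promise with its own retry and backoff logic can be passed in unchanged; the queue only decides when it starts.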

Ideal future

When you think about it, it's odd that we implement a queue per uploader when the queue is so central to Uppy working correctly. It's even odder that we also have to inject that queue into companion-client per uploader for queue sharing.

Ideally, core is responsible for having the queue. All uploaders would do is push a promise to core and core doesn't care if that promise is a tus, xhr, or S3 upload.

@Murderlon Murderlon self-assigned this Dec 12, 2025
@changeset-bot

changeset-bot bot commented Dec 12, 2025

⚠️ No Changeset found

Latest commit: 693364e

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.


Copilot AI (Contributor) left a comment


Pull request overview

This PR introduces a new TaskQueue implementation to replace the existing RateLimitedQueue, providing better separation of concerns through a simpler, promise-based priority queue. The change is initially implemented in the xhr-upload plugin while maintaining backward compatibility with companion-client.

Key changes:

  • New TaskQueue class with binary max-heap priority queue implementation
  • Updated xhr-upload to use TaskQueue instead of RateLimitedQueue
  • Maintained legacy compatibility methods (wrapPromiseFunction, abortOn) for companion-client integration
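The binary max-heap mentioned above keeps the highest-priority task at the root, so enqueue and dequeue are O(log n). A minimal, self-contained sketch for readers unfamiliar with the structure (illustrative only, not the PR's implementation):

```typescript
class MaxHeap<T> {
  #items: T[] = []
  #compare: (a: T, b: T) => number

  constructor(compare: (a: T, b: T) => number) {
    this.#compare = compare
  }

  get size(): number {
    return this.#items.length
  }

  push(item: T): void {
    const items = this.#items
    items.push(item)
    let i = items.length - 1
    // Sift up: swap with the parent while the new item compares larger.
    while (i > 0) {
      const parent = (i - 1) >> 1
      if (this.#compare(items[i], items[parent]) <= 0) break
      const tmp = items[i]
      items[i] = items[parent]
      items[parent] = tmp
      i = parent
    }
  }

  pop(): T | undefined {
    const items = this.#items
    if (items.length === 0) return undefined
    const top = items[0]
    const last = items.pop() as T
    if (items.length > 0) {
      items[0] = last
      // Sift down: swap with the larger child while out of order.
      let i = 0
      for (;;) {
        const left = 2 * i + 1
        const right = left + 1
        let largest = i
        if (left < items.length && this.#compare(items[left], items[largest]) > 0) largest = left
        if (right < items.length && this.#compare(items[right], items[largest]) > 0) largest = right
        if (largest === i) break
        const tmp = items[i]
        items[i] = items[largest]
        items[largest] = tmp
        i = largest
      }
    }
    return top
  }
}
```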

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

  • packages/@uppy/utils/src/TaskQueue.ts — New priority queue implementation with max-heap, abort support, and pause/resume functionality
  • packages/@uppy/utils/src/TaskQueue.test.ts — Basic test coverage for priority ordering, abort handling, and clear functionality
  • packages/@uppy/utils/src/index.ts — Exports for new TaskQueue types and class
  • packages/@uppy/xhr-upload/src/index.ts — Migrated from RateLimitedQueue to TaskQueue, simplified abort signal handling
  • packages/@uppy/locales/src/en_US.ts — Removed unused chooseFiles string and added failedToAddFiles


@qxprakash (Collaborator) left a comment


One has to go through RateLimitedQueue.ts to appreciate how clean this looks. I just have a few minor questions, nothing major.

/**
 * Pause the queue. Running tasks continue, but no new tasks start.
 */
pause(): void {

Previously we had timed pausing (e.g. 5000 ms) with auto-resume, which was used by rateLimit. Is that still to be implemented here, or will it be handled in fetcher?

pause(duration: number | null = null): void {
  this.#paused = true
  clearTimeout(this.#pauseTimer)
  if (duration != null) {
    this.#pauseTimer = setTimeout(this.#resume, duration)
  }
}

/**
 * Pauses the queue for a duration, and lower the limit of concurrent requests
 * when the queue resumes. When the queue resumes, it tries to progressively
 * increase the limit in `this.#increaseLimit` until another call is made to
 * `this.rateLimit`.
 * Call this function when using the RateLimitedQueue for network requests and
 * the remote server responds with 429 HTTP code.
 *
 * @param {number} duration in milliseconds.
 */
rateLimit(duration: number): void {
  clearTimeout(this.#rateLimitingTimer)
  this.pause(duration)
  if (this.limit > 1 && Number.isFinite(this.limit)) {
    this.#upperLimit = this.limit - 1
    this.limit = this.#downLimit
    this.#rateLimitingTimer = setTimeout(this.#increaseLimit, duration)
  }
}

const tasks = this.#heap.clear()
const error = reason ?? new DOMException('Cleared', 'AbortError')
for (const task of tasks) {
  task.controller.abort(error)

Nitpick: is this necessary? When you call abort() here (after #heap.clear() above), the abort listener fires:

// Handle abort while queued
controller.signal.addEventListener(
  'abort',
  () => {
    if (this.#heap.remove(queuedTask as QueuedTask<unknown>)) {
      reject(
        controller.signal.reason ??
          new DOMException('Aborted', 'AbortError'),
      )
    }
  },
  { once: true },

Here the conditional `if (this.#heap.remove(queuedTask as QueuedTask<unknown>))` would return false, because the heap has already been cleared.

Comment on lines +249 to +270
task
  .run()
  .then(
    (result) => {
      if (task.controller.signal.aborted) {
        task.reject(
          task.controller.signal.reason ??
            new DOMException('Aborted', 'AbortError'),
        )
      } else {
        task.resolve(result)
      }
    },
    (error) => {
      task.reject(error)
    },
  )
  .finally(() => {
    this.#running--
    this.#advance()
  })
}

Can we wrap this in a promise chain, or a try/catch? We might have an edge case here: if task.run() throws synchronously, that would lead to wrong running counts, since then() and finally() won't get executed. This was handled explicitly in the old code:

queuedRequest = this.run(() => {
  let cancelError: ReturnType<typeof createCancelError>
  let innerPromise: Promise<Awaited<ReturnType<T>>>
  try {
    innerPromise = Promise.resolve(fn(...args))
  } catch (err) {
    innerPromise = Promise.reject(err)
  }
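One way to address this edge case (a sketch, not necessarily how the PR resolves the comment): start the chain with Promise.resolve().then(...), so a synchronous throw from the task's run function is routed into the rejection path and .finally() always runs, keeping the running count correct. The helper name here is made up for illustration:

```typescript
// Hypothetical helper: run a task while tracking a concurrency counter.
// Promise.resolve().then(run) converts a synchronous throw into a rejection,
// so .finally() is guaranteed to execute and decrement the counter.
function runCounted<T>(
  run: () => Promise<T> | T,
  counter: { running: number },
): Promise<T> {
  counter.running++
  return Promise.resolve()
    .then(run) // a sync throw inside `run` becomes a rejection here
    .finally(() => {
      counter.running--
    })
}
```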

@qxprakash qxprakash changed the base branch from main to uppy-aws-s3-rewrite February 2, 2026 12:35
@qxprakash qxprakash changed the base branch from uppy-aws-s3-rewrite to main February 4, 2026 12:54


Development

Successfully merging this pull request may close these issues.

Rethink the rate limited queue

2 participants