
Conversation

@andriyDev
Contributor

@andriyDev andriyDev commented Oct 31, 2025

Objective

  • The previous asset processing loop was very difficult to understand (IMO).
    • The initial processing pass would start a bunch of tasks. The event loop would then listen for events, awaiting processing tasks one at a time before returning to listen for more events. Finishing a task would also add paths to a separate check_reprocess_queue, which would only be checked after all the current events had been handled.
  • Processing tasks also did not run in parallel - so we'd process assets one at a time.

Solution

Approximately throw everything out. The asset processor now does these things:

  1. Initialize the processor: same as before, recover from the transaction log, initialize the state of all processed assets (so we can lock them).
  2. Queue all the initial processing tasks: iterate through all processed sources, find all their assets, and queue a task for each one (to recheck whether it needs to be processed, and reprocess it if so). Note that we don't spawn any bevy_tasks::Tasks here.
  3. Spawn the "executor" bevy_tasks::Task: this task spawns the queued tasks and updates the overall state of processing (i.e., processing vs. finished).
  4. Spawn the source change event listeners: spawns a bevy_tasks::Task for each asset source to listen on its event receiver and queue up new tasks as source assets change.

So this parallelizes event processing from asset sources, parallelizes processing each asset, and (IMO) makes the whole processing loop much simpler.
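
To make the new shape concrete, here's a minimal sketch of steps 2-4. Everything in it is illustrative (ProcessTask, queue_initial, executor, and listen are made-up names, and the state/failure bookkeeping is elided), not the actual code in this PR:

```rust
use async_channel::{Receiver, Sender};
use bevy_tasks::IoTaskPool;

// Hypothetical task type: an initial recheck (step 2) or a
// change-driven reprocess (step 4).
enum ProcessTask {
    Check(String),     // asset path to recheck
    Reprocess(String), // asset path that changed
}

// Step 2: queue a Check task for every known asset. No bevy_tasks::Task
// is spawned yet; the paths just sit in the channel.
fn queue_initial(all_paths: Vec<String>, queue: &Sender<ProcessTask>) {
    for path in all_paths {
        let _ = queue.try_send(ProcessTask::Check(path));
    }
}

// Step 3: the "executor" drains the queue and spawns one
// bevy_tasks::Task per item, so assets are processed in parallel.
async fn executor(queue: Receiver<ProcessTask>) {
    while let Ok(task) = queue.recv().await {
        IoTaskPool::get()
            .spawn(async move {
                match task {
                    ProcessTask::Check(_path) => { /* recheck; reprocess if stale */ }
                    ProcessTask::Reprocess(_path) => { /* reprocess the asset */ }
                }
            })
            .detach();
    }
    // Once the queue closes and the spawned tasks finish, flip the
    // overall state from Processing to Finished (bookkeeping elided).
}

// Step 4: one listener per asset source, forwarding change events
// into the same queue as new tasks.
async fn listen(source_events: Receiver<String>, queue: Sender<ProcessTask>) {
    while let Ok(changed_path) = source_events.recv().await {
        let _ = queue.send(ProcessTask::Reprocess(changed_path)).await;
    }
}
```

The executor and the listeners are each spawned as their own bevy_tasks::Task at startup; the shared queue is the only coordination between them.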

Also I think it's funny that parallelizing could make things simpler lol.

Caveats

  • I've removed the public methods for process_assets and listen_for_source_change_events. My guess is these were public so that users could call them outside the context of a running app? I'm not entirely sure. If that's the case, I think this needs to be rethought: for one, a running app currently will not be gated on processing from another app, meaning things will probably get out-of-sync very easily. If need be, I think we can bring these back fairly straightforwardly. There also isn't a migration guide, since there's nothing to migrate to here.
  • Parallelizing asset processing could be bad for very large tasks. Some glTF files can get REALLY big, and managing memory there is very important (though we're still bad at this). So parallelizing asset processing can result in many tasks running concurrently and consuming more memory, with no way to control it. However, I think this is a more general problem, and we should find solutions other than "don't parallelize".

Testing

  • The asset processing tests still pass!
  • The asset_processing example seems to behave the same!

@andriyDev andriyDev added A-Assets Load files from disk to use for things like images, models, and sounds C-Performance A change motivated by improving speed, memory usage or compile times C-Code-Quality A section of code that is hard to understand or change D-Modest A "normal" level of difficulty; suitable for simple features or challenging fixes S-Needs-Review Needs reviewer attention (from anyone!) to move forward labels Oct 31, 2025
@alice-i-cecile
Member

So parallelizing asset processing can result in many tasks running concurrently and consuming more memory, with no way to control it. However, I think this is a more general problem, and we should find solutions other than "don't parallelize".

In the future, can we have some way of a) polling memory usage and b) predicting memory usage and then gating the task spawning on that?

For now, can we add a run-time configurable maximum number of active tasks?

@andriyDev andriyDev force-pushed the simpler-processing-loop branch from 194cc07 to f9f2112 on October 31, 2025 04:29
@andriyDev
Contributor Author

@alice-i-cecile I think my initial assessment was wrong. The previous implementation was no better than this PR for the initial processing case - previously it ran all the tasks concurrently rather than in parallel, but that still meant we were using up all that memory while all the tasks were running.

I think we should leave any sort of memory control to a future PR, and I don't really want to complicate this PR with like a semaphore. Thoughts?
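
To be concrete about what I'd be deferring: a bounded version could look roughly like the sketch below. This is not part of the PR - async_lock::Semaphore is just one possible primitive, and spawn_bounded, process_asset, and max_concurrent_tasks are made-up names:

```rust
use std::sync::Arc;

use async_lock::Semaphore;
use bevy_tasks::IoTaskPool;

// Hypothetical: cap how many asset-processing tasks do work at once.
// `max_concurrent_tasks` would be the runtime-configurable knob.
fn spawn_bounded(paths: Vec<String>, max_concurrent_tasks: usize) {
    let semaphore = Arc::new(Semaphore::new(max_concurrent_tasks));
    for path in paths {
        let semaphore = semaphore.clone();
        IoTaskPool::get()
            .spawn(async move {
                // Each task waits for a free slot before touching the asset,
                // so at most `max_concurrent_tasks` are in flight at a time.
                let _permit = semaphore.acquire().await;
                process_asset(path).await;
                // `_permit` drops here, releasing the slot.
            })
            .detach();
    }
}

// Stand-in for the real per-asset processing work.
async fn process_asset(_path: String) {}
```

Note the tasks themselves still all get spawned up front; only the work is gated, so this bounds processing memory rather than task overhead.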

@andriyDev
Contributor Author

Also CI is just broken by #21702, so that's not relevant.

@andriyDev andriyDev force-pushed the simpler-processing-loop branch 2 times, most recently from a96570d to 073d971 on November 1, 2025 04:57
@andriyDev andriyDev requested a review from kristoff3r November 1, 2025 04:58
@andriyDev andriyDev added this to the 0.18 milestone Nov 1, 2025
@andriyDev andriyDev force-pushed the simpler-processing-loop branch from 073d971 to 8d2347c on November 4, 2025 00:36
Contributor

@ChristopherBiscardi ChristopherBiscardi left a comment

Makes sense to me and feels like a good change. My first thought was also "what happens if you load 'too many' at once" with the new parallelism and no controls, but that has been addressed and there's plenty of time before the next release for that to be tested and dealt with if necessary.

@andriyDev andriyDev added S-Ready-For-Final-Review This PR has been approved by the community. It's ready for a maintainer to consider merging it and removed S-Needs-Review Needs reviewer attention (from anyone!) to move forward labels Nov 5, 2025