The issue is primarily noticeable for frequency sweeps, especially when configured with little (or no) smoothing. In the case of the waveform visualization, I'd expect it to appear as an effective limit on the framerate, but I haven't used that mode myself.
This is caused by Cava always using all of the most recent samples whenever it renders a new frame, with no consideration for their actual "age," which can vary from frame to frame since audio is captured in fixed chunks of approximately 10 ms.
I've implemented a simple approach to improving this here: it estimates how "old" the most recently received samples are, and holds back some of the newest samples until a later frame if they're "too new." I went with 14 ms as the threshold, which seems to be more than enough based on my testing. The age estimate is based on the difference between timestamps collected in `write_to_cava_input_buffers` and right before the samples are processed into bars.
I've been using this for a few days, along with a very similar approach in a personal project of mine, and so far it works great. That said, I've only really considered my particular use case, so there might be something I've overlooked. I'm also not able to test it on Windows, though I did try to make it work. I've attached a short clip comparing my changes to the latest commit as of writing (`0cc460a`), slowed down by 4.8x (from 144 FPS to 30 FPS) and configured with `noise_reduction = 0` to make the difference more obvious.
0001-0200.mp4