fix: pass pagination limit to log callables to prevent OOM (#806) #1
Open
JeremiahChurch wants to merge 9 commits into master
v2.7.4 Fixes & Features
Log endpoints load ALL log files (current + every rotated/compressed copy) into memory before Model::read_all() applies pagination via array_slice(). Even ?limit=1 loads the entire dataset, causing PHP memory exhaustion (512MB) on systems with moderate log volumes.

This passes the pagination limit from Model::read_all() through get_internal_objects() to log-reading callables so they stop early.

LogFileModelTraits changes:
- read_log() accepts optional $limit parameter
- When limit > 0, reads newest files first and stops when satisfied
- Plain files: reverse-reads from EOF via fseek - O(limit) memory
- Compressed files: streams via PHP stream wrappers (compress.zlib://, compress.bzip2://) and popen (xz) with a ring buffer - O(limit) memory for all formats
- gather_log_filepaths() uses glob($base.'.*') to avoid matching unrelated files, sorts newest-first by rotation number
- Unbounded reads (limit=0) preserve existing behavior exactly

Model.inc changes:
- get_internal_objects() accepts optional $limit hint
- Forwards limit to callables via cached reflection check (backward-compatible with callables that don't accept it)
- read_all() passes limit+offset down; skips optimization when reverse=true since that needs the full dataset

All 6 log model callables updated to accept and forward limit. 10 new test cases covering bounded reads, edge cases, mixed compression, and unrelated file filtering.

Tested on live pfSense (FreeBSD 15.0, 28MB filter.log + 31 rotated bz2 files): memory dropped from 512MB+ (OOM) to 22MB, response time 1ms for ?limit=10.

Fixes pfrest#806
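The reverse read of plain files described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: `tail_lines()` is a hypothetical helper name, and the chunked backward scan is one way to keep memory proportional to `$limit` lines rather than total file size.

```php
<?php
// Hypothetical sketch of a bounded tail read: return the last $limit
// lines of a plain-text log by seeking backward from EOF in fixed-size
// chunks. Memory stays O(limit) lines regardless of file size.
function tail_lines(string $path, int $limit, int $chunk_size = 8192): array {
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        return [];
    }
    fseek($fh, 0, SEEK_END);
    $pos = ftell($fh);
    $buffer = '';
    // Read backward in chunks until the buffer holds more than $limit
    // newlines (guaranteeing at least $limit complete lines) or we hit
    // the start of the file.
    while ($pos > 0 && substr_count($buffer, "\n") <= $limit) {
        $read = min($chunk_size, $pos);
        $pos -= $read;
        fseek($fh, $pos, SEEK_SET);
        $buffer = fread($fh, $read) . $buffer;
    }
    fclose($fh);
    $lines = explode("\n", $buffer);
    // Drop the trailing empty element produced by a final newline.
    if (end($lines) === '') {
        array_pop($lines);
    }
    // array_slice(-$limit) also discards any partial leading line left
    // over from stopping mid-chunk.
    return array_slice($lines, -$limit);
}
```

If the file holds fewer than `$limit` lines, the slice simply returns them all, which matches the unbounded fallback behavior.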
Summary
Fixes the PHP memory exhaustion issue described in pfrest#806.
Log endpoints (`/api/v2/status/logs/*`) load all log files - including every rotated/compressed copy - into memory before `Model::read_all()` applies pagination via `array_slice()`. Even `?limit=1` triggers loading the entire log history, causing OOM at 512MB on systems with moderate log volumes.

This PR passes the pagination `limit` from `Model::read_all()` down through `get_internal_objects()` to the log-reading callables, so they can stop reading early once enough lines are collected.

Changes
`LogFileModelTraits.inc` - the core fix:
- `read_log()` gains an optional `$limit` parameter
- `$limit > 0`: reads newest files first, stops when enough lines are collected; plain files are reverse-read from EOF via `fseek()` - O(limit) memory
- `gather_log_filepaths()` now returns newest-first ordering with explicit sort
- `$limit = 0` (default): preserves existing behavior exactly

`Model.inc` - plumbing:
- `get_internal_objects()` gains an optional `int $limit` parameter
- Uses `ReflectionMethod` to check if the callable accepts a `$limit` param before passing it (backward-compatible with callables that don't)
- `read_all()` passes `$limit + $offset` as the max-needed hint

All 6 log models - trivial forwarding:
- `FirewallLog`, `SystemLog`, `DHCPLog`, `AuthLog`, `OpenVPNLog`, `RESTAPILog` each gain `int $limit = 0` and forward it to `read_log()`

Performance impact
Benchmarked scenarios: `?limit=50` (no filter) and `?limit=50&text__contains=X` (filtered, which still requires a full read).

For `?limit=50` - the common case - this typically reads only the tail of the current log file and never touches the compressed rotated copies, reducing memory from O(total_log_size) to O(limit).

Backward compatibility
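The compressed rotated copies can't be seeked from the end, so the PR streams them through PHP's compression stream wrappers with a ring buffer. A rough sketch of that idea, with an illustrative `tail_compressed()` name and an extension-based wrapper choice that are assumptions rather than the PR's actual code (the PR additionally handles xz via `popen`, omitted here):

```php
<?php
// Hypothetical sketch: stream a compressed log through a PHP compression
// wrapper and keep only the last $limit lines in an SplQueue used as a
// ring buffer, so memory stays O(limit) even for large archives.
function tail_compressed(string $path, int $limit): array {
    // Pick the wrapper by extension (PHP 8+ for str_ends_with).
    $wrapper = str_ends_with($path, '.bz2')
        ? 'compress.bzip2://'
        : 'compress.zlib://';   // handles .gz
    $fh = fopen($wrapper . $path, 'rb');
    if ($fh === false) {
        return [];
    }
    $ring = new SplQueue();
    while (($line = fgets($fh)) !== false) {
        $ring->enqueue(rtrim($line, "\n"));
        if ($ring->count() > $limit) {
            $ring->dequeue();   // evict oldest line to bound memory
        }
    }
    fclose($fh);
    return iterator_to_array($ring, false);
}
```

Decompression is still sequential (the whole archive is read once), but only `$limit` lines are ever held in memory, which matches the O(limit) claim above.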
- Response format unchanged (still `{'text': ...}` entries)
- Callables without a `$limit` parameter continue to work (reflection check)
- `query()` with filter parameters still does full reads (required for correctness)

Test plan
- Existing `LogFileModelTraits` test cases pass unchanged (unbounded reads)
- `GET /api/v2/status/logs/firewall?limit=50` no longer OOMs on systems with large log files
- `GET /api/v2/status/logs/firewall` (no limit) still returns all entries

🤖 Generated with Claude Code
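For illustration, the backward-compatible forwarding (the reflection check mentioned under Changes) can be sketched like this. The function name is hypothetical, and the PR additionally caches the reflection result, which is omitted here for brevity:

```php
<?php
// Hypothetical sketch: forward $limit to a log-reading callable only when
// its signature declares a parameter for it, so legacy callables that
// take no arguments keep working unchanged.
function call_with_optional_limit(callable $callable, int $limit)
{
    $ref = is_array($callable)
        ? new ReflectionMethod($callable[0], $callable[1])
        : new ReflectionFunction($callable);

    // Callables that declare at least one parameter receive the hint.
    if ($ref->getNumberOfParameters() >= 1) {
        return $callable($limit);
    }
    return $callable();   // legacy callable: unbounded read
}
```

Because the limit is only a hint, a legacy callable that ignores it still returns correct (if larger) results; `read_all()` applies the final `array_slice()` either way.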