gh-143040: Exit tachyon live mode gracefully and display profiled script errors #143101
Conversation
We need to rebase/merge main to fix conflicts
Looks like we have some problems with macOS and unclosed files (I think your subprocess call needs to be a context manager).
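For context, the pattern I mean (a minimal sketch, not the exact fix; the profiled command here is just a stand-in):

```python
import subprocess
import sys

# Popen as a context manager closes the stdin/stdout/stderr pipes on
# exit, which is what silences the "unclosed file" ResourceWarning
# that the macOS buildbots are tripping over.
with subprocess.Popen(
    [sys.executable, "-c", "1/0"],  # stand-in for the profiled target
    stderr=subprocess.PIPE,
) as process:
    stderr = process.stderr.read()  # read before wait() so the pipe can't fill up
    process.wait()

if stderr:
    print(stderr.decode(), file=sys.stderr)
```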
This may be enough (didn't check):

```diff
diff --git a/Lib/profiling/sampling/cli.py b/Lib/profiling/sampling/cli.py
index 4d2d5b37453..bb7e629a993 100644
--- a/Lib/profiling/sampling/cli.py
+++ b/Lib/profiling/sampling/cli.py
@@ -1061,13 +1061,14 @@ def _handle_live_run(args):
     process.wait()
     # Read any stderr output (tracebacks, errors, etc.)
     if process.stderr:
-        try:
-            stderr = process.stderr.read()
-            if stderr:
-                print(stderr.decode(), file=sys.stderr)
-        except (OSError, ValueError):
-            # Ignore errors if pipe is already closed
-            pass
+        with process.stderr:
+            try:
+                stderr = process.stderr.read()
+                if stderr:
+                    print(stderr.decode(), file=sys.stderr)
+            except (OSError, ValueError):
+                # Ignore errors if pipe is already closed
+                pass


 def _handle_replay(args):
diff --git a/Lib/test/test_profiling/test_sampling_profiler/test_live_collector_ui.py b/Lib/test/test_profiling/test_sampling_profiler/test_live_collector_ui.py
index b492b471f82..dec850a83e9 100644
--- a/Lib/test/test_profiling/test_sampling_profiler/test_live_collector_ui.py
+++ b/Lib/test/test_profiling/test_sampling_profiler/test_live_collector_ui.py
@@ -839,7 +839,7 @@ def mock_init_curses_side_effect(self, n_times, mock_self, stdscr):

     @unittest.skipIf(is_emscripten, "subprocess not available")
     def test_run_failed_module_live(self):
-        """Test that running a existing module that fails exists with clean error."""
+        """Test that running an existing module that fails exits with a clean error."""
         args = [
             "profiling.sampling.cli", "run", "--live", "-m", "test",
@@ -857,18 +857,19 @@ def test_run_failed_module_live(self):
             mock.patch('sys.stderr', new=io.StringIO()) as fake_stderr
         ):
             main()
-        self.assertStartsWith(
-            fake_stderr.getvalue(),
-            '\x1b[31mtest test_asdasd crashed -- Traceback (most recent call last):'
-        )
+        stderr = fake_stderr.getvalue()
+        # Check that error output contains the crash message and traceback
+        # (without checking exact ANSI codes which vary by environment)
+        self.assertIn('test_asdasd', stderr)
+        self.assertIn('Traceback (most recent call last):', stderr)

     @unittest.skipIf(is_emscripten, "subprocess not available")
     def test_run_failed_script_live(self):
         """Test that running a failing script exits with clean error."""
-        script = tempfile.NamedTemporaryFile(suffix=".py")
+        script = tempfile.NamedTemporaryFile(suffix=".py", delete=False)
         self.addCleanup(close_and_unlink, script)
         script.write(b'1/0\n')
-        script.seek(0)
+        script.flush()

         args = ["profiling.sampling.cli", "run", "--live", script.name]

@@ -884,13 +885,10 @@ def test_run_failed_script_live(self):
         ):
             main()
         stderr = fake_stderr.getvalue()
-        self.assertIn(
-            'sample(s) collected (minimum 200 required for TUI)', stderr
-        )
-        self.assertEndsWith(
-            stderr,
-            'ZeroDivisionError\x1b[0m: \x1b[35mdivision by zero\x1b[0m\n\n'
-        )
+        # Check that output contains the error information
+        # (without checking exact ANSI codes which vary by environment)
+        self.assertIn('ZeroDivisionError', stderr)
+        self.assertIn('division by zero', stderr)


 if __name__ == "__main__":
```
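For the record, the tempfile changes matter on their own: `seek(0)` only rewinds the parent's file handle, while the CLI runs the script in a subprocess that opens it by name, so the bytes have to reach disk via `flush()`; and `delete=False` allows reopening the file by name while the handle is still open (which Windows otherwise forbids). A minimal sketch of the pattern, with cleanup inlined instead of using the test's `close_and_unlink` helper:

```python
import os
import subprocess
import sys
import tempfile

script = tempfile.NamedTemporaryFile(suffix=".py", delete=False)
try:
    script.write(b"1/0\n")
    script.flush()  # push buffered bytes to disk; the child opens the file by name

    # The child process sees the full contents only because of flush() above.
    result = subprocess.run(
        [sys.executable, script.name], capture_output=True, text=True
    )
    assert "ZeroDivisionError" in result.stderr
finally:
    script.close()
    os.unlink(script.name)
```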
This change shows the error trace when a profiled script (or Python module) fails in live mode.
Some output examples:
When running a script that fails immediately, live mode won't even start and you'll see the traceback directly instead.
When running a test that doesn't exist, you'll see brief profiling output, and when exiting live mode you'll see the traceback.
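To reproduce the first case locally, something like this sketch should work (assuming the CLI is invoked as `python -m profiling.sampling.cli`, which is what the patched `sys.argv` in the tests suggests):

```python
import subprocess
import sys
import tempfile

# A script that raises immediately, so live mode never starts and the
# traceback is printed directly.
with tempfile.NamedTemporaryFile(
    suffix=".py", delete=False, mode="w"
) as script:
    script.write("1/0\n")

subprocess.run(
    [sys.executable, "-m", "profiling.sampling.cli",
     "run", "--live", script.name]
)
```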