The fourth part of my journey down the Python debugger rabbit hole (part 1, part 2, and part 3).
In this article, we’ll look into how changes introduced in Python 3.12 can help us with one of the most significant pain points of our current debugger implementation: the Python interpreter essentially calls our callback at every line of code, regardless of whether we have a breakpoint in the currently running method. But why is this the case?
Consider the following example program:
```python
def callee(i):
    i = i + 1
    return i + 1


def caller(i):
    j = i * 2
    j = callee(j)
    return j + 1


caller(10)
caller(20)
```
You can find this code in the test.py file in the GitHub repository.
Consider now that the user placed a breakpoint at line 3 in the method callee. If this is the only breakpoint, we should be able to skip setting a callback for the lines of the caller method. The problem is that we can only decide whether to set the callback for lines (and more) at the beginning of the execution of every method.
This is a problem when the user adds breakpoints to methods up the call stack. In our example, the user might want to set a breakpoint at line 8 in the caller method while stopped at the breakpoint in line 3. Therefore, we have to return a callback for every method in our sys.settrace handler.
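To see why, here is a minimal sketch of the old mechanism (illustration only, not the series' Dbg code): with sys.settrace, the global trace function decides at each 'call' event whether line events should be reported for that frame, so a debugger that may get a breakpoint anywhere must return a local trace function for every single call:

```python
import sys

calls = 0  # 'call' events: one per function invocation
lines = 0  # 'line' events: one per executed line


def global_trace(frame, event, arg):
    global calls
    if event == 'call':
        calls += 1
        # To observe line events anywhere, we must return a local trace
        # function here -- for *every* call, even in methods without
        # any breakpoint.
        return local_trace


def local_trace(frame, event, arg):
    global lines
    if event == 'line':
        lines += 1
    return local_trace


def callee(i):
    i = i + 1
    return i + 1


def caller(i):
    j = i * 2
    j = callee(j)
    return j + 1


sys.settrace(global_trace)
caller(10)
caller(20)
sys.settrace(None)

print(calls, lines)
```

Even though we never inspect anything, every call and every executed line pays the tracing price: two calls of caller yield four traced calls and ten line events.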
Short Introduction to PEP 669
This is where PEP 669 – Low Impact Monitoring for CPython by Mark Shannon comes to the rescue: with this new API, we can register and deregister callbacks for every event, at both the method and the global level, even while a method is running, and we can configure which events trigger a callback. It is possible to activate additional events at the method level, but not to disable globally set events.
Furthermore, the API supports the concept of tools, where callbacks and enabled events are configured per tool (six different tools are possible). This allows for even more granular configurations and makes it possible for multiple libraries to use the monitoring API simultaneously.
While PEP 669 itself is great to read and gives all the information on the new API, the following is a tiny demo that uses all the functionality we’ll need to implement our debugger later. You can find the demo code in the misc/new_api_demo folder.
For this demo, we introduce a small logging library that allows you to log the evaluation of lines and method starts. The API is quite simple yet uses nearly all functions of the PEP 669 API (file dbg.py):
```python
import sys  # the new monitoring API lives in sys.monitoring
from types import CodeType

# some aliases
mon = sys.monitoring
E = mon.events


class Debugger:
    """ Demo for the new monitoring API """

    def __init__(self, tool_id: int = mon.DEBUGGER_ID):
        # We use the debugger id by default
        # others available are (typically used for different use cases):
        #   sys.monitoring.COVERAGE_ID = 1
        #   sys.monitoring.PROFILER_ID = 2
        #   sys.monitoring.OPTIMIZER_ID = 5
        self.tool_id = tool_id
        # from the documentation:
        #   sys.monitoring.use_tool_id raises a ValueError if id is in use.
        mon.use_tool_id(self.tool_id, "dbg")
        # register callbacks for the events we are interested in
        # LINE:
        #   An instruction is about to be executed that has
        #   a different line number from the preceding instruction. (doc)
        mon.register_callback(self.tool_id, E.LINE, self.line_handler)
        # PY_START:
        #   Start of a Python function (occurs immediately after the call,
        #   the callee's frame will be on the stack)
        mon.register_callback(self.tool_id, E.PY_START, self.start_handler)
        # We enable the PY_START event globally.
        # Be aware that setting global events is regarded to be quite
        # expensive when done late in the program.
        mon.set_events(self.tool_id, E.PY_START)

    def line_handler(self, code: CodeType, line_number: int):
        """ Handler for the LINE event """
        print(f" {code.co_name}: {line_number}")

    def start_handler(self, code: CodeType, instruction_offset: int):
        """ Handler for the PY_START event """
        if code.co_filename != __file__:
            # only print if we are not in this file
            print(f"started {code.co_name}")

    def enable_line_event(self, code: CodeType):
        """ Enable line events for a specific code object """
        mon.set_local_events(self.tool_id, code, E.LINE)

    def disable_local_events(self, code: CodeType):
        """
        Disable all local events

        Global events are still emitted
        """
        mon.set_local_events(self.tool_id, code, 0)
```
This class can now be used for basic instrumentation in our test.py example from above:
```python
from dbg import Debugger

dbg = Debugger()


def callee(i):
    i = i + 1
    return i + 1


def caller(i):
    # enable line events for caller
    dbg.enable_line_event(caller.__code__)
    j = i * 2
    j = callee(j)
    # disable all local events, like line events, for caller
    dbg.disable_local_events(caller.__code__)
    return j + 1


caller(10)
# enable line events for callee
dbg.enable_line_event(callee.__code__)
caller(20)
```
In this example, we enable the line events only for a portion of the caller method, something that would not be possible with the old API. This shows how powerful the new API is: it makes our debugger code less finicky and debugging faster.
Now we implement a basic debugger, without single-stepping support, based on the Dbg class we wrote in part 3 of this article series.
Basic Debugger
The debugger uses lots of the code from before. It sets handlers for line and start events and enables start events globally and line events locally for the main application code object. The latter is only required because we want to open a shell when the first line of our application’s code is executed (you can find the code in file dbg3.py):
```python
class NewDbg(dbg2.Dbg):
    """ PEP 669 (Low Impact Monitoring for CPython) based debugger """

    def __init__(self, tool_id: int = mon.DEBUGGER_ID):
        super().__init__()
        self.tool_id = tool_id
        self.code_objects_with_local_events = set()
        # register the tool
        mon.use_tool_id(self.tool_id, "dbg")
        # register callbacks for the events we are interested in
        mon.register_callback(self.tool_id, E.LINE, self.line_handler)
        mon.register_callback(self.tool_id, E.PY_START, self.start_handler)
        # enable PY_START event globally
        mon.set_events(self.tool_id, E.PY_START)

    def _process_compiled_code(self, code: types.CodeType):
        # enable line events for the main application code object
        mon.set_local_events(self.tool_id, code, E.LINE)
        self._initial_code_object = code
```
In the line event handler, the code handles line events for the first line and checks whether the current line has a breakpoint, opening a breakpoint shell if necessary:
```python
def line_handler(self, code: CodeType, line_number: int):
    """ Handler for the LINE event """
    frame = sys._getframe(1)
    if self._is_first_call:
        if code == self._initial_code_object:
            # we are in the first call
            self._is_first_call = False
            # run the start shell
            self._breakpoint(frame, reason="start")
        return
    if (br := self.manager.get_breakpoint(code, line_number)) is not None:
        # we have a breakpoint
        if br.test(frame.f_globals, frame.f_locals):
            # breakpoint is enabled
            self._breakpoint(frame)
```
The start handler, in turn, checks that we’re not in any setup code and enables the local events for all code objects that contain breakpoints:
```python
def start_handler(self, code: CodeType, instruction_offset: int):
    """ Handler for the PY_START event """
    if (self._is_first_call or code.co_filename == __file__
            or code.co_filename == dbg2.__file__):
        # we are in the first call, or in this file, or in dbg2.py
        return
    if self.manager.has_breakpoints_in_code_object_and_update(code):
        # enable events for this code object if we have breakpoints
        self.enable_local_events(code)
```
The local events are enabled by methods we already used in the demo, with the only addition that we store the code objects that currently have enabled local events:
```python
def enable_local_events(self, code: CodeType):
    """ Enable line events for a specific code object if needed """
    if code in self.code_objects_with_local_events:
        return
    mon.set_local_events(self.tool_id, code, E.LINE)
    self.code_objects_with_local_events.add(code)

def disable_local_events(self, code: CodeType):
    """ Disable all local events for a specific code object if needed """
    if code not in self.code_objects_with_local_events:
        return
    mon.set_local_events(self.tool_id, code, 0)
    self.code_objects_with_local_events.discard(code)
```
But what happens when we add a breakpoint in a method that is currently executed? We handle this situation by post-processing all code ids where a breakpoint is added or removed while in the debugging shell:
```python
def _post_process(self, modified_code_ids: Set[CodeId]):
    assert self._single_step is None, "stepping not yet implemented"
    for code_id in modified_code_ids:
        info = self.manager.get_code_info(code_id)
        if info is None or info.code is None:
            continue
        if info.breakpoints:
            # enable local events if we have breakpoints
            self.enable_local_events(info.code)
        else:
            # disable local events if we have no breakpoints
            self.disable_local_events(info.code)
```
This gives us a basic debugger that we can already use:
```
➜ python3.12 -m dbg3 test.py
Tiny debugger https://github.com/parttimenerd/python-dbg/
>  1 def callee(i):
   2     i = i + 1
   3     return i + 1
   4
start at test.py:1 (<module>)
>>> break_at_line("test.py", "callee", 2, "i == 20")
>>> cont()
   1 def callee(i):
>  2 * i == 20   i = i + 1
   3     return i + 1
   4
   5
breakpoint at test.py:2 (callee)
>>> i
20
>>> exit()
```
Now we only have to implement single-stepping to make it a proper debugger:
Single-Stepping
Sometimes, things turn out to be easier than expected: we can reuse most of the code of the implementation already described in part 2 of the article series. Therefore, in the following I don’t focus on the algorithmic part but rather on the differences.
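As a reminder from part 2 (the real definitions live in dbg2.py; the following is just an assumed sketch of their shape), the stepping state referenced as self._single_step consists of a mode and the frame the user is stepping in:

```python
import enum
import sys
import types
from dataclasses import dataclass
from typing import Optional


class StepMode(enum.Enum):
    # the three stepping modes used by _should_single_step
    over = enum.auto()  # step to the next line in the current frame
    into = enum.auto()  # step to the next line, following calls into callees
    out = enum.auto()   # run until the current frame returns


@dataclass
class SingleStep:
    mode: Optional[StepMode]  # reset to None once the step has completed
    frame: types.FrameType    # the frame the user is stepping in


# example: state for stepping over in the current frame
step = SingleStep(mode=StepMode.over, frame=sys._getframe())
```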
We can copy the _should_single_step method:
```python
def _should_single_step(self, frame: types.FrameType,
                        event: Union[Literal['return'], Literal['line']]) \
        -> bool:
    if not self._single_step:
        return False
    if self._single_step.mode == dbg2.StepMode.over:
        # ignore frames other than the one we are stepping in
        # when we're stepping over
        return frame == self._single_step.frame
    if self._single_step.mode == dbg2.StepMode.into:
        # we are always stepping if we're stepping into
        return True
    if self._single_step.mode == dbg2.StepMode.out and event == 'return':
        # we are stepping if we're stepping out and we have a return event
        return frame == self._single_step.frame
    return False
```
Our current debugger implementation does not yet handle return events, but we need them to implement stepping out properly, so we register a return (PY_RETURN) handler and enable the return event locally whenever we enable local events. In the return handler, we open the shell for the calling code line as before and disable local events in the current method if we don’t need them:
```python
def return_handler(self, code: CodeType, instruction_offset: int,
                   retval: object):
    frame = sys._getframe(1)
    if self._should_single_step(frame, 'return'):
        if frame.f_back:
            self._single_step.frame = frame.f_back
            self._breakpoint(frame.f_back, reason="step")
        return
    if not self.manager.has_breakpoints_in_code_object_and_update(code):
        # disable local events if we have no breakpoints
        # we need this because step-into might have enabled local events
        self.disable_local_events(code)
```
We also have to modify our line handler, as it has to handle single stepping, too:
```python
def line_handler(self, code: CodeType, line_number: int):
    """ Handler for the LINE event """
    frame = sys._getframe(1)
    if self._is_first_call:
        # ...
    if (br := self.manager.get_breakpoint(code, line_number)) is not None:
        # we have a breakpoint
        # ...
    if self._should_single_step(frame, 'line'):
        # we are in single step mode
        if self._single_step.mode == dbg2.StepMode.out:
            return
        self._single_step.mode = None
        self._breakpoint(frame, reason="step")
```
Our start handler takes care of enabling local events when the current method has a breakpoint or when we’re currently single-stepping in(to) it:
```python
def start_handler(self, code: CodeType, instruction_offset: int):
    """ Handler for the PY_START event """
    if (self._is_first_call or code.co_filename == __file__
            or code.co_filename == dbg2.__file__):
        # ...
    if self.manager.has_breakpoints_in_code_object_and_update(code) or \
            (self._single_step
             and (self._single_step.frame.f_code == code
                  or self._single_step.mode == dbg2.StepMode.into)):
        # enable events for this code object if we have breakpoints
        self.enable_local_events(code)
```
When we’re currently single-stepping inside a method, we have to ensure that we enable the local events for the current method. So that’s what we’re doing in the _post_process method:
```python
def _post_process(self, modified_code_ids: Set[CodeId]):
    # ...
    if self._single_step:
        self.enable_local_events(self._single_step.frame.f_code)
```
This gives us the final debugger in dbg3.py, which supports (conditional) breakpoints and stepping into, over, and out of methods:
```
Tiny debugger https://github.com/parttimenerd/python-dbg/
>  1 def callee(i):
   2     i = i + 1
   3     return i + 1
   4
start at test.py:1 (<module>)
>>> break_at_line("test.py", "callee", 2, "i == 20")
>>> step()
   2 * i == 20   i = i + 1
   3     return i + 1
   4
   5
>  6 def caller(i):
   7     j = i * 2
   8     j = callee(j)
   9     return j + 1
step at test.py:6 (<module>)
>>> cont()
   1 def callee(i):
>  2 * i == 20   i = i + 1
   3     return i + 1
   4
   5
breakpoint at test.py:2 (callee)
>>> step_out()
   4
   5
   6 def caller(i):
   7     j = i * 2
>  8     j = callee(j)
   9     return j + 1
  10
  11
step at test.py:8 (caller)
>>> step_out()
   8     j = callee(j)
   9     return j + 1
  10
  11
> 12 caller(10)
  13 caller(20)
step at test.py:12 (<module>)
>>> step()
   9     return j + 1
  10
  11
  12 caller(10)
> 13 caller(20)
step at test.py:13 (<module>)
>>> step_into()
   3     return i + 1
   4
   5
   6 def caller(i):
>  7     j = i * 2
   8     j = callee(j)
   9     return j + 1
  10
step at test.py:7 (caller)
>>> exit()
```
Conclusion
Altogether, the new API drastically reduces the number of callback invocations and thereby improves debugger performance, running the program far faster between breakpoints, and it lets us control the emitted events at a granular level. This API is far superior to the rather crude sys.settrace. I’m happy that the API landed in Python 3.12 and hope that debugger and profiler vendors will adopt it. The debugger that results from this blog post is a usable demo that will hopefully inspire you and others to create their own. But of course, there are many ways to optimize its implementation further.
Thanks for reading the (possibly) last installment in this series. I’m happy to give a presentation on this topic if you know a user group or conference…