
10 Python Debugging Tricks That Saved Me Hours
The 3 AM Debugging Session That Changed Everything
It was 3 AM on a Friday. I’d been hunting the same bug for six hours straight. My code was littered with print statements like a teenager’s diary: print("here"), print("got here"), print("WHY ISN'T THIS WORKING"). The console output looked like chaos, and I was no closer to finding the issue than when I started.
That night, I discovered there was a better way. A much better way.
If you’re still debugging Python with print statements in 2026, you’re working ten times harder than you need to. After a decade of writing production Python code and mentoring hundreds of developers, I’ve compiled the debugging techniques that consistently save the most time and frustration.
These aren’t theoretical tricks from documentation. These are battle-tested methods I use every single day to squash bugs faster and maintain my sanity.
Let’s dive in.
1. Master the Built-in Breakpoint Function
The built-in breakpoint() function provides a simple way to pause code execution and start debugging instantly. It launches the default debugger so you can inspect variables, trace logic, and fix issues faster.
The Old Way
import pdb
pdb.set_trace() # Remember this every time
The New Way
def calculate_discount(price, discount_rate):
    breakpoint()  # Just one word, that's it
    final_price = price * (1 - discount_rate)
    return final_price
When your code hits breakpoint(), execution pauses and drops you into an interactive debugger. You can inspect variables, execute code, and step through your program line by line.
Why it’s better: breakpoint() (introduced in Python 3.7) is cleaner, easier to remember, and respects the PYTHONBREAKPOINT environment variable, so you can switch debuggers without changing code.
Pro tip: Set PYTHONBREAKPOINT=0 in production to disable all breakpoints instantly without removing them from code.
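A minimal sketch of how that kill switch behaves: the default breakpoint hook checks PYTHONBREAKPOINT on every call, so setting it to 0 turns breakpoint() into a no-op, even from inside the running process.

```python
import os

# Setting PYTHONBREAKPOINT=0 makes the default breakpoint hook a no-op.
# The default sys.breakpointhook() reads this variable on every call,
# so it takes effect even when set mid-process like this.
os.environ["PYTHONBREAKPOINT"] = "0"

def calculate_discount(price, discount_rate):
    breakpoint()  # Skipped entirely: no debugger starts
    return price * (1 - discount_rate)

print(calculate_discount(100, 0.2))  # → 80.0
```

Run the same script without the environment variable and the breakpoint() call drops you into pdb as usual.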
2. Use Rich Traceback for Beautiful Error Messages
Standard Python tracebacks are functional but ugly. The rich library transforms them into readable, color-coded masterpieces.
Installation and Setup
# Install once
pip install rich
# Add to your script
from rich.traceback import install
install(show_locals=True)
Now when an error occurs, you get:
- Syntax-highlighted code context
- Local variables at each stack frame
- Clear visual separation between frames
- Better formatting for nested errors
Real-world impact: I’ve solved bugs 3x faster just by seeing variable values in the traceback instead of adding print statements to inspect them.
3. Leverage the Logging Module Properly
Print statements disappear. Logs persist. But most developers either don’t use logging or use it poorly.
The Right Way to Log
import logging
# Configure once at the start
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('app.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

def process_user_data(user_id):
    logger.debug(f"Processing user {user_id}")
    try:
        data = fetch_data(user_id)
        logger.info(f"Fetched {len(data)} records for user {user_id}")
    except Exception:
        logger.error(f"Failed to fetch data for {user_id}", exc_info=True)
        raise
Key advantages:
- Different log levels (DEBUG, INFO, WARNING, ERROR)
- Automatic timestamps
- Can write to files and console simultaneously
- Easy to disable debug logs in production
Game-changer: Use exc_info=True in error logs to automatically include the full traceback.
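As a quick, self-contained sanity check (using an in-memory buffer instead of a log file), you can see that exc_info=True attaches the full traceback to the log record:

```python
import io
import logging

# Log to an in-memory buffer so the result is easy to inspect
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)

try:
    {}["missing"]
except KeyError:
    # exc_info=True appends the current exception's traceback
    logger.error("Lookup failed", exc_info=True)

output = buffer.getvalue()
print("KeyError" in output)  # → True
```

The same one-argument change works with any handler, so your file logs and monitoring dashboards get the traceback for free.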
4. Inspect Objects with dir() and vars()
When working with unfamiliar objects or third-party libraries, these built-in functions are invaluable.
Quick Object Inspection
import requests

# See all attributes and methods
response = requests.get('https://api.example.com')
print(dir(response))  # Lists everything available
# See instance variables and their values
class User:
    def __init__(self, name, email):
        self.name = name
        self.email = email

user = User("John", "john@example.com")
print(vars(user))  # {'name': 'John', 'email': 'john@example.com'}
Debugging scenario: You receive an object from an API and don’t know what attributes it has. Instead of reading documentation, dir() shows you instantly.
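A small illustration of combining the two (Config is a made-up class standing in for an unfamiliar object): vars() gives you the instance's data attributes, while filtering dir() reveals the public API surface including methods.

```python
class Config:
    """Hypothetical class standing in for an unfamiliar object."""
    def __init__(self):
        self.debug = True
        self.retries = 3

    def reload(self):
        pass

cfg = Config()

# vars() returns the instance __dict__: just the data attributes
print(vars(cfg))  # → {'debug': True, 'retries': 3}

# dir() lists everything (sorted), including methods; filter out dunders
public = [name for name in dir(cfg) if not name.startswith("_")]
print(public)  # → ['debug', 'reload', 'retries']
```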
5. Use Assert Statements for Sanity Checks
Assertions are like automated reality checks for your assumptions. They catch bugs early before they cascade into bigger problems.
Strategic Assert Placement
def calculate_average(numbers):
    assert len(numbers) > 0, "Cannot calculate average of empty list"
    assert all(isinstance(n, (int, float)) for n in numbers), "All items must be numbers"
    return sum(numbers) / len(numbers)

def process_file(filepath):
    assert filepath.endswith('.csv'), f"Expected CSV file, got {filepath}"
    # Continue processing
When to use assertions:
- Validate function inputs
- Check intermediate calculations
- Verify assumptions about data structure
Important: Python optimizes away assertions with the -O flag, so never use them for critical validation. Use them for developer sanity checks only.
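Because -O strips assertions, checks that must hold in production should raise explicit exceptions instead. A sketch of the earlier average function rewritten to survive optimized mode:

```python
def calculate_average(numbers):
    # An explicit raise survives `python -O`, unlike an assert
    if not numbers:
        raise ValueError("Cannot calculate average of empty list")
    return sum(numbers) / len(numbers)

print(calculate_average([2, 4, 6]))  # → 4.0

try:
    calculate_average([])
except ValueError as e:
    print(e)  # → Cannot calculate average of empty list
```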
6. The Traceback Module for Post-Mortem Analysis
When exceptions occur, the traceback module lets you extract and format error information programmatically.
Capture and Log Exceptions Properly
import traceback
import sys
import logging

logger = logging.getLogger(__name__)

def risky_operation():
    try:
        result = dangerous_function()
    except Exception as e:
        # Get full traceback as string
        error_trace = ''.join(traceback.format_exception(*sys.exc_info()))
        # Log it, send to monitoring service, etc.
        logger.error(f"Operation failed:\n{error_trace}")
        # Or extract specific frames
        tb_lines = traceback.format_tb(e.__traceback__)
        print(f"Error occurred at: {tb_lines[-1]}")
Production use case: Send detailed error traces to monitoring tools like Sentry while showing users friendly error messages.
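Inside an except block there is also a shorter spelling: traceback.format_exc() returns the current exception's formatted traceback as a string directly. A minimal runnable sketch:

```python
import traceback

def risky_operation():
    return 1 / 0

try:
    risky_operation()
except ZeroDivisionError:
    # format_exc() is shorthand for formatting sys.exc_info()
    trace = traceback.format_exc()

print("ZeroDivisionError" in trace)       # → True
print(trace.splitlines()[0])              # → Traceback (most recent call last):
```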
7. IPython for Interactive Debugging
IPython provides a superior interactive Python experience with powerful debugging features.
Installation and Usage
# Install
pip install ipython
# In your code
from IPython import embed
def complex_calculation(data):
    intermediate_result = step_one(data)
    embed()  # Drops into IPython shell with full context
    final_result = step_two(intermediate_result)
    return final_result
IPython advantages:
- Tab completion for everything
- Magic commands like %timeit for performance testing
- Easy access to command history
- Better formatted output
My workflow: Use embed() to pause execution and explore the current state interactively. Test hypotheses in real-time before modifying code.
8. The PDB Post-Mortem Mode
When a script crashes, don’t restart it. Use post-mortem debugging to examine the exact state when it failed.
Debug After the Crash
import pdb
def main():
    try:
        problematic_function()
    except Exception:
        pdb.post_mortem()  # Start debugger at exception point
# Or run entire scripts in post-mortem mode
# python -m pdb -c continue script.py
When the debugger starts, you can:
- Type w to see the stack trace
- Type u and d to move up and down the stack
- Type p variable_name to print any variable
- Type l to see surrounding code
Time saved: Instead of adding logging and rerunning, you examine the crash scene directly.
9. Use F-Strings with the = Specifier for Quick Debugging
Python 3.8 introduced the = specifier in f-strings, which is perfect for quick debug output.
Self-Documenting Debug Prints
# Old way
user_id = 12345
print("user_id:", user_id)
# New way
print(f"{user_id=}") # Output: user_id=12345
# Multiple variables
username = "john_doe"
is_active = True
print(f"{username=}, {is_active=}")
# Output: username='john_doe', is_active=True
# With expressions
numbers = [1, 2, 3, 4, 5]
print(f"{sum(numbers)=}, {len(numbers)=}")
# Output: sum(numbers)=15, len(numbers)=5
Why I love this: When you need quick visibility without setting up logging, this gives you variable names and values in one line.
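The = specifier also composes with ordinary format specs, which is handy for floats you don't want printed at full precision:

```python
ratio = 2 / 3

# Plain `=` shows the full repr; a format spec after `=` controls precision
print(f"{ratio=}")       # → ratio=0.6666666666666666
print(f"{ratio=:.3f}")   # → ratio=0.667
print(f"{ratio=:.1%}")   # → ratio=66.7%
```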
10. The Icecream Library for Better Debug Output
If you must use print-style debugging, use the icecream library. It’s print statements on steroids.
Installation and Usage
# Install
pip install icecream
# Use
from icecream import ic
def calculate_total(items):
    ic(items)  # Shows: ic| items: [{'price': 10}, {'price': 20}]
    total = sum(item['price'] for item in items)
    ic(total)  # Shows: ic| total: 30
    return total
# See execution flow
ic.configureOutput(includeContext=True)
ic(expensive_operation())
# Shows filename, line number, function name, and value
Icecream advantages:
- Automatically shows variable names
- Includes context (file, line, function)
- Pretty-prints complex data structures
- Can be disabled globally in production
Bonus: Debugging Performance Issues
Not all bugs are logic errors. Sometimes your code is slow, and you need to find the bottleneck.
Quick Performance Profiling
import cProfile
import pstats
# Profile a function
cProfile.run('slow_function()', 'output.prof')
# Analyze results
stats = pstats.Stats('output.prof')
stats.sort_stats('cumulative')
stats.print_stats(10) # Show top 10 slowest functions
# Or use line_profiler for line-by-line analysis
# pip install line_profiler
# kernprof -l -v script.py
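Since Python 3.8, cProfile.Profile also works as a context manager, which avoids run()'s string-based interface and lets you profile any inline block. A minimal sketch that sends the report to a string buffer instead of stdout:

```python
import cProfile
import io
import pstats

# Profile a block of code directly, no string-eval needed
with cProfile.Profile() as profiler:
    total = sum(i * i for i in range(100_000))

# Direct the report into a buffer instead of printing to stdout
buffer = io.StringIO()
stats = pstats.Stats(profiler, stream=buffer)
stats.sort_stats("cumulative").print_stats(5)

print(total)                                   # → 333328333350000
print("function calls" in buffer.getvalue())   # → True
```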
Putting It All Together: My Debugging Workflow
Here’s how I approach debugging in 2026:
- First pass: Use breakpoint() or ic() for quick inspection
- Complex issues: Set up proper logging with different levels
- Crashes: Use pdb.post_mortem() to examine the failure state
- Unknown objects: Use dir() and vars() to explore
- Production errors: Rich tracebacks + the traceback module for detailed logs
- Performance: Profile with cProfile before optimizing
Conclusion: Debug Smarter, Not Harder
Print statements have their place for quick checks, but they’re the least efficient debugging method available. By mastering these 10 techniques, you’ll:
- Find bugs in minutes instead of hours
- Write cleaner, more maintainable debug code
- Understand your program’s behavior deeply
- Catch issues before they reach production
Start with breakpoint() and proper logging this week. Add the others to your toolkit gradually. Your future self will thank you when you’re hunting down a bug at 3 AM and actually finding it quickly.
What debugging techniques do you use? Share your favorites in the comments below.
Posts you may also like:
Best UV Package Manager: Why Python Devs Are Ditching pip in 2026
Python Async/Await: Common Mistakes That Kill Performance (2026)
