How to log in Python
Learn to log in Python effectively. This guide covers methods, tips, real-world applications, and how to debug common errors.

Logs provide crucial insight into your Python application's behavior, performance, and errors. They are an essential tool you can use to debug and monitor your code.
In this article, you'll explore effective techniques and practical tips. You'll also find real-world applications and debugging advice to help you master your application's output and troubleshoot issues.
Using print() for basic logging
def calculate_sum(a, b):
    print(f"Calculating sum of {a} and {b}")
    result = a + b
    print(f"Result: {result}")
    return result

calculate_sum(5, 3)
--OUTPUT--
Calculating sum of 5 and 3
Result: 8
The print() function is often the first tool developers reach for to get visibility into their code. In the calculate_sum function, the print() statements serve as basic log entries. They announce the function's start and reveal the values of variables at specific points, which is a quick way to trace the execution path and debug simple logic.
While useful for quick checks, this approach isn't scalable. It mixes debugging messages with actual program output and lacks features like log levels or easy redirection to a file. For more complex applications, you'll want a more robust solution beyond basic console logging in Python.
Basic logging techniques
When print() falls short, Python's built-in logging module offers a more robust framework for controlling your application's output.
Using the built-in logging module
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
logger.info("Application starting")
logger.warning("This is a warning message")
logger.error("This is an error message")
--OUTPUT--
INFO:root:Application starting
WARNING:root:This is a warning message
ERROR:root:This is an error message
The logging module offers a flexible framework for emitting messages. You configure it once using logging.basicConfig(), setting a minimum severity level like level=logging.INFO. This ensures only messages of that level or higher are displayed. You can then log messages using different methods based on their importance.
- info() tracks normal application behavior.
- warning() highlights potential issues that don't break the code.
- error() signals a problem that prevented an operation from succeeding.
Configuring log levels with setLevel()
import logging
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger()
logger.debug("This debug message won't be displayed")
logger.info("This info message won't be displayed")
logger.warning("This warning message will be displayed")
logger.error("This error message will be displayed")
--OUTPUT--
WARNING:root:This warning message will be displayed
ERROR:root:This error message will be displayed
By setting level=logging.WARNING, you're telling the logger to only show messages with a severity of WARNING or higher. This is a powerful way to filter your output. As a result, the debug() and info() calls are ignored because they fall below this threshold, as seen in the example's output.
This filtering is based on a hierarchy of levels, from lowest to highest severity:
DEBUG, INFO, WARNING, ERROR, CRITICAL
Setting a level automatically includes all levels above it, giving you precise control over log verbosity.
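Under the hood, each level name is just an integer constant, which is why the threshold comparison works. A quick sketch to inspect them:

```python
import logging

# A logger emits a record only when the record's level number is
# greater than or equal to the logger's threshold.
for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
# Prints: DEBUG 10, INFO 20, WARNING 30, ERROR 40, CRITICAL 50
```

Because the levels are plain integers, you can even define custom levels between them with logging.addLevelName(), though the five built-in levels cover most needs.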
Formatting log messages with Formatter
import logging
format_str = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(format=format_str, level=logging.INFO)
logger = logging.getLogger("my_app")
logger.info("Processing data")
logger.error("Failed to connect to database")
--OUTPUT--
2023-07-23 15:42:10,123 - my_app - INFO - Processing data
2023-07-23 15:42:10,124 - my_app - ERROR - Failed to connect to database
You can customize your log output for better readability by passing a template string to the format parameter in logging.basicConfig(). This string uses special placeholders to structure each message, adding valuable context to every entry.
- %(asctime)s inserts the time the log was created.
- %(name)s adds the logger's name, which you set with getLogger().
- %(levelname)s shows the message's severity, like INFO or ERROR.
- %(message)s is the actual content you're logging.
Advanced logging techniques
With the fundamentals covered, you can implement more sophisticated logging using FileHandler for files, creating multiple loggers, and applying advanced dictConfig setups.
Using FileHandler to log to files
import logging
logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler("app.log")
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
logger.debug("Debug information saved to file")
--OUTPUT--
(No console output, but app.log file contains:)
2023-07-23 15:45:22,345 - DEBUG - Debug information saved to file
Instead of printing to the console, you can direct logs to a file using FileHandler. It’s a great way to keep a persistent record of your application's activity for later analysis. This setup also separates your logs from the standard output, which keeps your console clean.
- First, create a FileHandler instance, telling it which file to write to, like "app.log".
- Then, attach this handler to your logger using the addHandler() method.
Now, all messages sent through that logger will be written directly to the specified file.
Creating multiple loggers for different components
import logging
# Configure root logger
logging.basicConfig(level=logging.WARNING)
# Create specific loggers
db_logger = logging.getLogger("database")
db_logger.setLevel(logging.DEBUG)
api_logger = logging.getLogger("api")
api_logger.setLevel(logging.INFO)
db_logger.debug("Connected to database")
api_logger.info("API request received")
--OUTPUT--
DEBUG:database:Connected to database
INFO:api:API request received
In a larger application, you can isolate logs from different components, like your database and API. You create separate loggers using logging.getLogger() with a unique name for each module. This gives you granular control over log verbosity across your app.
- Each logger can have its own severity threshold set with setLevel().
- In the example, the db_logger is set to DEBUG, while the api_logger is set to INFO.
This setup lets you tune the output for different modules independently. Both messages appear in the output because each logger applies its own threshold before handing the record to the root handler; propagation does not re-check the root's WARNING level. If you left a logger's level unset, it would inherit the root's WARNING threshold instead.
Using dictConfig for advanced configuration
import logging
import logging.config
config = {
    'version': 1,
    'formatters': {'standard': {'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'}},
    'handlers': {'default': {'level': 'INFO', 'formatter': 'standard', 'class': 'logging.StreamHandler'}},
    'loggers': {'': {'handlers': ['default'], 'level': 'INFO', 'propagate': True}}
}
logging.config.dictConfig(config)
logger = logging.getLogger("main")
logger.info("Application configured with dictConfig")
--OUTPUT--
2023-07-23 15:50:05,678 [INFO] main: Application configured with dictConfig
For more complex scenarios, logging.config.dictConfig() allows you to define your entire logging setup using a dictionary. This is a clean and scalable way to manage configurations, especially if you load them from an external file like JSON or YAML. This single configuration object lets you orchestrate multiple components at once.
- formatters define the layout of your log messages.
- handlers determine where logs are sent, such as the console with StreamHandler.
- loggers tie everything together, applying specific handlers and levels to different parts of your application.
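Because the configuration is plain data, it can live outside your code entirely. Here's a minimal sketch of that idea (the file name logging_config.json is just an assumption for illustration) that saves the same dictionary as JSON and loads it back with dictConfig():

```python
import json
import logging
import logging.config

# The same configuration dictionary as above, stored as JSON.
config = {
    'version': 1,
    'formatters': {'standard': {'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'}},
    'handlers': {'default': {'level': 'INFO', 'formatter': 'standard', 'class': 'logging.StreamHandler'}},
    'loggers': {'': {'handlers': ['default'], 'level': 'INFO', 'propagate': True}}
}
with open("logging_config.json", "w") as f:
    json.dump(config, f)

# Later (or in a different module), load and apply the saved config.
with open("logging_config.json") as f:
    logging.config.dictConfig(json.load(f))

logging.getLogger("main").info("Configured from JSON file")
```

Keeping the configuration in a file means you can change log levels or destinations per environment without touching application code.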
Move faster with Replit
Replit is an AI-powered development platform where all Python dependencies come pre-installed, so you can skip setup and start coding instantly. Describe what you want to build, and Agent 4 handles everything from writing the code to connecting databases and deploying your app.
Instead of piecing together techniques, describe the app you want to build and the Agent will take it from idea to working product:
- A log analysis tool that parses an app.log file and generates a summary of all ERROR messages.
- A real-time dashboard that visualizes formatted logs from separate application components like a database and an API.
- A log conversion utility that transforms unstructured print() outputs into structured JSON for easier analysis.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
Even with a powerful logging module, you can run into a few common pitfalls that might leave you scratching your head.
Forgetting to configure the logger before using logger.info()
A frequent oversight is calling a function like logger.info() before configuring the root logger. When this happens, the message is silently dropped because an unconfigured root logger only handles messages at WARNING severity or above. Always remember to set up a basic configuration first.
Misunderstanding logger hierarchy with getLogger()
The logger hierarchy can also be a source of confusion. When you use getLogger() with dot-separated names (e.g., app.api), you create parent-child relationships.
- By default, child loggers pass their messages up to their parent loggers and inherit their settings.
- This means if a parent's level is set to WARNING and a child has no level of its own, the child inherits that threshold and silently discards its INFO messages, leading to logs that seem to disappear.
Duplicate log messages from multiple handlers
Seeing duplicate log messages is another classic issue. This typically occurs when you add handlers to both a child logger and one of its ancestors in the hierarchy.
- Because messages propagate up the chain by default, each handler outputs the same message, creating noisy and redundant logs.
- You can prevent this by setting logger.propagate = False on the child logger, which tells it not to forward messages to its parent's handlers.
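The propagate flag can be sketched like this: the child logger below has its own handler, and disabling propagation keeps the root's handler from printing the same record a second time.

```python
import logging

# The root logger gets a handler via basicConfig.
logging.basicConfig(level=logging.INFO)

# A child logger with its own handler. Without propagate = False,
# each record would reach both handlers and print twice.
child = logging.getLogger("app.worker")
child.addHandler(logging.StreamHandler())
child.propagate = False

child.info("Printed once, by the child's handler only")
```

Note that the child still inherits its effective level (INFO) from the root; propagate only controls where the record is delivered, not whether it passes the level check.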
Forgetting to configure the logger before using logger.info()
A frequent oversight is calling a function like logger.info() before any configuration. By default, the root logger is set to the WARNING level, so any messages with lower severity, like INFO, are simply discarded. The following code demonstrates this behavior.
import logging
logger = logging.getLogger("my_app")
logger.info("This message won't appear")
logger.error("This error might show depending on default level")
Because the logger isn't configured, it only pays attention to WARNING messages and above. The info() call is therefore ignored, but the error() message gets through. Check the corrected implementation in the code that follows.
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("my_app")
logger.info("This message will now appear")
logger.error("This error message will appear too")
The fix is simple: call logging.basicConfig(level=logging.INFO) before any logging functions. This configures the root logger to show all messages of INFO severity and higher. As a result, both the info() and error() messages now appear as intended. It’s a good practice to establish this baseline configuration at the very start of your application to ensure you don't miss any important logs during development.
Misunderstanding logger hierarchy with getLogger()
The getLogger() function creates a hierarchy where child loggers inherit settings from their parents. This behavior can be confusing when log messages seem to disappear unexpectedly. The following code shows what happens when a child logger's level isn't explicitly configured.
import logging
# Set root logger to WARNING
logging.basicConfig(level=logging.WARNING)
# Create child logger but don't set level
logger = logging.getLogger("my_module")
logger.info("This won't be displayed")
logger.warning("This will be displayed")
Because the my_module logger's level isn't set, it inherits the root's WARNING threshold. This is why its info() message is ignored. The following code demonstrates how to fix this behavior.
import logging
# Set root logger to WARNING
logging.basicConfig(level=logging.WARNING)
# Create child logger and explicitly set level
logger = logging.getLogger("my_module")
logger.setLevel(logging.INFO)
logger.info("This will now be displayed")
logger.warning("This will be displayed too")
The fix is to give the child logger its own threshold by calling logger.setLevel(logging.INFO). This overrides the inherited level from the parent, allowing the my_module logger to handle messages at INFO severity or higher. You gain granular control, which is crucial in modular applications where you need different components to report logs with varying levels of detail without being silenced by a global setting.
Duplicate log messages from multiple handlers
Seeing the same log message twice is a common sign you've attached multiple handlers to the same logger or its ancestors. Because messages propagate up the hierarchy, each handler outputs the same message, creating redundant output. The following code demonstrates this common issue.
import logging
# Set up root logger with a handler
logging.basicConfig(level=logging.INFO)
# Add another handler to the root logger
handler = logging.StreamHandler()
logging.getLogger().addHandler(handler)
logging.info("Why is this printed twice?")
The logging.basicConfig() call automatically adds a default handler. The code then adds a second StreamHandler to the same root logger, causing every message to be processed twice. The following code shows how to fix this.
import logging
# Either use basicConfig alone
logging.basicConfig(level=logging.INFO)
logging.info("This appears once")
# Or configure a logger with exactly one handler
logger = logging.getLogger("my_logger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
logger.addHandler(handler)
logger.info("This also appears once")
The fix is to ensure a logger has only one handler. You can either use logging.basicConfig() by itself, as it sets up a single default handler, or create a custom logger and attach just one handler using addHandler(). This problem often arises when you add handlers at multiple levels of the logger hierarchy, so be mindful of your configuration to avoid messages being processed more than once.
Real-world applications
Beyond fixing common errors, you can apply these techniques to handle exceptions in try-except blocks and manage log files with RotatingFileHandler. These logging strategies are also essential when building applications through vibe coding.
Logging exceptions with try-except blocks
By using the logging module within a try-except block, you can capture not just the error message but also the full exception details to help diagnose the root cause.
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
try:
    result = 10 / 0  # This will cause a ZeroDivisionError
except Exception as e:
    logger.error(f"An error occurred: {e}", exc_info=True)
When an operation in a try block fails, the except block lets you handle the error gracefully. By using logger.error() with exc_info=True, you’re telling the logging module to do more than just record your custom message.
- It automatically captures the full exception traceback.
- This provides the complete context of the failure, including the file and line number.
This approach is far more powerful than just logging the error variable, as it gives you the exact information needed to debug the issue efficiently. Learn more about using try and except in Python for comprehensive error handling.
Implementing a rotating log file with RotatingFileHandler
The RotatingFileHandler prevents log files from growing indefinitely by automatically creating backups once the file reaches a specified size.
import logging
from logging.handlers import RotatingFileHandler
logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
# Set up a rotating file handler (5 KB max size, keep 3 backup files)
handler = RotatingFileHandler("app.log", maxBytes=5*1024, backupCount=3)
handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
logger.info("This message is logged to a rotating file")
The RotatingFileHandler offers a practical way to manage log files so they don't consume too much disk space. It works by monitoring the log file and automatically cycling to a new one when a size limit is reached. This setup is ideal for long-running applications where logs could otherwise grow uncontrollably. Understanding how to read a text file in Python is also useful when analyzing these log files.
- The maxBytes parameter defines the size limit for the current log file.
- Once that limit is hit, the handler renames the file and starts fresh.
- backupCount specifies how many of these older, renamed log files to keep.
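To see rotation in action, you can shrink maxBytes to a deliberately tiny value (an assumption for demonstration only; real applications use megabytes) and write enough messages to force several rollovers:

```python
import logging
import os
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("rotation_demo")
logger.setLevel(logging.DEBUG)

# A tiny 200-byte limit forces a rollover every dozen messages or so.
handler = RotatingFileHandler("demo.log", maxBytes=200, backupCount=2)
logger.addHandler(handler)

for i in range(50):
    logger.info("message number %d", i)

# demo.log now holds the newest entries; demo.log.1 and demo.log.2
# hold progressively older ones, and anything beyond that is deleted.
print(sorted(f for f in os.listdir(".") if f.startswith("demo.log")))
```

Because backupCount is 2, the handler never keeps more than three files total, no matter how long the loop runs.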
Get started with Replit
Turn what you've learned into a real tool. Describe your goal to Replit Agent, like “a utility that generates a daily log summary” or “a script that sends alerts for critical errors.”
The Agent writes the code, tests for errors, and deploys your app. Start building with Replit.
Describe what you want to build, and Replit Agent writes the code, handles the infrastructure, and ships it live. Go from idea to real product, all in your browser.