How to calculate execution time in Python
Learn how to calculate Python execution time. Discover methods, tips, real-world applications, and how to debug common errors.

To optimize your Python code, you must measure its execution time. This process helps identify bottlenecks and improve efficiency, which provides key insights into how your scripts perform.
In this article, you'll explore several techniques to measure runtime. You'll find practical tips, see real-world applications, and get advice to debug performance issues, so you can choose the best method.
Using time.time() to measure execution time
import time
start_time = time.time()
# Code to time
for i in range(1000000):
    pass
end_time = time.time()
print(f"Execution time: {end_time - start_time:.6f} seconds")
--OUTPUT--
Execution time: 0.031257 seconds
The time.time() function provides a straightforward method for measuring performance by capturing the system's clock time. The approach involves two main steps:
- You record the time just before the code block executes by calling time.time().
- You call it again immediately after the block finishes.
By subtracting the start time from the end time, you get the elapsed "wall-clock time." This value represents the total real-world duration, including any time the program was idle or waiting for the operating system, not just the time spent on CPU execution.
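To see the wall-clock effect concretely, the following sketch times a deliberate time.sleep() call; the idle wait is counted in full even though almost no CPU work happens (the 0.2-second pause is an arbitrary choice for illustration):

```python
import time

# time.time() measures real-world elapsed time, so a sleeping
# program still "takes" the full sleep duration.
start_time = time.time()
time.sleep(0.2)  # idle waiting, not CPU work
elapsed = time.time() - start_time

print(f"Wall-clock time: {elapsed:.3f} seconds")  # roughly 0.2 seconds
```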
Basic time measurement techniques
Beyond the simple wall-clock time from time.time(), you can use more precise and convenient methods for more reliable performance measurements.
Using time.perf_counter() for higher precision
import time
start = time.perf_counter()
# Code to time
for i in range(1000000):
    pass
end = time.perf_counter()
print(f"Execution time: {end - start:.9f} seconds")
--OUTPUT--
Execution time: 0.031246789 seconds
When you need more reliable performance data, switch to time.perf_counter(). It provides a high-resolution timer that's perfect for benchmarking code. Its main advantage is that it's not affected by system time updates, so your results won't be skewed by external changes.
- This function is designed to measure short intervals accurately.
- The starting point of the counter is undefined, so it's only useful for measuring differences in time.
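A quick sketch illustrates the second point: the raw readings have an undefined reference point, but consecutive readings never decrease, so only their difference is meaningful as a duration:

```python
import time

# The absolute values below are arbitrary; the clock is monotonic,
# so t2 is always greater than or equal to t1.
t1 = time.perf_counter()
t2 = time.perf_counter()

print(f"First reading:  {t1:.6f}")
print(f"Second reading: {t2:.6f}")
print(f"Delta: {t2 - t1:.9f} seconds")
```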
Using the timeit module for short code snippets
import timeit
execution_time = timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
print(f"Execution time: {execution_time:.6f} seconds")
--OUTPUT--
Execution time: 0.654321 seconds
The timeit module is your go-to for accurately timing small pieces of code. It runs a code snippet many times to get a stable average, which minimizes the impact of background processes on your results. This makes it ideal for benchmarking isolated functions or expressions.
- The timeit() function takes the code you want to measure as a string.
- You use the number argument to specify how many times the snippet should run.
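Two standard variations are worth knowing: the setup argument runs initialization code once, outside the timed statement, and timeit.repeat() returns one total per run so you can take the minimum as the most stable figure. A sketch:

```python
import timeit

# setup runs once per timing run and is excluded from the measurement.
total = timeit.timeit("sorted(data)",
                      setup="data = list(range(1000, 0, -1))",
                      number=1000)

# repeat() performs several independent runs; the minimum is usually
# the least noisy estimate.
runs = timeit.repeat("sorted(data)",
                     setup="data = list(range(1000, 0, -1))",
                     number=1000, repeat=5)

print(f"Total for 1000 runs: {total:.6f} seconds")
print(f"Best of 5 repeats:   {min(runs):.6f} seconds")
```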
Creating a timer class with context manager
import time
class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self
    def __exit__(self, *args):
        self.end = time.perf_counter()
        print(f"Execution time: {self.end - self.start:.6f} seconds")
with Timer():
    [i**2 for i in range(100000)]
--OUTPUT--
Execution time: 0.012345 seconds
For a more reusable and elegant solution, you can create a Timer class that works as a context manager. This approach wraps the timing logic within a class, so you don't have to manually call start and end functions every time.
- The __enter__ method automatically starts the timer when you enter the with block.
- When the block finishes, the __exit__ method stops the timer and prints the elapsed time.
This keeps your main code clean and focused on its task, making your timing logic much easier to manage.
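As a lighter-weight alternative, the standard contextlib.contextmanager decorator achieves the same effect without defining __enter__ and __exit__ explicitly; everything before the yield runs on entering the with block, and everything after it runs on exit. A sketch:

```python
import time
from contextlib import contextmanager

@contextmanager
def timer():
    # Runs when the with block is entered.
    start = time.perf_counter()
    yield
    # Runs when the with block exits.
    end = time.perf_counter()
    print(f"Execution time: {end - start:.6f} seconds")

with timer():
    total = sum(i**2 for i in range(100000))
```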
Advanced time measurement techniques
Building on the foundational timing methods, you can adopt more sophisticated approaches for measuring function-level performance, profiling entire scripts, and handling asynchronous code.
Using decorators to measure function execution time
import time
from functools import wraps
def timing_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.6f} seconds")
        return result
    return wrapper

@timing_decorator
def slow_function():
    sum([i**2 for i in range(100000)])

slow_function()
--OUTPUT--
slow_function took 0.015678 seconds
Decorators provide a powerful and reusable way to time your functions. You can wrap any function with the timing_decorator by simply placing @timing_decorator directly above its definition. This automatically adds timing logic without cluttering the function's code.
- The wrapper function inside the decorator captures the start and end times around the original function call.
- Using functools.wraps ensures the decorated function retains its original name and metadata, a best practice that helps with debugging.
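The following sketch makes the functools.wraps point visible: the decorated function still reports its own __name__ rather than "wrapper" (compute is a hypothetical example function, and timing_decorator is repeated so the snippet is self-contained):

```python
import time
from functools import wraps

def timing_decorator(func):
    @wraps(func)  # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.6f} seconds")
        return result
    return wrapper

@timing_decorator
def compute():
    return sum(range(1000))

result = compute()
print(compute.__name__)  # "compute", thanks to functools.wraps
```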
Profiling code execution with cProfile
import cProfile
def function_to_profile():
    total = 0
    for i in range(1000000):
        total += i
    return total
cProfile.run('function_to_profile()')
--OUTPUT--
4 function calls in 0.078 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.078    0.078 <string>:1(<module>)
     1    0.078    0.078    0.078    0.078 <stdin>:1(function_to_profile)
     1    0.000    0.000    0.078    0.078 {built-in method builtins.exec}
     1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
When you need a deeper look into your code's performance, the cProfile module provides a detailed breakdown. It goes beyond simple timing by analyzing every function call, helping you identify exactly which parts of your script are slowing things down.
- The report shows how many times each function was called (ncalls).
- It also distinguishes between the time spent purely within a function (tottime) and the total time including any functions it called (cumtime).
This makes it an excellent tool for pinpointing bottlenecks in complex applications.
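When the default report grows too long, the companion pstats module (also in the standard library) lets you sort and truncate it. A sketch that profiles a hypothetical build_squares function and keeps only the top five entries by cumulative time:

```python
import cProfile
import io
import pstats

def build_squares():
    return [i * i for i in range(200000)]

# Profile explicitly with a Profile object instead of cProfile.run().
profiler = cProfile.Profile()
profiler.enable()
build_squares()
profiler.disable()

# Send the report to a string buffer, sorted by cumulative time,
# limited to the five most expensive entries.
buffer = io.StringIO()
stats = pstats.Stats(profiler, stream=buffer)
stats.sort_stats("cumulative").print_stats(5)
print(buffer.getvalue())
```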
Measuring asynchronous code with asyncio
import asyncio
import time
async def measure_execution_time():
    start = time.perf_counter()
    await asyncio.sleep(0.5)  # Simulating async work
    end = time.perf_counter()
    return end - start
async def main():
    duration = await measure_execution_time()
    print(f"Async operation took {duration:.6f} seconds")
asyncio.run(main())
--OUTPUT--
Async operation took 0.501234 seconds
Timing asynchronous code with asyncio is different because tasks don't block execution while waiting. Instead, you measure the total wall-clock time it takes for an operation to complete, including any idle time.
- You still use time.perf_counter() to capture the start and end times around the asynchronous call.
- The await keyword pauses the function until the operation, like a network request simulated by asyncio.sleep(), is finished.
This approach is perfect for gauging the real-world performance of I/O-bound tasks that spend time waiting.
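Wall-clock timing also reveals the payoff of concurrency. In the sketch below, three simulated 0.2-second I/O waits run together under asyncio.gather(), so the total elapsed time stays close to 0.2 seconds rather than 0.6 (the durations are arbitrary choices for illustration):

```python
import asyncio
import time

async def fake_request():
    await asyncio.sleep(0.2)  # simulated I/O wait

async def main():
    start = time.perf_counter()
    # All three waits overlap instead of running back to back.
    await asyncio.gather(fake_request(), fake_request(), fake_request())
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"Three concurrent 0.2s waits took {elapsed:.3f} seconds")
```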
Move faster with Replit
Replit is an AI-powered development platform that comes with all Python dependencies pre-installed, so you can skip setup and start coding instantly. Instead of piecing together individual techniques, you can use Agent 4 to build complete applications directly from a description.
Agent handles the entire development process, from writing code to managing databases and deploying your app. You can go from an idea to a working product by describing what you want to build. For example, you could create tools that leverage the timing methods you've just learned:
- A performance dashboard that uses timing decorators to automatically track and display the execution speed of key functions in a web application.
- A code profiler utility that visualizes cProfile output to help you quickly identify and fix performance bottlenecks in your scripts.
- An async task benchmark tool that measures and compares the response times of different API endpoints using asyncio and time.perf_counter().
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
When measuring execution time, a few common pitfalls can easily skew your results and lead to inaccurate conclusions. Watch out for these frequent mistakes to ensure your measurements are accurate:
- Forgetting to reset the timer between measurements: It's a common slip-up when running multiple benchmarks in a loop. If you don't capture a new start time for each iteration, the elapsed time accumulates, rendering your comparisons useless.
- Using time.time() for high-precision measurements: This function is sensitive to system time changes, like network time synchronization, which can throw off your results. You're better off using time.perf_counter() for reliable benchmarking because it provides a monotonic clock that only moves forward.
- Including print() statements in timed code: Console output is an I/O operation and is surprisingly slow. Placing print() calls within your timed block introduces significant overhead, which will inflate the measured execution time and mask the true performance of your code.
Forgetting to reset the timer between measurements
When benchmarking multiple operations, it's easy to forget to reset your timer. If you reuse the same start variable, your second measurement will incorrectly include the time taken by the first, making your results cumulative and useless. The code below shows this mistake.
import time
# Bug: Reusing the start time for multiple measurements
start = time.perf_counter()
result1 = sum(range(1000000))
time1 = time.perf_counter() - start
result2 = [i**2 for i in range(10000)]
time2 = time.perf_counter() - start # Incorrectly includes time1
print(f"Operation 1: {time1:.6f} seconds")
print(f"Operation 2: {time2:.6f} seconds") # Wrong! Includes Operation 1 time
The time2 calculation reuses the initial start variable, so its result incorrectly includes the first operation's runtime. This makes the second measurement cumulative and inaccurate. Now, examine the correct way to structure this code.
import time
# Fixed: Reset the timer for each measurement
start = time.perf_counter()
result1 = sum(range(1000000))
time1 = time.perf_counter() - start
start = time.perf_counter() # Reset the timer
result2 = [i**2 for i in range(10000)]
time2 = time.perf_counter() - start # Correctly measures only operation 2
print(f"Operation 1: {time1:.6f} seconds")
print(f"Operation 2: {time2:.6f} seconds")
In the corrected version, the start variable is reassigned with a new call to time.perf_counter() before the second operation begins. This simple fix ensures that each measurement is independent and accurately reflects the runtime of only its corresponding code block. This mistake is especially common when you're running benchmarks in a sequence or inside a loop, so always be sure to reset your timer for each new measurement you take.
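Applied to a loop, the pattern looks like this sketch: the start reading is taken at the top of every iteration, so each duration is independent (the input sizes are arbitrary):

```python
import time

sizes = (1000, 10000, 100000)
durations = []
for size in sizes:
    start = time.perf_counter()  # reset at the top of every iteration
    sum(range(size))
    durations.append(time.perf_counter() - start)

for size, d in zip(sizes, durations):
    print(f"sum(range({size})): {d:.6f} seconds")
```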
Using time.time() for high-precision measurements
While time.time() is straightforward, it's not suited for high-precision tasks. Its resolution is often too low to measure fast operations, which can result in a reported time of zero and give you a false sense of your code's performance.
The following code shows this in action. Notice how it can fail to capture the execution time of a short loop, potentially returning a misleading result of 0.0 seconds.
import time
# Bug: Using time.time() for measuring short operations
start = time.time()
result = sum(range(1000))
end = time.time()
# This might show 0.0 seconds for very fast operations
print(f"Execution time: {end - start:.6f} seconds")
Because time.time() is tied to your system's clock, it lacks the granularity for quick tasks. This can result in a misleading zero-second duration. See how to get more accurate results with the following code.
import time
# Fixed: Using time.perf_counter() for high precision
start = time.perf_counter()
result = sum(range(1000))
end = time.perf_counter()
# This provides microsecond precision
print(f"Execution time: {end - start:.9f} seconds")
The corrected code swaps time.time() for time.perf_counter() to get a more accurate measurement. This function offers high-resolution timing, which is essential for short operations that might otherwise report a misleading zero-second duration. Because time.perf_counter() uses a monotonic clock, it isn't affected by system time changes, making it the reliable choice for any performance-critical benchmarking where precision matters.
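You can inspect both clocks' characteristics directly with time.get_clock_info(), which reports whether a clock is monotonic, whether the operating system can adjust it, and its resolution. A quick sketch:

```python
import time

# Compare the clock behind time.time() with the one behind
# time.perf_counter(); exact resolutions vary by platform.
wall = time.get_clock_info("time")
perf = time.get_clock_info("perf_counter")

print(f"time():         monotonic={wall.monotonic}, "
      f"adjustable={wall.adjustable}, resolution={wall.resolution}")
print(f"perf_counter(): monotonic={perf.monotonic}, "
      f"adjustable={perf.adjustable}, resolution={perf.resolution}")
```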
Including print() statements in timed code
It's tempting to use print() statements to track progress inside a timed loop, but this is a classic mistake. Console output is an I/O operation—much slower than in-memory computations—and will significantly inflate your execution time, skewing your results.
The following code demonstrates how including a print() call inside a loop can distort your performance measurements, making your code appear slower than it actually is.
import time
# Bug: Including print statements in the timed section
start = time.perf_counter()
numbers = []
for i in range(10000):
    numbers.append(i * i)
    print(f"Processed item {i}", end='\r')  # Slows down execution
end = time.perf_counter()
print(f"\nExecution time: {end - start:.6f} seconds")
The print() call runs on every iteration, mixing slow I/O operations with the computation you want to measure. This makes the final time inaccurate. The following code shows how to get a clean measurement.
import time
# Fixed: Keep print statements outside timed code
start = time.perf_counter()
numbers = []
for i in range(10000):
    numbers.append(i * i)
end = time.perf_counter()
print(f"Execution time: {end - start:.6f} seconds")
print(f"Processed {len(numbers)} items")
The corrected code isolates the computation by moving all print() calls outside the timed section. This gives you a clean measurement of just the list-building logic, free from the overhead of console output. Always separate your core logic from any debugging or progress-reporting print() statements when benchmarking. This ensures your results reflect the code's true performance, especially inside loops where the impact of I/O operations adds up quickly.
Real-world applications
Avoiding common timing errors prepares you to tackle real-world applications, from comparing algorithm performance to building simple API rate limiters.
Comparing sorting algorithm performance with time.perf_counter()
You can apply time.perf_counter() to see which of Python's sorting functions, sorted() or list.sort(), performs faster on the same set of data.
import time
import random
# Time and compare sorting algorithms
data = [random.randint(1, 1000) for _ in range(5000)]
start = time.perf_counter()
sorted(data)
builtin_time = time.perf_counter() - start
start = time.perf_counter()
data.sort()
inplace_time = time.perf_counter() - start
print(f"Built-in sorted(): {builtin_time:.6f} seconds")
print(f"List.sort(): {inplace_time:.6f} seconds")
This code demonstrates how to measure the performance of Python's two primary sorting methods. It first generates a list of random integers. Then, it uses time.perf_counter() to time two separate operations:
- The sorted() function, which creates and returns a new sorted list, leaving the original untouched.
- The list.sort() method, which modifies the original list directly, or "in-place."
The script prints the execution time for each approach, showing the duration for each distinct sorting strategy.
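For a more rigorous version of this comparison, you could lean on the timeit module covered earlier: sorting a fresh copy on every trial keeps list.sort() from ever receiving already-sorted input, at the cost of including the copy in its measurement (the run count here is an arbitrary choice):

```python
import timeit

# The setup string builds the random data once per timing run,
# outside the measured statement.
setup = ("import random; "
         "data = [random.randint(1, 1000) for _ in range(5000)]")

# sorted() never mutates data, so every run sees the same random list.
builtin_time = timeit.timeit("sorted(data)", setup=setup, number=200)

# Copy first so list.sort() also always starts from unsorted input.
inplace_time = timeit.timeit("data.copy().sort()", setup=setup, number=200)

print(f"sorted():           {builtin_time:.6f} seconds for 200 runs")
print(f"copy + list.sort(): {inplace_time:.6f} seconds for 200 runs")
```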
Implementing a simple API rate limiter with time.time()
The time.time() function is also great for controlling how often your code runs, making it a straightforward choice for building a simple API rate limiter.
import time
class SimpleRateLimiter:
    def __init__(self, interval):
        self.interval = interval
        self.last_check = 0
    def limit(self):
        now = time.time()
        if now - self.last_check < self.interval:
            time.sleep(self.interval - (now - self.last_check))
        self.last_check = time.time()
# Demo rate limiting
limiter = SimpleRateLimiter(interval=0.5) # One request per 0.5 seconds
for i in range(3):
    start = time.time()
    limiter.limit()
    print(f"Request {i+1} processed after {time.time() - start:.4f}s wait")
This code defines a SimpleRateLimiter class that throttles how often an operation can run. It’s a practical way to manage tasks like API calls without overwhelming a server.
- The limit method calculates the time elapsed since its last execution using time.time().
- If the duration is shorter than the required interval, it pauses the script with time.sleep() for the remaining time.
- After the check, it updates the last_check timestamp, effectively resetting the timer for the next operation.
Get started with Replit
Now, turn these techniques into a real tool. Describe what you want to build, like "a dashboard that uses a timing decorator to track function speed" or "an app that compares sorting algorithms using time.perf_counter()".
Replit Agent will write the code, test for errors, and deploy your application. Start building with Replit.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.