How to measure time in Python
Learn how to measure time in Python. Discover different methods, tips, real-world applications, and how to debug common timing errors.

Measuring time in Python is crucial for performance analysis, code optimization, and task scheduling. Python offers built-in modules that provide precise, flexible ways to track execution time.
In this article, you'll explore several techniques for time measurement. You'll find practical tips, real-world applications, and debugging advice to help you select the right approach for your use case.
Using the time module for basic timing
import time
start_time = time.time()
time.sleep(1) # Simulating work
end_time = time.time()
print(f"Execution time: {end_time - start_time:.4f} seconds")

Output:
Execution time: 1.0010 seconds
The time module offers a straightforward way to clock execution speed. The key is the time.time() function, which returns the current time as a floating-point number of seconds since the epoch, a universal reference point.
By capturing this value before and after your code runs, you can calculate the elapsed time. The logic is simple:
- Call time.time() to get a starting timestamp.
- Execute the code you want to measure.
- Call time.time() again for an ending timestamp.
The difference between these two timestamps is your code's execution time in seconds.
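The three steps above can be folded into a small reusable helper. Here's a minimal sketch; measure is an illustrative name, not a standard-library function:

```python
import time

def measure(func, *args, **kwargs):
    # Step 1: capture a starting timestamp
    start = time.time()
    # Step 2: run the code being measured
    result = func(*args, **kwargs)
    # Step 3: capture an ending timestamp; the difference is the elapsed time
    elapsed = time.time() - start
    return result, elapsed

result, elapsed = measure(sum, range(1_000_000))
print(f"sum took {elapsed:.4f} seconds")
```

Returning the result alongside the timing means the helper doesn't change how the measured function is used.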
Standard time measurement techniques
While time.time() is great for quick checks, Python offers more specialized tools for benchmarking, high-precision tasks, and working with calendar time.
Measuring time with the datetime module
from datetime import datetime
start_time = datetime.now()
for _ in range(1000000):
    pass
end_time = datetime.now()
print(f"Execution time: {(end_time - start_time).total_seconds():.4f} seconds")

Output:
Execution time: 0.0456 seconds
The datetime module provides a more object-oriented way to handle time. Unlike time.time(), which returns a float, datetime.now() returns a datetime object that bundles date and time information together. Subtracting two of these objects creates a timedelta object, which represents a duration rather than a specific point in time. The output examples throughout this article use f-string formatting for clean time display.
- To convert this duration into a usable number, call the .total_seconds() method on the timedelta object.
- This approach is especially intuitive when your calculations involve calendar-aware logic, not just simple performance timing.
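To make the timedelta behavior concrete, here's a short sketch with fixed timestamps:

```python
from datetime import datetime, timedelta

start = datetime(2024, 3, 1, 23, 30)
end = datetime(2024, 3, 2, 1, 15)

# Subtracting two datetime objects yields a timedelta (a duration)
duration = end - start
print(duration.total_seconds())  # 6300.0 (1 hour 45 minutes)

# timedelta also supports calendar-aware arithmetic
deadline = start + timedelta(days=7)
print(deadline)  # 2024-03-08 23:30:00
```

Note that the subtraction correctly handles the midnight rollover, which is exactly the kind of calendar-aware logic that makes datetime worth the extra object overhead.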
Benchmarking with the timeit module
import timeit
execution_time = timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
print(f"Execution time: {execution_time:.4f} seconds")

Output:
Execution time: 0.2876 seconds
For serious benchmarking, the timeit module is your best bet. It’s designed to measure small code snippets with high accuracy by running them multiple times—in this case, 10,000 times as set by the number parameter. This repetition helps average out system noise for a more reliable result.
- The first argument to timeit.timeit() is the code you want to test, passed as a string.
- The function returns the total time it took to execute the code for the specified number of loops.
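When you want more stable numbers, timeit.repeat() reruns the whole benchmark several times; taking the minimum of the runs is a common way to filter out transient system noise:

```python
import timeit

# repeat() returns a list of totals, one per full run of `number` executions
times = timeit.repeat(
    stmt='"-".join(str(n) for n in range(100))',
    repeat=5,
    number=10000,
)
print(f"Best of 5 runs: {min(times):.4f} seconds")
```

The minimum is usually a better estimate than the mean here, because interference from other processes can only make a run slower, never faster.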
High-precision timing with time.perf_counter()
import time
start_time = time.perf_counter()
[i**2 for i in range(10000)]
end_time = time.perf_counter()
print(f"Execution time: {end_time - start_time:.8f} seconds")

Output:
Execution time: 0.00123456 seconds
When you need maximum precision for measuring short durations, time.perf_counter() is the right tool for the job. It provides access to a high-resolution monotonic clock, a timer that only ever moves forward, making it ideal for reliably measuring elapsed intervals.
- Unlike time.time(), its measurements aren't thrown off by system time adjustments like daylight saving changes.
- The clock's starting point is undefined, so its value is only meaningful when you subtract a start time from an end time to find the duration.
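For extremely short intervals, the related time.perf_counter_ns() reads the same clock as an integer count of nanoseconds, which sidesteps floating-point rounding:

```python
import time

# perf_counter_ns() returns the monotonic clock as integer nanoseconds
start = time.perf_counter_ns()
total = sum(range(1000))
end = time.perf_counter_ns()
print(f"Elapsed: {end - start} ns")

# The clock's absolute value is arbitrary; only differences are meaningful
print(time.perf_counter())
```

The second print illustrates the point above: the raw counter value is not a calendar time and should never be interpreted on its own.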
Advanced time measurement approaches
Beyond these standard functions, you can create more sophisticated, reusable timing tools or use profilers to analyze your code's performance in greater detail.
Creating a timing context manager
import contextlib
import time
@contextlib.contextmanager
def timer():
    start = time.perf_counter()
    yield
    end = time.perf_counter()
    print(f"Execution time: {end - start:.4f} seconds")

with timer():
    sum(i**2 for i in range(100000))

Output:
Execution time: 0.0123 seconds
A context manager offers a clean, reusable way to time code blocks. It wraps the timing logic around your code using a with statement, so you don’t have to manually place start and end time calls every time.
- The @contextlib.contextmanager decorator turns a generator function into a context manager.
- Code before the yield keyword runs when entering the with block, starting the time.perf_counter() clock.
- Code after yield executes upon exiting the block, where it captures the end time and prints the duration.
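If you prefer to read the elapsed time back instead of only printing it, the same idea can be written as a class-based context manager. This is a sketch; the Timer name is illustrative:

```python
import time

class Timer:
    # __enter__ starts the clock; __exit__ stops it and stores the result
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.perf_counter() - self.start
        return False  # don't suppress exceptions from the with block

with Timer() as t:
    sum(i**2 for i in range(100000))
print(f"Execution time: {t.elapsed:.4f} seconds")
```

Storing the duration on the object makes it easy to log, compare, or assert on timings rather than just printing them.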
Building a timing decorator
import functools
import time
def timing_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.4f} seconds")
        return result
    return wrapper

@timing_decorator
def calculate_squares(n):
    return [i**2 for i in range(n)]

calculate_squares(100000)

Output:
calculate_squares took 0.0123 seconds
A decorator is a powerful way to add functionality to an existing function. The timing_decorator wraps another function, allowing you to measure its execution time without modifying its internal logic; you apply it with the @timing_decorator syntax above your function definition. Understanding how return works in Python matters here, because the wrapper must pass the original function's result back to the caller.
- The wrapper function inside the decorator records the start time, runs the original function, and then records the end time before printing the duration.
- Using @functools.wraps is crucial: it ensures the decorated function keeps its original name and other metadata.
- The decorator also returns the original function's result, so it doesn't interfere with how the function is used elsewhere.
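To see concretely what @functools.wraps preserves, compare a decorator with and without it (both decorator names here are illustrative):

```python
import functools

def plain_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def wrapped_decorator(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@plain_decorator
def add(a, b):
    return a + b

@wrapped_decorator
def mul(a, b):
    return a * b

print(add.__name__)  # wrapper  (metadata lost)
print(mul.__name__)  # mul      (metadata preserved)
```

Without wraps, tools like debuggers, profilers, and help() see every decorated function as "wrapper", which makes timing reports much harder to read.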
Profiling code with cProfile
import cProfile
def complex_operation():
    return sum(i**3 for i in range(100000))

cProfile.run('complex_operation()')

Output:
4 function calls in 0.021 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.000    0.000    0.021    0.021 <string>:1(<module>)
     1    0.021    0.021    0.021    0.021 <stdin>:1(complex_operation)
     1    0.000    0.000    0.021    0.021 {built-in method builtins.exec}
     1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
When you need to know where your code is spending its time, not just how much time it takes overall, cProfile is the tool. It’s a built-in profiler that gives you a function-by-function breakdown of performance, helping you find bottlenecks.
By calling cProfile.run() with a string of code, you get a detailed report. The key columns are:
- ncalls: The number of times each function was called.
- tottime: The total time spent within a given function, excluding sub-calls.
- cumtime: The cumulative time spent in a function and all its sub-functions.
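For longer reports, the standard pstats module can sort and trim the output. Here's a sketch that captures the profile programmatically and prints the top five entries by cumulative time:

```python
import cProfile
import io
import pstats

def complex_operation():
    return sum(i**3 for i in range(100000))

# Collect stats into a Profile object instead of printing immediately
profiler = cProfile.Profile()
profiler.enable()
complex_operation()
profiler.disable()

# pstats sorts and filters the report before printing it
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Sorting by "cumulative" surfaces the call chains where your program actually spends its time, which is usually the first question when hunting bottlenecks.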
Move faster with Replit
Replit is an AI-powered development platform where all Python dependencies come pre-installed, so you can skip setup and start coding instantly. Instead of just timing individual functions, you can use Agent 4 to build a complete application from a simple description. It handles writing the code, connecting to databases, and even deployment.
Instead of piecing together timing techniques, you can describe the performance-focused tool you want to build, and the Agent will construct it:
- A performance dashboard that automatically benchmarks different function implementations using timeit and displays the fastest one.
- A custom logger that uses a timing decorator to record and report the execution time of critical functions in your application.
- A bottleneck finder that runs cProfile on a block of code and summarizes which functions are taking the most time.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
Navigating Python's timing tools requires care, as a few common mistakes can easily lead to inaccurate results.
Avoiding timer reuse errors with time.time()
When using time.time(), it’s easy to accidentally reuse the same variable for multiple start times, especially within a loop. This mistake overwrites your initial timestamp, making it seem like each operation took almost no time. To get accurate results, always assign the start time to a fresh variable for each distinct measurement.
Accounting for function call overhead with timeit
While timeit is excellent for precision, its measurements include the overhead of the function call itself. For extremely fast operations, this overhead can represent a significant portion of the reported time. This is precisely why timeit runs code snippets thousands of times—it averages out this noise to give you a more realistic performance figure for the code alone.
Using time.process_time() vs time.perf_counter() correctly
Choosing between time.process_time() and time.perf_counter() depends entirely on what you need to measure. They serve different purposes and aren't interchangeable.
- time.process_time() measures only the CPU time your process uses, ignoring periods when it's idle (like during time.sleep()). It's best for gauging how much CPU work a specific algorithm performs.
- time.perf_counter() measures wall-clock time, which is the total real-world duration from start to finish. It includes everything, even time spent sleeping or waiting for I/O, making it the right choice for measuring a task's overall runtime.
Avoiding timer reuse errors with time.time()
When timing multiple operations back-to-back, it's crucial to reset your starting point for each one. If you reuse the initial timestamp, your second measurement will be wrong because it includes the first operation's runtime. Notice how this plays out below.
import time
start_time = time.time()
result1 = [i**2 for i in range(10000)]
print(f"First operation: {time.time() - start_time:.6f} seconds")
result2 = [i**3 for i in range(10000)]
print(f"Second operation: {time.time() - start_time:.6f} seconds") # Wrong
The second operation's reported time is inflated because it's measured from the initial start_time, bundling both tasks together. The corrected approach below shows how to time each operation independently for an accurate comparison.
import time
start_time1 = time.time()
result1 = [i**2 for i in range(10000)]
print(f"First operation: {time.time() - start_time1:.6f} seconds")
start_time2 = time.time()
result2 = [i**3 for i in range(10000)]
print(f"Second operation: {time.time() - start_time2:.6f} seconds")
The solution is to reset your timer for each task. By assigning the start time to a new variable—like start_time1 and start_time2—you ensure each operation is timed independently. This prevents the runtime of one task from bleeding into the next. It's a simple but crucial step for accurately comparing the performance of sequential code blocks, especially when you're trying to pinpoint which part of your script is slower.
Accounting for function call overhead with timeit
The timeit module is precise, but it measures more than just your code. It also includes the time spent on the function call itself. For very fast operations, this overhead can skew the results, making your code seem slower. The example below shows this effect.
import timeit
def operation():
    return sum(i**2 for i in range(1000))
# Includes function call overhead
time_taken = timeit.timeit('operation()', globals=globals(), number=1000)
print(f"Time: {time_taken:.6f} seconds")
The measurement is slightly inflated because timeit executes the string 'operation()', which bundles the function invocation with the code's actual work. The following example shows how to isolate the logic for a more accurate benchmark.
import timeit
# Direct timing avoids function call overhead
time_taken = timeit.timeit('sum(i**2 for i in range(1000))', number=1000)
print(f"Time with direct code: {time_taken:.6f} seconds")
Passing the code directly to timeit.timeit() as a string gives you a more accurate benchmark. This method strips away the function call overhead, which can otherwise inflate the timing for quick operations. You're measuring just the logic, not the invocation. This is crucial for micro-benchmarking, where you need to isolate the performance of a specific code snippet. It ensures your results reflect the code's true execution speed.
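As a middle ground, timeit.timeit() also accepts a callable directly, which skips compiling a string and avoids the globals= plumbing. The call overhead is still included, but the setup is simpler:

```python
import timeit

def operation():
    return sum(i**2 for i in range(1000))

# Passing the function object itself; timeit calls it `number` times
time_taken = timeit.timeit(operation, number=1000)
print(f"Time: {time_taken:.6f} seconds")
```

This form is handy when the function already exists in your code and you don't want to duplicate its body as a string.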
Using time.process_time() vs time.perf_counter() correctly
Choosing between time.process_time() and time.perf_counter() is crucial because they measure different things. The first tracks only CPU activity, ignoring idle time, while the second measures total elapsed time. Using the wrong one gives you misleading results.
The following code shows what happens when you try to measure a sleep duration with time.process_time(). Because sleeping doesn't use the CPU, the result is nearly zero—a completely inaccurate measurement of the total wait time.
import time
# Wrong: using process_time for wall-clock time
start = time.process_time()
time.sleep(1) # Sleep won't be counted
end = time.process_time()
print(f"Sleep time: {end - start:.6f} seconds") # Nearly 0
Since time.process_time() only measures CPU activity, it ignores the idle time from time.sleep(), resulting in a misleadingly short duration. The following example demonstrates the proper way to measure total elapsed time.
import time
# Correct: perf_counter for wall-clock time
start = time.perf_counter()
time.sleep(1)
end = time.perf_counter()
print(f"Sleep time: {end - start:.6f} seconds") # ~1 second
Using time.perf_counter() is the correct approach for measuring total elapsed time because it tracks wall-clock duration. This is crucial for tasks that aren't purely CPU-bound.
- It accurately measures operations that include idle time, like time.sleep() or waiting for I/O.
- Choose this function whenever you need to know the real-world runtime from start to finish, not just the time your CPU was active.
Real-world applications
With the common pitfalls covered, you can now apply these timing tools to compare list.sort() and sorted() or measure web API response times. AI coding can help you build more sophisticated timing applications beyond these basic examples.
Comparing list.sort() and sorted() performance
Although list.sort() and sorted() both arrange your data, their performance isn't identical since one operates in-place and the other creates a new list.
import time
import random
# Create test data - a list of 10000 random numbers
data = [random.randint(1, 1000) for _ in range(10000)]
# Time built-in sort vs sorted()
test_list1 = data.copy()
start = time.perf_counter()
test_list1.sort()
sort_time = time.perf_counter() - start
test_list2 = data.copy()
start = time.perf_counter()
sorted_list = sorted(test_list2)
sorted_time = time.perf_counter() - start
print(f"list.sort() time: {sort_time:.8f} seconds")
print(f"sorted() time: {sorted_time:.8f} seconds")
This code sets up a performance test to compare Python's two primary sorting methods. It uses time.perf_counter() to precisely measure the execution speed of list.sort() against the sorted() function on identical datasets.
- First, it generates a list of 10,000 random numbers to serve as the test data.
- To ensure a fair comparison, it uses data.copy() so each function sorts a fresh, identical list.
The final output prints the time taken for each operation, allowing for a direct speed comparison on the given data.
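For stabler numbers, the same comparison can be run through timeit.repeat(). In this sketch, the setup string reruns before each repetition so every run sorts a fresh copy:

```python
import random
import timeit

data = [random.randint(1, 1000) for _ in range(10000)]

# setup runs once per repetition, so each run gets an unsorted copy
sort_time = min(timeit.repeat(
    "lst.sort()", setup="lst = data.copy()",
    globals={"data": data}, repeat=5, number=1))
sorted_time = min(timeit.repeat(
    "sorted(lst)", setup="lst = data.copy()",
    globals={"data": data}, repeat=5, number=1))

print(f"list.sort(): {sort_time:.8f} seconds")
print(f"sorted():    {sorted_time:.8f} seconds")
```

Resetting the data in setup matters because sorting an already-sorted list is much faster, which would otherwise bias repeated runs.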
Measuring web API response times with requests
By pairing time.perf_counter() with the requests library, you can easily measure how long it takes to fetch data from an external web API. For a deeper understanding of calling APIs in Python, you can explore various request methods and techniques.
import time
import requests
# Define endpoints to test
endpoints = {
    "todos": "https://jsonplaceholder.typicode.com/todos",
    "users": "https://jsonplaceholder.typicode.com/users",
    "posts": "https://jsonplaceholder.typicode.com/posts"
}
# Measure and compare response times
for name, url in endpoints.items():
    start = time.perf_counter()
    response = requests.get(url)
    end = time.perf_counter()
    print(f"{name}: {end - start:.4f} seconds, {len(response.json())} items")
This script benchmarks the response time for several API endpoints. It loops through a dictionary of URLs, using time.perf_counter() to precisely clock the duration of each requests.get() call. It's a practical way to check the performance of external services.
- A timestamp is captured right before the network request is sent.
- Another is taken immediately after the response arrives.
- The difference between them reveals the total time for each API call, helping you understand network latency and server speed.
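A single sample can be noisy, so in practice it helps to collect several timings per endpoint and summarize them with the statistics module. In this sketch, fetch is a local stand-in for requests.get so the example runs without a network connection:

```python
import statistics
import time

def fetch(url):
    # Stand-in for requests.get(url); replace with a real request in practice
    time.sleep(0.01)
    return {"url": url}

# Collect several samples, since any one request can hit a slow moment
samples = []
for _ in range(5):
    start = time.perf_counter()
    fetch("https://jsonplaceholder.typicode.com/todos")
    samples.append(time.perf_counter() - start)

print(f"mean:   {statistics.mean(samples):.4f} seconds")
print(f"median: {statistics.median(samples):.4f} seconds")
print(f"stdev:  {statistics.stdev(samples):.4f} seconds")
```

The median is often more informative than the mean for network latency, since occasional slow responses can pull the mean far above the typical case.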
Get started with Replit
Put your knowledge into practice. Describe a tool to Replit Agent, like “a script to benchmark two functions with timeit” or “an app that measures API response times and displays them in a table.”
The Agent writes the code, tests for errors, and deploys your application. All you need is an idea. Start building with Replit.
Describe what you want to build, and Replit Agent writes the code, handles the infrastructure, and ships it live. Go from idea to real product, all in your browser.