How to send an HTTP request in Python
Discover how to send HTTP requests in Python. This guide covers different methods, tips, real-world applications, and debugging.

The ability to send HTTP requests in Python is a core skill for developers. It allows applications to interact with web services and automate tasks across the internet.
In this article, you'll explore several techniques to send requests. You'll find practical tips, real-world applications, and debugging advice to help you handle any scenario.
Using the requests library for basic HTTP GET requests
import requests
response = requests.get('https://api.github.com')
print(f"Status code: {response.status_code}")
print(f"Content type: {response.headers['content-type']}")

Output:
Status code: 200
Content type: application/json; charset=utf-8
The requests library is popular because it simplifies complex interactions into single function calls. The requests.get() method sends a GET request to a URL and conveniently packages the server's entire reply into a Response object. This object holds not just the content but also crucial metadata about the response.
You can easily inspect this metadata to understand the result of your request:
- The response.status_code attribute provides the HTTP status. A code of 200 means the request was successful.
- The response.headers attribute acts like a dictionary, giving you access to headers like 'content-type', which tells you how to interpret the response body.
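Rather than checking response.status_code by hand every time, you can ask requests to raise an exception for error statuses with raise_for_status(). The sketch below constructs Response objects by hand purely so it runs without a network connection; in real code, requests.get() returns them for you.

```python
import requests

def check(response):
    # Raises requests.HTTPError for 4xx/5xx statuses; returns the response otherwise
    response.raise_for_status()
    return response

# Responses built by hand just for illustration; requests.get() returns real ones
ok = requests.models.Response()
ok.status_code = 200
check(ok)  # no exception raised

bad = requests.models.Response()
bad.status_code = 404
try:
    check(bad)
except requests.HTTPError as err:
    print(f"Request failed: {err}")
```

Calling raise_for_status() right after each request is a common way to turn silent HTTP failures into visible exceptions.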
Common HTTP request methods and parameters
While requests.get() is perfect for fetching data, you'll often need to send data or customize your requests with headers and query parameters.
Making POST requests with requests
import requests
data = {'key': 'value', 'another_key': 'another_value'}
response = requests.post('https://httpbin.org/post', data=data)
print(response.json()['form'])

Output:
{'key': 'value', 'another_key': 'another_value'}
When you need to send data to a server, like submitting a form, you'll use the requests.post() method. Unlike a GET request, a POST request includes a payload in its body.
- You pass a dictionary of your data to the data parameter.
- The requests library automatically encodes this dictionary and sends it to the specified URL.
The example uses httpbin.org/post, a service that echoes back the request it receives. By calling response.json(), you can parse the server's JSON reply and confirm it received your data correctly.
Working with request headers
import requests
headers = {'User-Agent': 'Python HTTP Client', 'Accept': 'application/json'}
response = requests.get('https://httpbin.org/headers', headers=headers)
print(response.json()['headers'])

Output:
{
  "Accept": "application/json",
  "Host": "httpbin.org",
  "User-Agent": "Python HTTP Client"
}
HTTP headers are key-value pairs that let you send extra information with your request. It's how you can tell a server what kind of client you're using or what content format you'd like back.
- The User-Agent header identifies your application.
- The Accept header specifies your preferred response format, like application/json.
With requests, you just pass a dictionary to the headers parameter. The example sends custom headers to httpbin.org, which echoes them back, confirming they were received correctly.
Handling query parameters
import requests
params = {'q': 'python', 'sort': 'stars'}
response = requests.get('https://api.github.com/search/repositories', params=params)
results = response.json()
print(f"Total results: {results['total_count']}")
print(f"First repository: {results['items'][0]['full_name']}")

Output:
Total results: 274198
First repository: vinta/awesome-python
Query parameters let you filter or customize the data you request from an API. The requests library simplifies this process by letting you pass a dictionary to the params argument. The library automatically formats these key-value pairs into a URL query string, so you don't have to worry about encoding.
- In this example, the params dictionary searches the GitHub API for repositories matching 'python'.
- It also sorts the results by the number of 'stars', demonstrating how you can combine multiple parameters to refine your request.
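If you're curious what URL those parameters actually produce, you can prepare a request without sending it. This small sketch uses requests.Request; no network call is made:

```python
import requests

params = {'q': 'python web scraping', 'sort': 'stars'}
# Prepare (but don't send) the request to inspect the encoded URL
prepared = requests.Request(
    'GET', 'https://api.github.com/search/repositories', params=params
).prepare()
print(prepared.url)
```

Spaces and special characters in the parameter values are encoded for you, which is exactly the busywork the params argument saves.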
Advanced HTTP techniques in Python
While single requests are useful, real-world applications often require more sophisticated techniques for managing sessions, improving speed, and handling network errors gracefully.
Using sessions for multiple requests
import requests
session = requests.Session()
session.headers.update({'User-Agent': 'Python HTTP Tutorial'})
response1 = session.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
response2 = session.get('https://httpbin.org/cookies')
print(response2.json())

Output:
{
  "cookies": {
    "sessioncookie": "123456789"
  }
}
A requests.Session object is perfect for making multiple requests to the same API. It persists information like cookies and headers, so you don't have to configure them for every call. This is essential for interacting with services that require a login or session state.
- In the example, a Session is created, and its first request receives a cookie from the server.
- The session automatically stores this cookie and sends it with the second request, proving that it maintains state across the interaction.
Asynchronous HTTP requests with aiohttp
import aiohttp
import asyncio
async def fetch(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

async def main():
    result = await fetch('https://python.org')
    print(f"Retrieved {len(result)} characters")

asyncio.run(main())

Output:
Retrieved 49531 characters
For tasks that involve a lot of waiting, like network requests, asynchronous code can offer a major performance boost. The aiohttp library, built on Python's asyncio framework, lets you send requests without blocking your application. This means your program can work on other things while waiting for a server to respond.
- Functions marked with async def are special coroutines that can be paused and resumed.
- The await keyword tells Python to pause the function, allowing other tasks to run, and to resume only when the awaited operation completes.
- An aiohttp.ClientSession manages the connection, much like a requests.Session, but it's designed for asynchronous code.
- Finally, asyncio.run() starts the event loop to execute the main coroutine.
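The real payoff comes when you run many requests concurrently with asyncio.gather(). The sketch below simulates the network wait with asyncio.sleep() so it runs with only the standard library and no connection; with aiohttp you would await session.get() in the same spot.

```python
import asyncio

async def fetch(url):
    # Stand-in for a real request: each "fetch" waits 0.1 seconds
    await asyncio.sleep(0.1)
    return f"response from {url}"

async def main():
    urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']
    # All three coroutines run concurrently: roughly 0.1s total, not 0.3s
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(main())
print(f"Fetched {len(results)} URLs")
```

Because the waits overlap, total time is governed by the slowest request rather than the sum of all of them.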
Implementing retry mechanisms with urllib3
import urllib3
from urllib3.util import Retry
from urllib3.exceptions import MaxRetryError
http = urllib3.PoolManager(retries=Retry(total=3, backoff_factor=0.5, status_forcelist=[500]))
try:
    response = http.request('GET', 'https://httpbin.org/status/500')
except MaxRetryError:
    print("Failed after 3 retry attempts")

Output:
Failed after 3 retry attempts
Network requests aren't always reliable, so building in a retry mechanism is a smart move. The urllib3 library makes this easy with its built-in Retry utility. You can configure a PoolManager to automatically re-attempt failed requests, which is crucial for creating resilient applications.
- The Retry object is configured with parameters like total=3, setting the maximum number of attempts, and status_forcelist, naming the HTTP status codes that should trigger a retry.
- A backoff_factor introduces a delay between retries, preventing you from overwhelming a struggling server.
- If all retries fail, urllib3 raises a MaxRetryError, which you can catch to handle the final failure gracefully.
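The same Retry object also plugs into requests through an HTTPAdapter, so you can keep the friendlier requests API and still get automatic retries. A sketch of the wiring (no requests are sent here):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util import Retry

retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
# Mount the adapter so every http:// and https:// request uses the retry policy
session.mount('https://', adapter)
session.mount('http://', adapter)
# session.get(...) calls now retry failed requests automatically
```

Mounting the adapter once on a session is usually simpler than wrapping every call in your own retry loop.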
Move faster with Replit
Replit is an AI-powered development platform that transforms natural language into working applications. Describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.
For the HTTP request techniques covered in this article, Replit Agent can turn them into production tools:
- Build a real-time stock market dashboard that fetches data from a financial API.
- Create a newsletter signup form backend that adds subscribers to a mailing list service.
- Deploy a price monitoring tool that asynchronously tracks product prices across multiple e-commerce sites.
Describe your app idea, and Replit Agent can write the code, run tests, and handle deployment for you, directly in your browser.
Common errors and challenges
Sending HTTP requests can sometimes lead to tricky situations, but knowing how to handle common errors makes all the difference.
Handling connection timeouts with the timeout parameter
A network request can sometimes hang if a server is slow or unresponsive. To prevent your application from getting stuck, you can set a timeout. The requests library lets you pass a timeout parameter with a value in seconds, which will raise an exception if the server doesn't respond in time, allowing you to handle the failure gracefully.
Dealing with SSL certificate verification using verify=False
By default, requests verifies the SSL certificate of the server to ensure your connection is secure. However, you might run into errors with internal services or development environments that use self-signed certificates. For these specific cases, you can disable this check by setting verify=False. Be very careful with this option—it makes your connection insecure and should be avoided in production environments.
Correctly sending JSON data with the json parameter
A common point of confusion is the difference between the data and json parameters in a POST request. While data sends form-encoded data, modern APIs almost always expect JSON. Instead of manually encoding your dictionary and setting headers, you can simply pass your dictionary to the json parameter. The requests library handles the rest, correctly setting the Content-Type header to application/json for you.
Handling connection timeouts with the timeout parameter
If you don't set a timeout, your application can get stuck waiting indefinitely for a slow server. It's a common bug that's easy to miss during development. The following code demonstrates how a simple requests.get() call can freeze.
import requests

def get_data(url):
    response = requests.get(url)
    return response.json()

# This could hang indefinitely if the server is slow
data = get_data('https://example.com/api/data')
print(data)
The get_data function makes a blocking call that will pause your program until the server responds. If the server never replies, your application freezes. The following example shows how to prevent this from happening.
import requests

def get_data(url):
    response = requests.get(url, timeout=5)  # 5 second timeout
    return response.json()

try:
    data = get_data('https://example.com/api/data')
    print(data)
except requests.exceptions.Timeout:
    print("Request timed out. The server might be slow or unavailable.")
By adding the timeout parameter to your requests.get() call, you set a deadline for the server to respond. If the server takes too long, requests raises a Timeout exception, which you can catch with a try...except block.
This prevents your application from freezing and allows you to handle the failure gracefully. It's a crucial practice for any code making external network requests, since you can't control the server's responsiveness.
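requests also accepts a (connect, read) tuple, letting you set separate deadlines for establishing the connection and for receiving the response. The helper below is a sketch; fetch_with_timeout is a hypothetical name, not part of the library.

```python
import requests

def fetch_with_timeout(url, connect_timeout=3.05, read_timeout=10):
    """Return the response, or None if either deadline passes."""
    try:
        # First value limits the connection phase, second limits waiting for data
        return requests.get(url, timeout=(connect_timeout, read_timeout))
    except requests.exceptions.Timeout:
        return None
```

A short connect timeout fails fast on unreachable hosts, while a longer read timeout tolerates servers that are slow to generate a response.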
Dealing with SSL certificate verification using verify=False
When you're developing locally, you might use a self-signed SSL certificate, which isn't trusted by default. The requests library will raise an SSLError because it can't verify the server's identity, protecting you from potential man-in-the-middle attacks. The following code demonstrates this common issue.
import requests
# This will fail if the site has SSL certificate issues
response = requests.get('https://localhost:8000/api')
print(response.status_code)
This simple requests.get() call fails because it's trying to connect to a local server with an untrusted certificate. The following example shows how you can work around this for development purposes.
import requests
# For development only - NOT recommended for production!
response = requests.get('https://localhost:8000/api', verify=False)
print(response.status_code)
# Better approach for production - specify a CA bundle
# response = requests.get('https://example.com/api', verify='/path/to/certfile')
By setting verify=False, you can bypass the security check, which is useful for connecting to local servers during development. However, this approach is insecure and shouldn't be used in production. For a production environment with a custom certificate, the correct method is to provide the path to your certificate bundle file. This ensures the connection is both secure and properly verified.
Correctly sending JSON data with the json parameter
It's a common mistake to use the data parameter when an API expects JSON. The requests library sends this as form-encoded content, not the JSON payload the server needs, which can lead to errors. The code below demonstrates this frequent pitfall.
import requests
data = {'name': 'John', 'age': 30}
# This sends form data, not JSON
response = requests.post('https://api.example.com/users', data=data)
print(response.status_code)
The API can't parse the form-encoded payload sent by the data parameter, leading to a request failure. The following code shows how to ensure the dictionary is sent as a proper JSON object.
import requests
data = {'name': 'John', 'age': 30}
# Use json parameter to send JSON data
response = requests.post('https://api.example.com/users', json=data)
print(response.status_code)
By passing your dictionary to the json parameter, you let the requests library handle the details. It automatically serializes the data to a JSON string and sets the Content-Type header to application/json. This ensures the server can correctly parse your payload. You'll want to use the json parameter whenever an API expects a JSON object. This is standard for most modern POST or PUT requests and prevents common server-side errors.
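You can verify the difference without a server by preparing both request variants and comparing the headers requests sets. The URL below is a placeholder; nothing is sent:

```python
import requests

payload = {'name': 'John', 'age': 30}
url = 'https://api.example.com/users'  # placeholder; the request is never sent

form_req = requests.Request('POST', url, data=payload).prepare()
json_req = requests.Request('POST', url, json=payload).prepare()

print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(json_req.headers['Content-Type'])  # application/json
print(json_req.body)                     # the JSON-serialized payload
```

The bodies differ too: the data version is a URL-encoded form string, while the json version is a serialized JSON document, which is what most modern APIs expect.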
Real-world applications
Combining tools like requests and the timeout parameter, you can build practical applications that interact with live web services.
Tracking the International Space Station with requests
A simple requests.get() call to the Open Notify API is all it takes to find the International Space Station's current latitude and longitude.
import requests
response = requests.get('http://api.open-notify.org/iss-now.json')
location = response.json()
latitude = location['iss_position']['latitude']
longitude = location['iss_position']['longitude']
print(f"The ISS is currently at latitude {latitude}, longitude {longitude}")
This example shows how to pull live data from a public API. The requests.get() function fetches information from the Open Notify service, which provides the ISS's real-time location as a JSON object.
The process is straightforward:
- The response.json() method parses the server's reply into a Python dictionary, making it easy to work with.
- You can then navigate this dictionary using keys like ['iss_position'] to access nested data.
- Finally, the code extracts the latitude and longitude values and prints them in a user-friendly format.
Creating a simple website monitoring tool with timeout parameter
You can combine a simple loop with the timeout parameter to build a basic tool that periodically checks whether a website is online and responsive.
import requests
import time

url = "https://www.python.org"
check_interval = 2  # seconds between checks
max_checks = 2

for i in range(max_checks):
    try:
        print(f"Check {i+1}: Requesting {url}")
        response = requests.get(url, timeout=5)
        print(f"Status: {response.status_code}, Length: {len(response.text)} characters")
    except requests.RequestException as e:
        print(f"Error: {e}")
    if i < max_checks - 1:
        print(f"Waiting {check_interval} seconds...")
        time.sleep(check_interval)
This script repeatedly checks a website's availability. It uses a for loop to send multiple requests with a pause between each one, controlled by time.sleep().
- The requests.get() call includes a timeout=5 parameter, which prevents the program from waiting more than five seconds for a slow server.
- A try...except block gracefully handles any network issues. Instead of crashing, it catches the requests.RequestException and prints the error message, allowing the loop to continue.
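A fixed check_interval retries a failing site just as aggressively as a healthy one. One common refinement is exponential backoff: wait longer after each consecutive failure. backoff_delays below is a hypothetical helper you could slot in place of the fixed sleep:

```python
def backoff_delays(base=2, factor=2, max_delay=60, attempts=5):
    # Delays grow geometrically (base, base*factor, base*factor**2, ...) capped at max_delay
    return [min(base * factor ** i, max_delay) for i in range(attempts)]

print(backoff_delays())  # [2, 4, 8, 16, 32]
```

Sleeping for each successive delay between failed checks eases the load on a struggling server while still detecting recovery reasonably quickly.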
Get started with Replit
Turn what you've learned into a real tool. Tell Replit Agent to “build a website uptime checker” or “create a dashboard that tracks cryptocurrency prices from a public API”.
Replit Agent writes the code, tests for errors, and deploys your app for you. Start building with Replit.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.