How to run a curl command in Python
Learn how to run curl commands in Python. Discover different methods, tips, real-world applications, and how to debug common errors.

You can replicate curl commands in Python to automate web requests and data transfers. Python offers several libraries that wrap or replace curl, simplifying complex network operations within your scripts.
This article explores various techniques for running curl-style requests, complete with practical implementation tips. It also covers real-world applications and debugging advice for common errors.
Using the requests library for basic HTTP requests
```python
import requests

response = requests.get('https://httpbin.org/get')
print(response.status_code)
print(response.json())
```
Output:
```
200
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.28.1', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
```
Instead of directly running curl, you can use Python's requests library for a more native approach to HTTP requests. The code demonstrates this by sending a GET request to https://httpbin.org/get, which mirrors what a basic curl command does.
- The `response.status_code` attribute gives you the HTTP status, confirming if the request succeeded.
- The `response.json()` method is particularly useful: it parses the JSON response directly into a Python dictionary, saving you a manual decoding step.
Standard library approaches
Beyond third-party libraries like requests, you can use Python's built-in modules or a direct wrapper like pycurl for handling your network requests.
Using urllib for HTTP requests
```python
import urllib.request
import json

with urllib.request.urlopen('https://httpbin.org/get') as response:
    data = json.loads(response.read().decode('utf-8'))

print(data)
```
Output:
```
{'args': {}, 'headers': {'Accept-Encoding': 'identity', 'Host': 'httpbin.org', 'User-Agent': 'Python-urllib/3.9', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
```
Python's built-in urllib module offers another way to handle HTTP requests without external dependencies. The urllib.request.urlopen() function sends the request and returns a response object.
- Unlike the `requests` library, processing the response requires a few extra steps. You must first read the raw response bytes with `response.read()`, decode them into a string using `.decode('utf-8')`, and finally parse the JSON string into a dictionary with `json.loads()`.
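If you need to send custom headers, the way `curl -H` does, you can build a `urllib.request.Request` object before opening it. The sketch below only constructs and inspects the request, so it makes no network call; the `my-script/1.0` agent string is just an illustrative value. Note that urllib normalizes header names internally with `str.capitalize()`:

```python
import urllib.request

# Build a request carrying a custom header, like `curl -H 'User-Agent: ...'`
req = urllib.request.Request(
    'https://httpbin.org/get',
    headers={'User-Agent': 'my-script/1.0'},
)

# urllib stores header names in capitalized form, so look up 'User-agent'
print(req.get_header('User-agent'))
```

Passing `req` to `urllib.request.urlopen(req)` would then send it with the header attached.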
Using http.client for lower-level control
```python
import http.client
import json

conn = http.client.HTTPSConnection("httpbin.org")
conn.request("GET", "/get")
response = conn.getresponse()
data = json.loads(response.read().decode())
print(f"Status: {response.status}, Data: {data}")
conn.close()
```
Output:
```
Status: 200, Data: {'args': {}, 'headers': {'Host': 'httpbin.org', 'User-Agent': 'Python-http.client/3.9', 'X-Amzn-Trace-Id': 'Root=1-abc123def456'}, 'origin': '203.0.113.1', 'url': 'https://httpbin.org/get'}
```
For more granular control over your HTTP requests, you can use Python's http.client module. It operates at a lower level than urllib, meaning you manage each step of the connection process yourself. This approach offers more power but requires more explicit code.
- You first establish a connection with `http.client.HTTPSConnection()` before sending the request.
- After sending the request with `conn.request()`, you retrieve the response using `conn.getresponse()`.
- Crucially, you must remember to close the connection manually with `conn.close()` once you're done.
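Because that manual `close()` call is easy to forget when an exception interrupts the request, `http.client` pairs well with `contextlib.closing`, which guarantees cleanup. The sketch below uses a hypothetical stand-in class instead of a live `HTTPSConnection` so it runs without network access, but the pattern is identical:

```python
import contextlib

class FakeConnection:
    """Stand-in for http.client.HTTPSConnection, used here to avoid a network call."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

conn = FakeConnection()
with contextlib.closing(conn):
    pass  # conn.request(...) / conn.getresponse() work would happen here

print(conn.closed)  # True: close() ran automatically when the block exited
```

With a real connection you would write `with contextlib.closing(http.client.HTTPSConnection("httpbin.org")) as conn:` and drop the explicit `conn.close()`.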
Using pycurl for curl-like functionality
```python
import pycurl
from io import BytesIO

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.WRITEDATA, buffer)
c.perform()
c.close()
print(buffer.getvalue().decode())
```
Output:
```
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "PycURL/7.45.1",
    "X-Amzn-Trace-Id": "Root=1-abc123def456"
  },
  "origin": "203.0.113.1",
  "url": "https://httpbin.org/get"
}
```
The pycurl library is a Python interface for libcurl, offering a way to make requests that closely mirrors the command-line tool. It gives you fine-grained control over the request process.
- You start by creating an in-memory binary stream with `BytesIO` to capture the response.
- Next, you configure a `Curl` object using `setopt` to define the URL and tell it where to write the data, in this case your buffer.
- Finally, `perform()` sends the request, and you must remember to call `close()` to terminate the session.
Advanced techniques
While Python-native libraries are powerful, you can also execute curl commands directly using subprocess, handle POST requests, or manage asynchronous calls for more complex workflows.
Running actual curl commands with subprocess
```python
import subprocess
import json

result = subprocess.run(['curl', '-s', 'https://httpbin.org/get'], capture_output=True, text=True)
data = json.loads(result.stdout)
print(f"Trace ID from headers: {data.get('headers', {}).get('X-Amzn-Trace-Id')}")
```
Output:
```
Trace ID from headers: Root=1-abc123def456
```
The subprocess module allows you to execute shell commands directly from your Python script. You can use the subprocess.run() function to run the command, which you pass as a list of strings like ['curl', '-s', 'https://httpbin.org/get']. This approach is useful when you need the exact behavior of the curl command-line tool.
- The argument `capture_output=True` is key: it ensures the command's output is captured for use in your script.
- Setting `text=True` decodes the output into a standard string, making it easier to work with.
- You can then access the captured output via the `result.stdout` attribute and process it, for example, by parsing it with `json.loads()`.
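When porting an existing curl command to subprocess, splitting the shell string by hand is error-prone once quoting is involved. The standard library's `shlex.split` applies shell quoting rules for you; this sketch runs entirely offline:

```python
import shlex

# A curl command as you would type it in a shell, including a quoted header
cmd = "curl -s -H 'Accept: application/json' https://httpbin.org/get"

args = shlex.split(cmd)
print(args)
# ['curl', '-s', '-H', 'Accept: application/json', 'https://httpbin.org/get']
```

The resulting `args` list can be passed straight to `subprocess.run(args, capture_output=True, text=True)`.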
Making POST requests with requests library
```python
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
headers = {'Content-Type': 'application/json', 'User-Agent': 'MyCustomAgent/1.0'}
response = requests.post('https://httpbin.org/post', json=payload, headers=headers)
print(response.json()['json'])
```
Output:
```
{'key1': 'value1', 'key2': 'value2'}
```
Sending data is straightforward with the requests.post() method. You simply package your data into a Python dictionary, which serves as the request's payload. The library handles the heavy lifting for you.
- Passing your dictionary to the `json` parameter automatically converts it to a JSON string and sets the correct `Content-Type` header.
- You can also include custom `headers` to specify details like a unique `User-Agent`, giving you more control over the request.
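It helps to know what actually goes on the wire: curl's `-d` flag sends form-encoded data by default, while the `json=` parameter shown above sends a JSON document. A stdlib-only comparison of the two encodings for the same payload:

```python
import json
from urllib.parse import urlencode

payload = {'key1': 'value1', 'key2': 'value2'}

# What `curl -d 'key1=value1&key2=value2'` sends
# (Content-Type: application/x-www-form-urlencoded)
print(urlencode(payload))   # key1=value1&key2=value2

# What requests' json= parameter sends (Content-Type: application/json)
print(json.dumps(payload))  # {"key1": "value1", "key2": "value2"}
```

With requests, the form-encoded variant is `requests.post(url, data=payload)` and the JSON variant is `requests.post(url, json=payload)`.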
Asynchronous HTTP requests with aiohttp
```python
import aiohttp
import asyncio

async def fetch_data():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://httpbin.org/get') as response:
            data = await response.json()
            return data['headers']['Host']

print(asyncio.run(fetch_data()))
```
Output:
```
httpbin.org
```
For handling multiple requests concurrently without blocking your program, you can use the aiohttp library. It integrates with Python's native asyncio framework, allowing you to manage network operations efficiently. This approach is ideal when you need to perform many I/O-bound tasks, like making several API calls at once.
- The function is defined with `async def`, marking it as a coroutine that can be paused and resumed.
- Inside, `async with` statements manage the `aiohttp.ClientSession` and the request, ensuring resources are cleaned up properly.
- The `await` keyword pauses the function until the network request is complete and the JSON is parsed.
- Finally, `asyncio.run()` starts the event loop and executes your coroutine.
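The real payoff comes from `asyncio.gather`, which runs several awaitables concurrently. The sketch below simulates two slow requests with `asyncio.sleep` instead of real aiohttp calls (the `fake_fetch` coroutine is a stand-in), so it runs offline and shows that the total runtime is roughly one delay, not the sum of both:

```python
import asyncio
import time

async def fake_fetch(name, delay):
    # Stands in for an aiohttp request; asyncio.sleep simulates network latency
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.perf_counter()
    # Both "requests" run concurrently, so this takes ~0.2s, not ~0.4s
    results = await asyncio.gather(fake_fetch('site-a', 0.2), fake_fetch('site-b', 0.2))
    elapsed = time.perf_counter() - start
    print(results, f"{elapsed:.2f}s")
    return results, elapsed

asyncio.run(main())
```

With aiohttp you would gather several `session.get(...)` coroutines the same way, sharing one `ClientSession`.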
Move faster with Replit
Replit is an AI-powered development platform that transforms natural language into working applications. Describe what you want to build, and Replit Agent creates it—complete with databases, APIs, and deployment.
For the HTTP request techniques covered in this article, Replit Agent can turn them into production-ready tools:
- Build a real-time stock tracker that fetches data from a financial API using `requests`.
- Create a social media bot that automatically posts updates to an API with custom headers.
- Deploy a web scraper that concurrently gathers data from multiple websites using `aiohttp`.
Describe your app idea, and Replit Agent writes the code, tests it, and fixes issues automatically, all in your browser. Try building your next tool with Replit Agent.
Common errors and challenges
Even with the right tools, you can run into common pitfalls like network timeouts, HTTP errors, and SSL certificate problems.
Handling timeouts with the requests library
Network requests can sometimes hang if a server is slow to respond, potentially freezing your application. By default, the requests library will wait indefinitely for a response. The following code shows how a simple get request can get stuck.
```python
import requests

def fetch_data(url):
    response = requests.get(url)
    return response.json()

# This might hang indefinitely if the server is slow
data = fetch_data('https://httpbin.org/delay/10')
print(data)
```
The fetch_data function calls a URL designed to delay its response by ten seconds. Since no timeout is specified, your script will pause for the entire duration. The corrected code below shows how to prevent this.
```python
import requests

def fetch_data(url):
    response = requests.get(url, timeout=5)
    return response.json()

try:
    data = fetch_data('https://httpbin.org/delay/10')
    print(data)
except requests.exceptions.Timeout:
    print("The request timed out")
```
To prevent your application from hanging, you can add a `timeout` parameter to your `requests.get()` call. This sets a maximum wait time in seconds. If the server fails to respond within this window, a `requests.exceptions.Timeout` error is raised.
By wrapping the request in a `try...except` block, you can catch this error and handle it gracefully, preventing your script from crashing. It's a crucial safeguard for any application making external network calls where response times are unpredictable.
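The `timeout` parameter is per-call. If you want a process-wide safety net, for example for `urllib` code you don't control, the standard library lets you set a default timeout on every new socket. This is an illustration of the stdlib `socket` facility, not a requests feature (requests manages its own timeouts):

```python
import socket

# Any socket created after this call inherits a 5-second timeout
socket.setdefaulttimeout(5)
print(socket.getdefaulttimeout())  # 5.0

# A plain urllib.request.urlopen(url) would now give up after 5 seconds
# instead of waiting indefinitely
```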
Proper error handling for HTTP status codes
When an HTTP request fails, the server returns an error status code like 404 (Not Found). If your code doesn't check the response status first, it can cause unexpected errors when trying to parse a non-existent JSON body. The following code demonstrates this exact problem.
```python
import requests

response = requests.get('https://httpbin.org/status/404')
data = response.json()  # Will raise an exception for 404 response
print(data)
```
This code raises an exception because it calls response.json() on a 404 error, which has no JSON body to parse. The corrected approach below demonstrates how to first verify the response status before proceeding.
```python
import requests

response = requests.get('https://httpbin.org/status/404')
if response.status_code == 200:
    data = response.json()
    print(data)
else:
    print(f"Error: Received status code {response.status_code}")
```
A more robust solution is to check the response.status_code before trying to parse the body. An if statement can confirm the request was successful—a status code of 200—before you call response.json(). This simple check prevents your script from crashing when it receives an error code like 404 (Not Found). It's a fundamental practice for building reliable applications that interact with external APIs, where responses can be unpredictable.
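If you'd rather fail loudly, requests also provides `response.raise_for_status()`, which raises an `HTTPError` for any 4xx or 5xx response. And for friendlier log messages, the standard library's `http.HTTPStatus` enum maps codes to their reason phrases; since it's an `IntEnum`, numeric comparisons still work:

```python
from http import HTTPStatus

status = HTTPStatus(404)
print(int(status), status.phrase)   # 404 Not Found

# IntEnum members compare like plain integers
print(200 <= status < 300)          # False: not in the success range
print(200 <= HTTPStatus.OK < 300)   # True
```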
Dealing with SSL certificate verification issues
When you make an HTTPS request, your client verifies the server's SSL certificate to ensure a secure connection. If the certificate is expired or invalid, libraries like requests will raise an error by default. It's a crucial security feature.
The following code attempts to connect to a site with an expired certificate, which triggers a verification error.
```python
import requests

# This will fail if the site has SSL certificate issues
response = requests.get('https://expired.badssl.com/')
print(response.text)
```
The requests.get() call targets a site with a known bad certificate, which triggers the library’s default security check and causes the connection to fail. The following code shows how you can manage this verification process for specific cases.
```python
import requests

# Disable verification (only use when necessary)
response = requests.get('https://expired.badssl.com/', verify=False)
print("Warning: SSL verification disabled")
print(response.status_code)
```
While disabling SSL verification is a security risk, you can bypass it for trusted servers by setting verify=False in your request. This is often necessary when dealing with internal development environments or legacy systems that use self-signed certificates.
- This option tells the `requests` library to ignore certificate errors and proceed with the connection.
- Use it with extreme caution, as it exposes your application to potential man-in-the-middle attacks.
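The stdlib equivalent, if you're using `urllib` instead of requests, is an `ssl.SSLContext`. The sketch below shows the default (verifying) context and how verification is switched off; the context is only constructed here, no connection is made:

```python
import ssl

# The default context verifies certificates and hostnames
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Equivalent of verify=False; use only for trusted internal servers.
# check_hostname must be disabled before verify_mode is relaxed.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Passing this context via `urllib.request.urlopen(url, context=ctx)` would then skip verification for that request.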
Real-world applications
Applying these techniques, you can move beyond theory to build practical tools for fetching data and monitoring websites.
Fetching weather data with the requests library
A practical example is using the requests library to fetch live weather data from a public API.
```python
import requests

api_key = "demo_key"  # Replace with your actual API key
city = "London"
url = f"https://api.openweathermap.org/data/2.5/weather?q={city}&appid={api_key}&units=metric"
response = requests.get(url)
weather_data = response.json()
print(f"Current temperature in {city}: {weather_data['main']['temp']}°C")
print(f"Weather condition: {weather_data['weather'][0]['description']}")
```
This code demonstrates how to fetch data from a web API. It dynamically constructs a request URL using an f-string, embedding variables like city and your api_key directly into the string. This keeps your code clean and readable when dealing with complex API endpoints.
- After sending the request with `requests.get()`, the `response.json()` method parses the JSON data into a Python dictionary.
- You can then access nested data points using standard dictionary keys, making it simple to extract the exact information you need from the API's response.
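API responses don't always contain the keys you expect; an error payload, for instance, typically has no 'main' section. Chaining `dict.get` with empty-dict defaults avoids a `KeyError`. This sketch uses hard-coded sample payloads (no network call) and a hypothetical `safe_temp` helper:

```python
# Sample payloads mimicking a success and an error response (hard-coded)
ok = {'main': {'temp': 18.5}, 'weather': [{'description': 'light rain'}]}
error = {'cod': '404', 'message': 'city not found'}

def safe_temp(weather_data):
    # .get() with a {} default lets the chain fail soft instead of raising KeyError
    return weather_data.get('main', {}).get('temp')

print(safe_temp(ok))     # 18.5
print(safe_temp(error))  # None
```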
Building a simple website monitoring system
By combining requests with the time module, you can create a simple script to monitor a list of websites for uptime and performance.
```python
import requests
import time
from datetime import datetime

websites = ["https://www.google.com", "https://www.github.com", "https://www.python.org"]

def check_website(url):
    try:
        start_time = time.time()
        response = requests.get(url, timeout=5)
        response_time = time.time() - start_time
        return response.status_code, response_time
    except requests.RequestException:
        return None, None

for site in websites:
    status_code, response_time = check_website(site)
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    if status_code:
        print(f"{timestamp} - {site}: Status {status_code}, Response time: {response_time:.2f}s")
    else:
        print(f"{timestamp} - {site}: DOWN")
```
This script systematically checks the status of each URL in the websites list. The check_website function is the core of the monitor; it times how long a GET request takes and captures the server's response.
- If a request fails for any reason, the `except` block catches the error and returns `None`, preventing the script from crashing.
- The main loop then prints a formatted, timestamped log, clearly indicating whether each site is up or down and how quickly it responded.
Get started with Replit
Turn what you've learned into a real tool. Describe your idea to Replit Agent, like “build a website status checker that pings URLs” or “create a currency converter using a public API.”
Replit Agent writes the code, tests for errors, and deploys your app from a simple prompt. Start building with Replit.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.