How to run a Linux command in Python
Learn how to run Linux commands in Python. Explore different methods, tips, real-world applications, and how to debug common errors.
The ability to run Linux commands from Python helps automate system tasks and streamline workflows. Python provides powerful modules that execute shell commands directly within your scripts for greater flexibility and control.
In this article, we'll cover several techniques to execute commands, complete with practical tips for smooth implementation. We will also explore real-world applications and provide debugging advice to help you resolve common issues.
Using os.system() to execute a Linux command
```python
import os

exit_code = os.system('ls -l')
print(f"Command exited with code: {exit_code}")
```

Output:

```
total 20
-rw-r--r-- 1 user user 302 Oct 10 14:23 example.py
-rw-r--r-- 1 user user 1240 Oct 9 09:15 README.md
Command exited with code: 0
```
The os.system() function provides a simple method for executing a shell command by passing the command string to your system's shell. It's important to note that the function doesn't return the command's output. Instead, it returns the exit code, which signals whether the command was successful.
As you can see from the output, the result of ls -l is printed directly to the console. The final line confirms the exit code is 0, the standard indicator for a successful execution. If your goal is to capture the command's output as a string for further processing in your script, you'll need to use a different module.
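One subtlety worth knowing: on POSIX systems, the value `os.system()` returns is the raw wait status from the underlying call, not the exit code itself. A minimal sketch, assuming Python 3.9+ on Linux, shows how to decode it:

```python
import os

# os.system() runs the command through the shell and, on POSIX, returns
# a raw wait status with the exit code encoded in the high byte.
status = os.system('exit 3')
print(f"Raw wait status: {status}")

# os.waitstatus_to_exitcode() (Python 3.9+) decodes it back to the
# command's actual exit code.
print(f"Decoded exit code: {os.waitstatus_to_exitcode(status)}")
```

For simple success checks, comparing the return value to 0 still works, since a zero wait status always means a zero exit code.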
Basic subprocess techniques
To overcome the limitations of os.system(), the subprocess module provides several functions that give you greater control over command execution and its output.
Using subprocess.run() for better control
```python
import subprocess

result = subprocess.run(['ls', '-l'], capture_output=True, text=True)
print(result.stdout[:50])  # Print first 50 chars of output
```

Output:

```
total 20
-rw-r--r-- 1 user user 302 Oct 10 14:2
```
The subprocess.run() function is a modern and more powerful way to execute commands. Unlike os.system(), it gives you fine-grained control. Notice the command is passed as a list of strings, which is a safer practice that helps prevent security issues. The function returns a CompletedProcess object containing details about the execution.
- Setting `capture_output=True` tells Python to grab the command's output so you can use it in your script.
- Using `text=True` decodes that output into a regular string, making it ready for processing.
You can then access the captured output through the stdout attribute of the result object.
Using subprocess.Popen() for non-blocking execution
```python
import subprocess

process = subprocess.Popen(['echo', 'Hello from Python!'], stdout=subprocess.PIPE, text=True)
output, error = process.communicate()
print(output)
```

Output:

```
Hello from Python!
```
For tasks that shouldn't pause your script, subprocess.Popen() is the tool for the job. It launches a command in a new process and immediately lets your script continue, which is known as non-blocking execution.
- You can redirect the command's output by setting `stdout=subprocess.PIPE`.
- The `process.communicate()` method then waits for the command to finish and captures its output and error streams for you to use.
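Note that `communicate()` itself blocks until the command exits. To take advantage of the non-blocking launch, you can instead check on the process with `poll()` while doing other work. A minimal sketch, using `sleep` as a stand-in for a long-running command:

```python
import subprocess
import time

# Launch a command that takes a moment; Popen returns immediately
# instead of blocking until the command finishes.
process = subprocess.Popen(['sleep', '1'])

# poll() returns None while the process is still running, so the
# script can keep doing other work in the meantime.
while process.poll() is None:
    print("Still running, doing other work...")
    time.sleep(0.25)

print(f"Finished with exit code {process.returncode}")
```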
Using check_output() for direct result access
```python
import subprocess

output = subprocess.check_output(['uname', '-a'], text=True)
print(f"System info: {output.strip()}")
```

Output:

```
System info: Linux hostname 5.15.0-56-generic #62-Ubuntu SMP x86_64 GNU/Linux
```
When you just need a command’s output and nothing else, subprocess.check_output() is a convenient shortcut. It runs the command and returns its output directly as a string, provided you set text=True. This approach simplifies your code since you don't have to parse a larger result object.
- The main difference from `subprocess.run()` is its built-in error checking.
- If the command returns a non-zero exit code, signaling an error, `check_output()` will automatically raise a `CalledProcessError`, stopping your script unless you handle the exception.
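Because of that built-in error checking, it's worth wrapping `check_output()` in a `try...except` block whenever the command might fail. A short sketch:

```python
import subprocess

# check_output() raises CalledProcessError on any non-zero exit code,
# so handle the exception when failure is a possibility.
try:
    output = subprocess.check_output(['ls', '/nonexistent'],
                                     stderr=subprocess.STDOUT, text=True)
    print(output)
except subprocess.CalledProcessError as e:
    # e.output holds whatever the command printed before failing.
    print(f"Command failed with exit code {e.returncode}: {e.output.strip()}")
```

Passing `stderr=subprocess.STDOUT` folds the error stream into the captured output, so the exception carries the message too.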
Advanced command execution
Now that you can run commands and capture their output, you can master advanced techniques for handling errors, using shell features, and managing environments.
Capturing exit codes and handling errors
```python
import subprocess

try:
    subprocess.run(['ls', '/nonexistent'], check=True)
except subprocess.CalledProcessError as e:
    print(f"Command failed with exit code {e.returncode}")
```

Output:

```
Command failed with exit code 2
```
You can build more resilient scripts by actively handling command failures. The subprocess.run() function makes this straightforward with its check=True parameter. If a command returns a non-zero exit code, which signals an error, Python will automatically raise a CalledProcessError.
- Wrap your command in a `try...except` block to catch the `CalledProcessError` and prevent your script from crashing.
- The exception object itself contains useful information, allowing you to access details like the command's `returncode`.
Working with shell features and pipes
```python
import subprocess

cmd = "ps aux | grep python | wc -l"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(f"Number of Python processes: {result.stdout.strip()}")
```

Output:

```
Number of Python processes: 3
```
Sometimes you need to run a command that uses shell features, like pipes (|), to chain multiple operations together. The subprocess module handles this by letting you pass the entire command as a single string.
- Setting `shell=True` tells `subprocess.run()` to execute the command through the system's shell, which is what makes piping possible.
- This approach is powerful, but use it with caution. Executing commands with `shell=True` can introduce security vulnerabilities if you're using untrusted external input.
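If you want pipes without the risks of `shell=True`, you can also connect processes yourself by feeding one `Popen`'s stdout into the next one's stdin. A sketch of a simplified two-stage version of the pipeline above:

```python
import subprocess

# Build the pipeline ourselves: ps's stdout becomes grep's stdin.
# No shell is involved, so there is no injection risk.
ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
grep = subprocess.Popen(['grep', 'python'], stdin=ps.stdout,
                        stdout=subprocess.PIPE, text=True)
ps.stdout.close()  # allow ps to receive SIGPIPE if grep exits early
output, _ = grep.communicate()
print(f"Matching lines:\n{output}")
```

This is more verbose than a shell pipeline, but every argument stays a literal string, which is why the `subprocess` documentation recommends it for untrusted input.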
Setting environment variables for commands
```python
import subprocess
import os

env = os.environ.copy()
env['CUSTOM_VAR'] = 'Hello World'
result = subprocess.run('echo $CUSTOM_VAR', shell=True, env=env,
                        capture_output=True, text=True)
print(result.stdout.strip())
```

Output:

```
Hello World
```
You can run commands with custom environment variables, which is perfect for setting temporary values without altering your main script's environment. The subprocess.run() function handles this with its env parameter, which accepts a dictionary of variables.
- It's best practice to start by making a copy of the current environment with `os.environ.copy()`.
- You can then add or modify variables in that copy, such as setting `CUSTOM_VAR`.
- When you pass this dictionary using `env=env`, your command runs in that isolated environment.
Move faster with Replit
Replit is an AI-powered development platform that comes with all Python dependencies pre-installed, so you can skip setup and start coding instantly. This environment lets you move from learning individual techniques to building complete applications with Agent 4, which handles everything from writing code and connecting to APIs to deployment, all from a simple description.
Instead of piecing together commands like os.system() or subprocess.run(), you can describe the final tool you need. Agent 4 can take your idea and build a working product, such as:
- A system resource monitor that uses commands like `ps` and `top` to track and log process activity.
- An automated log analyzer that pipes `grep` and `wc` together to quickly count and report on specific error patterns.
- A simple backup utility that runs `rsync` to synchronize files and uses the exit code to verify completion.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
Even with the right tools, you might run into a few common roadblocks when executing Linux commands in Python.
- When you try to run a command that doesn't exist, Python's `subprocess` module raises a `FileNotFoundError`. This is different from a command failing, as it means the system couldn't even find the program to execute. You can catch this specific error using a `try...except` block to handle it gracefully, perhaps by informing the user or logging the issue.
- Commands don't just produce output; they can also generate errors. These messages are sent to a separate stream called standard error (`stderr`), while normal output goes to standard output (`stdout`). To debug effectively, you can capture this error stream with `subprocess.run()`. The resulting object has a `stderr` attribute that holds any error messages, giving you valuable insight into what went wrong.
- Using `shell=True` is convenient for shell features like pipes, but it opens the door to security risks called shell injection. If your script incorporates user input into a command, a malicious user could inject additional commands. The safest practice is to avoid `shell=True` and instead pass your command and its arguments as a list of strings to `subprocess.run()`. This method bypasses the shell, preventing unintended commands from running.
Handling command not found errors with subprocess
When you try to run a command that isn't installed or can't be found in your system's PATH, Python's subprocess module raises a FileNotFoundError. This isn't a command failure—it means the program itself is missing. See how this plays out in the code below.
```python
import subprocess

try:
    subprocess.run(['ffmpeg', '-i', 'video.mp4', 'audio.mp3'])
    print("Conversion completed successfully")
except Exception as e:
    print(f"An error occurred: {e}")
```
The generic except Exception block catches the error that occurs because ffmpeg isn't installed, but it obscures the cause. The code below refines this by checking for the command before running it, so it can give a precise and helpful message.
```python
import subprocess
import shutil

# Check if command exists before attempting to run it
if shutil.which('ffmpeg'):
    subprocess.run(['ffmpeg', '-i', 'video.mp4', 'audio.mp3'])
    print("Conversion completed successfully")
else:
    print("Error: ffmpeg command not found. Please install it first.")
```
A better way to handle missing commands is to check for them proactively. The shutil.which() function can verify if a program like ffmpeg is installed and available in the system's PATH before you try to execute it. This approach allows you to give a specific, helpful error message instead of just reacting to a crash. It's a robust way to handle dependencies on external tools that might not always be present on a user's system.
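If you prefer to react rather than pre-check, you can also catch `FileNotFoundError` directly. A minimal sketch, where the command name is a placeholder assumed not to exist on the system:

```python
import subprocess

# 'some-missing-tool' is a hypothetical command used for illustration;
# running it raises FileNotFoundError, not CalledProcessError, because
# the program itself can't be found in PATH.
try:
    subprocess.run(['some-missing-tool', '--version'])
except FileNotFoundError:
    print("Error: command not found. Please install it first.")
```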
Capturing and debugging error output
When a command fails, the exit code confirms the error but doesn't explain why. Error messages are sent to a separate stream called stderr, which isn't captured by default, leaving you to guess the root cause of the failure.
The code below demonstrates this issue. Notice how the ls command fails, but you only see the exit code, not the actual error message from the system.
```python
import subprocess

result = subprocess.run(['ls', '/nonexistent/directory'])
# We can't see the error message with this approach
print(f"Command completed with exit code: {result.returncode}")
```
The subprocess.run() function doesn't capture the error stream by default, so the reason for the failure remains hidden. See how a small adjustment to the function call reveals the specific error message from the system.
```python
import subprocess

result = subprocess.run(['ls', '/nonexistent/directory'],
                        capture_output=True, text=True)
print(f"Exit code: {result.returncode}")
print(f"Error message: {result.stderr}")
```
To see why a command failed, you need to capture its error messages. A non-zero exit code confirms a failure but doesn't explain it. By adding capture_output=True and text=True to your subprocess.run() call, you tell Python to grab the standard error stream. The specific error message is then available in the stderr attribute of the result object, making it much easier to debug what went wrong.
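You can also combine `check=True` with `capture_output=True`, in which case the raised `CalledProcessError` carries the error text itself. A short sketch:

```python
import subprocess

# With both check=True and capture_output=True, a failure raises
# CalledProcessError whose .stderr attribute already holds the message.
try:
    subprocess.run(['ls', '/nonexistent/directory'],
                   check=True, capture_output=True, text=True)
except subprocess.CalledProcessError as e:
    print(f"Exit code: {e.returncode}")
    print(f"Error message: {e.stderr.strip()}")
```

This keeps the happy path free of returncode checks while still surfacing the full diagnostic when something goes wrong.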
Avoiding shell injection vulnerabilities with subprocess.run()
When you use shell=True, the entire command string is passed directly to the system's shell for interpretation. If part of that string comes from user input, you're creating a shell injection vulnerability. The code below demonstrates this dangerous scenario.
```python
import subprocess

filename = input("Enter filename to read: ")
# Vulnerable to injection if user enters something like "; rm -rf *"
subprocess.run(f"cat {filename}", shell=True)
```
The f-string combines user input with the command before the shell sees it. A malicious entry can therefore include extra commands. The code below shows how to pass arguments safely, preventing this security risk.
```python
import subprocess

filename = input("Enter filename to read: ")
# Safe from injection - arguments passed as list
subprocess.run(["cat", filename])
```
The safest way to run commands with user input is to pass the command and its arguments as a list of strings, like ["cat", filename]. This approach avoids the shell entirely. Python passes the filename directly to the cat command as a single argument, preventing it from being interpreted as a separate, potentially malicious command. Always use this list-based method when your command includes any external input to prevent shell injection vulnerabilities.
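When `shell=True` is genuinely unavoidable, say, because you need a pipe, the standard library's `shlex.quote()` can escape user input so the shell treats it as one literal argument. A sketch using a deliberately hostile input:

```python
import shlex
import subprocess

# A hostile input for demonstration: without quoting, the shell would
# treat the ';' as a command separator and run what follows.
filename = "; rm -rf *"

# shlex.quote() wraps the value so the shell sees a single literal
# argument instead of interpreting the metacharacters.
safe = shlex.quote(filename)
result = subprocess.run(f"printf '%s' {safe}", shell=True,
                        capture_output=True, text=True)
print(result.stdout)
```

Even so, treat quoting as a last resort; the list-of-arguments form remains the simpler and safer default.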
Real-world applications
Now that you can run commands safely and debug errors, you can build powerful scripts to automate real-world tasks.
Using subprocess for automated log analysis
You can use the subprocess module to run powerful Linux utilities like grep, allowing you to quickly search and analyze log files directly from your Python script.
```python
import subprocess

# Count occurrences of "ERROR" in a log file
grep_cmd = subprocess.run(
    ['grep', '-c', 'ERROR', '/var/log/syslog'],
    capture_output=True, text=True
)
error_count = grep_cmd.stdout.strip()
print(f"Found {error_count} errors in system log")
```
This script uses subprocess.run() to execute the Linux grep command, showing how you can integrate powerful shell utilities directly into your Python code.
- The command is passed as a list, telling `grep` to count (`-c`) all lines containing "ERROR" in the system log.
- Because `capture_output=True`, the function saves the command's output to the `stdout` attribute of the result object.
- Finally, `.strip()` cleans up this output string by removing any extra whitespace before the result is printed.
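One quirk to plan for: `grep` exits with code 1 when it finds no matches at all, which is not a real failure. A sketch that separates "zero matches" from genuine errors, using `/dev/null` as a stand-in for an empty log file:

```python
import subprocess

# grep exits 0 on a match, 1 when nothing matches, and 2 on a real
# error (e.g. an unreadable file), so treat exit code 1 as "zero
# matches" rather than a failure.
result = subprocess.run(['grep', '-c', 'ERROR', '/dev/null'],
                        capture_output=True, text=True)
if result.returncode == 2:
    print(f"grep failed: {result.stderr.strip()}")
else:
    print(f"Found {result.stdout.strip()} errors")
```

This is one reason to avoid blindly passing `check=True` when wrapping `grep`: a clean log would raise an exception.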
Automating system updates with subprocess
You can also use the subprocess module to create scripts that handle routine system maintenance, such as keeping your software packages up to date.
```python
import subprocess

def update_system():
    # Update package lists
    print("Updating package lists...")
    subprocess.run(['sudo', 'apt', 'update'], check=True)

    # Get list of upgradable packages
    upgradable = subprocess.run(
        ['apt', 'list', '--upgradable'],
        capture_output=True, text=True
    )

    # Count upgradable packages
    count = len(upgradable.stdout.splitlines()) - 1  # Subtract header line
    print(f"Found {count} packages that can be upgraded")

update_system()
```
This script automates system maintenance by running common apt commands, showing how you can chain together different operations for a complete workflow.
- The function first executes `sudo apt update` to refresh the package index. The `check=True` parameter is a safeguard that stops the script if this initial step fails.
- It then runs `apt list --upgradable` and captures the text output to determine which packages have pending updates.
- Finally, it calculates the number of upgradable packages by counting the lines of output, subtracting one to ignore the list's header.
Get started with Replit
Turn what you've learned into a real tool. Give Replit Agent a prompt like “build a dashboard that tracks CPU usage with top” or “create a script that scans logs for errors with grep”.
Replit Agent will write the code, test for errors, and deploy your application directly from your description. Start building with Replit and turn your concept into a functional app in minutes.
Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.


