How to run a bash script in Python

Learn how to run bash scripts in Python. Explore different methods, tips, real-world applications, and common error debugging.

Published on: Tue, Apr 21, 2026
Updated on: Wed, Apr 22, 2026
The Replit Team

You can combine Python's high-level logic with powerful shell commands by running Bash scripts directly from your code, a common pattern for automation and system administration tasks.

You'll find several techniques to execute scripts, complete with practical tips and real-world applications. We also provide debugging advice to help you master this powerful integration.

Using subprocess.run() for basic execution

import subprocess
result = subprocess.run(["bash", "hello.sh"])
print(f"Return code: {result.returncode}")

--OUTPUT--
Hello, World!
Return code: 0

The subprocess.run() function is the modern, recommended way to execute external commands. It's a blocking call, so your Python script will wait for the shell script to finish before continuing. This function provides a straightforward approach to running scripts and handling their output.

  • The command and its arguments are passed as a list, like ["bash", "hello.sh"]. This is safer than a single string and helps prevent shell injection attacks.
  • The function returns a CompletedProcess object. You can inspect its returncode attribute to confirm a successful execution, which is conventionally a 0.

Common methods to execute bash scripts

While subprocess.run() is the modern standard, other functions like os.system() and more specialized subprocess methods offer unique advantages for specific scripting needs.

Using os.system() for simple commands

import os
exit_code = os.system("bash hello.sh")
print(f"Exit status: {exit_code}")

--OUTPUT--
Hello, World!
Exit status: 0

The os.system() function offers a straightforward way to run shell commands. It passes the command as a single string directly to a subshell for execution. While it's quick for basic tasks, it provides less control and security than the more modern subprocess.run().

  • A key difference is that the script's output prints directly to the console; you can't capture it in a Python variable.
  • The function only returns the command's exit status, not a comprehensive object with more details.

Using subprocess.call() for return codes

import subprocess
return_code = subprocess.call(["bash", "hello.sh"])
print(f"Return code: {return_code}")

--OUTPUT--
Hello, World!
Return code: 0

Think of subprocess.call() as an older, simpler sibling of subprocess.run(). It executes your command and waits for it to finish, but it only returns the script's integer exit code. That makes it a straightforward choice when you only need to know whether a command succeeded, although new code should generally prefer subprocess.run().

  • Like subprocess.run(), it accepts a list of arguments, which is a secure way to pass commands.
  • Unlike run(), it returns a bare integer rather than a CompletedProcess object, so there is nothing to capture; the script's output prints directly to the console.

Using subprocess.Popen() for more control

import subprocess
process = subprocess.Popen(["bash", "hello.sh"])
process.wait()
print(f"Process completed with code: {process.returncode}")

--OUTPUT--
Hello, World!
Process completed with code: 0

For maximum flexibility, turn to subprocess.Popen(). This is a non-blocking call, meaning your Python script launches the shell script and continues executing without waiting for it to complete. You'll immediately get a Popen object that represents the running process, giving you more control.

  • You can tell your script to pause and wait for the shell command to finish by calling process.wait().
  • This method is perfect for managing long-running tasks or even running multiple processes in parallel.
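As a sketch of that parallel pattern, the snippet below launches two commands at once and then waits for both (it uses inline bash -c commands in place of separate script files, so it runs anywhere bash is available):

```python
import subprocess

# Launch both commands at once; Popen returns immediately without waiting.
commands = [["bash", "-c", "echo task-1"], ["bash", "-c", "echo task-2"]]
processes = [subprocess.Popen(cmd) for cmd in commands]

# Pause here until every process has finished, collecting the exit codes.
codes = [p.wait() for p in processes]
print(f"Exit codes: {codes}")
```

Both processes run concurrently between the Popen() calls and the wait() loop, which is where the time savings come from for long-running tasks.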

Advanced bash script execution techniques

Moving beyond simple execution, you can build more sophisticated integrations by capturing script output, passing custom environment variables, and handling errors with timeouts.

Capturing and processing script output

import subprocess
result = subprocess.run(["bash", "hello.sh"], capture_output=True, text=True)
output_lines = result.stdout.strip().split('\n')
print(f"Number of lines: {len(output_lines)}")
print(f"First line: {output_lines[0]}")

--OUTPUT--
Number of lines: 1
First line: Hello, World!

To make your script's output useful in Python, you need to capture it. Setting capture_output=True in subprocess.run() redirects the output from the console, letting you work with the data directly in your code instead of just seeing it printed.

  • It's also important to add text=True, which decodes the captured output into a standard Python string.
  • You can then access this string through the result.stdout attribute and process it just like any other text.

Passing environment variables to bash scripts

import subprocess
import os
env = os.environ.copy()
env["NAME"] = "Python"
result = subprocess.run(["bash", "greet.sh"], env=env, capture_output=True, text=True)
print(result.stdout)

--OUTPUT--
Hello, Python!

You can pass data from your Python script to a shell script using environment variables. This lets you dynamically configure the script's behavior without changing the script file itself. The process is straightforward and gives you precise control over the script's execution environment.

  • First, create a copy of the current environment with os.environ.copy().
  • Next, add or modify variables in the copied dictionary, like setting env["NAME"] = "Python".
  • Finally, pass this modified environment to subprocess.run() using the env parameter.

Implementing timeout and error handling

import subprocess
try:
    result = subprocess.run(["bash", "long_script.sh"], timeout=5, check=True)
    print("Script completed successfully")
except subprocess.TimeoutExpired:
    print("Script took too long to complete")
except subprocess.CalledProcessError:
    print("Script returned non-zero exit code")

--OUTPUT--
Script completed successfully

Robust scripts anticipate problems. You can build in safeguards by using the timeout and check parameters within subprocess.run(). This lets you handle scripts that run too long or fail unexpectedly.

  • The timeout parameter sets a time limit in seconds. If the script exceeds this, a TimeoutExpired exception is raised.
  • Setting check=True tells Python to raise a CalledProcessError if the script returns a non-zero exit code, which typically indicates an error.
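When you combine check=True with capture_output=True, the raised exception also carries the captured streams, which is handy for logging why a script failed. A minimal sketch, using an inline bash -c command that fails on purpose in place of a real script:

```python
import subprocess

try:
    # An inline command that writes to stderr and exits non-zero,
    # standing in for a failing script.
    subprocess.run(["bash", "-c", "echo oops >&2; exit 3"],
                   check=True, capture_output=True, text=True, timeout=5)
except subprocess.CalledProcessError as e:
    # The exception object carries the exit code and both captured streams.
    error_code, error_text = e.returncode, e.stderr.strip()
    print(f"Failed with code {error_code}: {error_text}")
```

Here e.returncode is 3 and e.stderr holds the "oops" message, so the except block can report exactly what went wrong.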

Move faster with Replit

Replit is an AI-powered development platform that comes with all Python dependencies pre-installed, so you can skip setup and start coding instantly. This lets you move from piecing together individual techniques, like using subprocess.run(), to building complete applications.

Instead of just combining commands, you can use Agent 4 to turn your idea into a working product. Describe the tool you want to build, and the Agent will handle the code, databases, and deployment. For example, you could create:

  • A system health dashboard that runs a Bash script to check disk usage and memory, then displays the captured output.
  • An automated deployment tool that executes a build script, passing a version number from your Python code as an environment variable.
  • A log file processor that runs a grep command via a shell script to find specific errors and formats the results.

Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.

Common errors and challenges

Running Bash scripts from Python can introduce subtle errors; here’s how to handle the most common ones.

Handling shell wildcards correctly with subprocess

Shell features like wildcards (e.g., *) can behave unexpectedly. The subprocess functions don't interpret these special characters by default, so a command like ls *.txt might fail because the shell isn't there to expand the asterisk.

To fix this, you can use the shell=True argument in your function call. This runs your command through a true shell, which correctly processes wildcards. Just be cautious, as this approach can create security vulnerabilities if you're using untrusted input.

Handling 'command not found' errors gracefully

A FileNotFoundError is another frequent hurdle. This error typically means the command you're trying to execute isn't in the system's PATH, so Python can't locate it.

You can manage this gracefully by wrapping your command in a try...except FileNotFoundError block. This lets your script catch the error and respond with a helpful message or fallback logic instead of crashing unexpectedly.

Capturing both stdout and stderr with capture_output=True

When a script fails, it often sends error messages to a separate stream called stderr. If you only capture standard output (stdout), you'll miss this vital debugging information.

Using capture_output=True with subprocess.run() is the solution. It captures both streams, allowing you to inspect them separately:

  • result.stdout contains the script's standard output.
  • result.stderr contains any error messages, giving you a complete picture of what happened.

Handling shell wildcards correctly with subprocess

When you pass a shell wildcard like * to subprocess, it's treated as a literal string, not a pattern to expand. This is because subprocess doesn't use a shell by default, which can lead to empty or incorrect results. Observe this behavior below.

import subprocess
result = subprocess.run(["ls", "*.txt"], capture_output=True, text=True)
print(result.stdout) # Often empty as wildcard isn't expanded

The *.txt argument is treated as a literal filename, not a pattern, because the command bypasses the shell. This causes ls to look for a file named "*.txt". The following example demonstrates the correct approach.

import subprocess
result = subprocess.run("ls *.txt", shell=True, capture_output=True, text=True)
print(result.stdout) # Shows all .txt files

To fix this, add the shell=True argument. This tells subprocess to run your command in an actual shell, which knows how to expand wildcards like * into a list of matching files. Without it, Python just passes "*.txt" as a literal filename, which is why the command fails. This approach is necessary for shell-specific features, but be cautious as it can introduce security risks if you're using untrusted input.
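If you would rather avoid shell=True entirely, one alternative is to expand the pattern in Python with the standard glob module and keep the safer argument-list form:

```python
import glob
import subprocess

# Let Python expand the wildcard instead of relying on a shell.
txt_files = glob.glob("*.txt")
if txt_files:
    # The expanded filenames are passed as a safe argument list.
    result = subprocess.run(["ls", "-l", *txt_files],
                            capture_output=True, text=True)
    print(result.stdout)
else:
    print("No .txt files found")
```

This keeps untrusted input out of a shell while still matching files by pattern; the trade-off is that other shell features (pipes, redirection) still require shell=True.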

Handling 'command not found' errors gracefully

You'll often encounter a FileNotFoundError when the command you're trying to execute isn't in the system's PATH. Python simply can't find the executable. The code below triggers this error by attempting to run a command that doesn't exist.

import subprocess
result = subprocess.run(["non_existent_command", "--version"])
print("Command executed successfully")

Since non_existent_command isn't a valid program, Python raises a FileNotFoundError and crashes. The following example shows how you can catch this error to prevent your script from halting unexpectedly and provide a helpful message instead.

import subprocess
try:
    result = subprocess.run(["non_existent_command", "--version"])
    print("Command executed successfully")
except FileNotFoundError:
    print("Command not found. Please check if it's installed.")

By wrapping the subprocess.run() call in a try...except FileNotFoundError block, you can gracefully handle cases where a command isn't installed. This structure prevents your script from crashing when it can't find an executable in the system's PATH. Instead, it runs the code in the except block, letting you provide a helpful message or fallback logic. It’s a crucial pattern for writing robust automation scripts that depend on external command-line tools.
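A complementary pattern is to probe for the executable up front with the standard library's shutil.which(), which returns the command's path, or None when it isn't on the PATH:

```python
import shutil
import subprocess

command = "non_existent_command"

# shutil.which() searches the PATH and returns None if the command is missing.
if shutil.which(command) is None:
    status = f"{command} is not installed; skipping."
else:
    subprocess.run([command, "--version"])
    status = f"{command} ran."
print(status)
```

Checking first lets you log or skip missing tools without relying on exception handling, though the try/except form remains necessary if the command could disappear between the check and the call.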

Capturing both stdout and stderr with capture_output=True

When a script fails, its error messages are sent to stderr, a separate stream from the standard output, stdout. If you only capture stdout, you'll miss crucial debugging information. The following code demonstrates this common pitfall, showing what happens.

import subprocess
result = subprocess.run(["bash", "error_script.sh"], stdout=subprocess.PIPE, text=True)
print(f"Output: {result.stdout}")

By specifying stdout=subprocess.PIPE, the code only captures the standard output stream. Any error messages sent to stderr bypass the result object entirely (result.stderr is None), leaving your debugging efforts blind. The next example demonstrates the correct approach.

import subprocess
result = subprocess.run(["bash", "error_script.sh"], capture_output=True, text=True)
print(f"Output: {result.stdout}")
print(f"Errors: {result.stderr}")

Using capture_output=True solves the problem by telling subprocess.run() to grab both output streams. This is essential for debugging, as error messages are often sent to a separate channel from standard output.

  • result.stdout holds the script’s regular output.
  • result.stderr contains any error messages.

This ensures you won't miss critical information when a script fails or behaves unexpectedly.

Real-world applications

Now that you can navigate common pitfalls, you can apply these techniques to build powerful, real-world automation.

Monitoring disk space with subprocess

You can use subprocess to run the df command, a common tool for checking disk usage, and then parse its output directly within your Python script.

import subprocess

result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True)
disk_info = result.stdout.split("\n")[1] # Get the line with disk info
print(f"Disk usage information: {disk_info}")

This code executes the df -h / command to check disk space. The subprocess.run() function runs the command, and two key arguments help you process its output:

  • capture_output=True grabs the output so you can work with it in your script instead of just seeing it on the screen.
  • text=True decodes that output into a standard string.

The command returns multiple lines, so you can use result.stdout.split('\n')[1] to isolate the one line containing the actual disk usage data.
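To go one step further, you can split that line into whitespace-separated fields and pull out just the usage percentage. The field positions below assume the typical df -h column layout (Filesystem, Size, Used, Avail, Use%, Mounted on), which can vary between systems:

```python
import subprocess

result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True)
# Second line holds the data; split it into whitespace-separated fields.
fields = result.stdout.split("\n")[1].split()
# Index 4 is the Use% column in the usual df -h layout.
use_percent = fields[4]
print(f"Root filesystem is {use_percent} full")
```

From here it's a short step to alerting when the percentage crosses a threshold, for example by stripping the % sign and comparing the integer.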

Building a multi-step deployment process

You can automate entire workflows, like a multi-step deployment, by running a sequence of shell commands and checking each one for success before proceeding.

import subprocess

steps = ["git pull origin main", "pip install -r requirements.txt"]
for step in steps:
    result = subprocess.run(step, shell=True)
    if result.returncode != 0:
        print(f"Deployment failed at: {step}")
        break
else:
    print("Deployment completed successfully!")

This code chains together shell commands to create an automated workflow. It uses a for...else loop, a neat Python feature that's perfect for this kind of task.

  • The script executes each command in the steps list sequentially.
  • If a command fails—indicated by a returncode other than 0—the loop immediately stops with a break.
  • The final else block only runs if the loop completes without any interruptions, signaling that every step succeeded.

Get started with Replit

Now, turn these techniques into a real tool. Describe what you want to build to Replit Agent, like “a dashboard that checks server health with a Bash script” or “an app that automates Git pulls.”

The Agent will then write the code, test for errors, and deploy your app automatically. Start building with Replit.

Get started free

Create and deploy websites, automations, internal tools, data pipelines and more in any programming language without setup, downloads or extra tools. All in a single cloud workspace with AI built in.
