Manually Optimize Deployments on Railway
Railway applies a number of built-in optimizations to make the average build as fast as possible. However, there are times when you want to deploy an image containing as little as possible to speed up your deployments.
In this article, we will compare deploying with Nixpacks, a custom Dockerfile, and a pre-built image, explore how to optimize a Dockerfile for a smaller image size, and finally use GitHub Actions to automate our deployments on Railway.
- Basic application (All templates will be provided).
- Basic understanding of Docker and Dockerfiles.
- Familiarity with GitHub and GitHub Actions.
- Basic understanding of Railway’s deployment process.
- Deploying with Nixpacks
- Deploying with a Custom Dockerfile
- Tips for Optimizing Dockerfiles for Smaller Image Size
- Deploying Directly from a Pre-built Image
- Setting Up GitHub Actions for CI/CD
In this section, we'll deploy a simple blog application API built with FastAPI using Railway’s default builder, Nixpacks, and measure the build and deploy times. You can find the template for the app here.
In our blog application template, you will notice a file called `railway.json`. Here’s what the file looks like:
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "NIXPACKS"
  },
  "deploy": {
    "startCommand": "uvicorn main:app --host 0.0.0.0 --port $PORT"
  }
}
Let’s break down the above configurations:
- `$schema`: Ensures that the file follows Railway’s configuration standards.
- `builder`: Specifies Nixpacks as the build system.
- `startCommand`: Defines the command to start the app. Here, it uses Uvicorn to run the FastAPI app on the port provided by Railway.
To deploy the template:
- Fork the repository: Fork the template from GitHub.
- Link to Railway: Create a new Railway project and connect it to your forked GitHub repo.
- Hit Deploy: Click Deploy, and Railway will automatically use Nixpacks to build and package the app.
During deployment, Nixpacks managed everything, from installing dependencies to creating a deployable image. The process took 1 minute and 27 seconds—55 seconds for the build and 32 seconds for deployment. While this is manageable for small apps, larger applications may take more time. In the next sections, we’ll explore alternatives like Dockerfiles and pre-built images.
Nixpacks deployment time in Railway
While Nixpacks simplifies deployment, using a custom Dockerfile offers more control over your build process. Dockerfiles allow you to optimize images for smaller sizes, faster builds, and improved efficiency—especially for complex apps that require custom configurations or dependencies.
- Optimize for Speed: Minimize unnecessary steps and dependencies to improve build times.
- Control the Environment: Specify the exact OS, runtime, and libraries your app needs.
- Ensure Consistency: Use the same Dockerfile locally, in CI, and on production to avoid "it works on my machine" issues.
In our blog application template, you will notice a file called `Dockerfile`. Here’s what the file looks like:
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set the working directory in the container
WORKDIR /app
# Copy the requirements.txt file to the working directory
COPY requirements.txt .
# Install the Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code to the working directory
COPY . .
# Expose the port FastAPI will run on
EXPOSE $PORT
# Define the command to run your FastAPI application
# (shell form is used so the Railway-provided $PORT is expanded at runtime)
CMD uvicorn main:app --host 0.0.0.0 --port $PORT
Let’s break down the above configurations:
- FROM python:3.11-slim: Uses a lightweight Python 3.11 image to minimize the size of the container.
- WORKDIR /app: Defines the directory in the container where all subsequent operations will be performed.
- COPY requirements.txt .: Copies the `requirements.txt` file into the container.
- RUN pip install --no-cache-dir -r requirements.txt: Installs the Python dependencies, using the `--no-cache-dir` flag to prevent caching and keep the image size small.
- COPY . .: Copies the rest of the application code into the container.
- EXPOSE $PORT: Exposes the dynamic port on which the FastAPI app will run.
- CMD: Defines the command to run the FastAPI app using Uvicorn, binding it to `0.0.0.0` and using the Railway-provided port.
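Railway detects a `Dockerfile` at the root of the repository automatically, so no extra configuration is strictly required. If you prefer to be explicit, the `railway.json` from earlier can point at the Dockerfile builder instead. This is a sketch based on Railway’s config-as-code schema, so double-check the field names (`"builder": "DOCKERFILE"`, `dockerfilePath`) against the current docs:
{
  "$schema": "https://railway.app/railway.schema.json",
  "build": {
    "builder": "DOCKERFILE",
    "dockerfilePath": "Dockerfile"
  },
  "deploy": {
    "startCommand": "uvicorn main:app --host 0.0.0.0 --port $PORT"
  }
}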
To deploy the template:
- Fork the repository: Fork the template from GitHub.
- Link to Railway: Create a new Railway project and connect it to your forked GitHub repo.
- Hit Deploy: Click Deploy, and Railway will automatically detect the Dockerfile and use it to build and package the app.
During the deployment of the template, our custom Dockerfile handled everything, and the entire process from initialization to deployment took a total of 15 seconds. The build took 11 seconds, and the deployment took 4 seconds. This is significantly faster and a huge improvement compared to the Nixpacks builder.
Dockerfile deployment time in Railway
When it comes to Dockerfiles, a little optimization can go a long way. Smaller image sizes not only make your deployments faster but also minimize bandwidth and storage costs.
In this section, we will discuss some effective strategies to optimize your Dockerfile for a leaner build.
- Use a Smaller Base Image: Choosing a minimal base image, such as `alpine` or `slim`, can significantly reduce the overall size of your Docker image.
- Use Multi-Stage Builds: Multi-stage builds allow you to separate the build environment from the runtime environment, ensuring that only the necessary artifacts are included in the final image, which helps keep it lightweight (see the sketch after this list).
- Minimize Layers: Reducing the number of layers in your image can lead to a smaller size. Combine multiple commands into a single `RUN` instruction whenever possible.
- Use a `.dockerignore` File: Similar to `.gitignore`, a `.dockerignore` file prevents unnecessary files from being included in the build context, helping to keep the image size down and improving build performance.
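To make the multi-stage idea concrete, here is a hedged sketch of how the FastAPI Dockerfile from earlier could be split into a build stage and a slimmer runtime stage. Treat it as an illustration rather than a drop-in replacement for the template’s Dockerfile; the `builder` stage name and the `/install` prefix are arbitrary choices for this example.
# --- Build stage: install dependencies in an isolated layer ---
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Install dependencies into a separate prefix so only they are copied forward
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt
# --- Runtime stage: copy only what the app needs to run ---
FROM python:3.11-slim
WORKDIR /app
# Bring in the installed packages (and the uvicorn binary) from the build stage
COPY --from=builder /install /usr/local
# Copy the application code last so code changes don't invalidate the dependency layer
COPY . .
CMD uvicorn main:app --host 0.0.0.0 --port $PORT
A matching `.dockerignore` keeps local clutter like the following out of the build context, which also speeds up the `COPY . .` step:
.git
.github/
.venv/
__pycache__/
*.pyc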
Using pre-built images is one of the fastest ways to deploy your application, as it completely bypasses the need for building during the deployment process. Instead of building the application on each deployment, you can pre-build the Docker image and store it in Docker Hub. Railway can then pull the image directly from there, resulting in significantly faster deployment times.
- Faster Deployments: By skipping the build process, deployments are much faster, often taking just a few seconds to pull the pre-built image.
- Consistency: Pre-built images ensure that you are deploying the exact same image across different environments, eliminating discrepancies that may occur due to different build environments.
- Simplified CI/CD: Since the image is pre-built, you can streamline your CI/CD pipelines to focus on testing and deployment rather than building.
To create your own pre-built image, follow these steps:
1. Clone Repository: First, clone the template repository to your local machine.
2. Build and Push Image: Navigate to the directory of the cloned template. Then, build and push the image using Docker, as shown below.
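For step 2, the commands look roughly like the following. The Docker Hub username is a placeholder you should replace with your own; the `railblog-docker` image name matches the one used in the GitHub Actions workflow later in this article.
# Build the image locally (run from the template's root directory)
docker build -t <your-dockerhub-username>/railblog-docker:latest .
# Authenticate with Docker Hub, then push the image
docker login
docker push <your-dockerhub-username>/railblog-docker:latest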
Once the image is successfully pushed, it will be available in your Docker Hub account, and you can go ahead to deploy the image on Railway.
To deploy the image:
- Link to Railway: Create a new Railway project and connect it to your image on Docker Hub. Railway docs for reference.
- Hit Deploy: Click Deploy, and Railway will now provision a new service for your project based on the specified Docker image.
During the deployment of the template, the entire process, from initialization to deployment, took a total of 6 seconds. This is significantly faster and a huge improvement compared to using the Nixpacks and Dockerfile builders.
Image deployment time
In previous sections, we compared using Nixpacks, a custom Dockerfile, and pre-built images for deploying on Railway. The fastest method was using pre-built images. In this section, I will show you how to automate the entire process in our CI/CD pipeline using GitHub and GitHub Actions.
Here's a breakdown of how our CI/CD pipeline will work, we will be creating two workflows:
- Pre-built image builder and publisher workflow
- Railway image deployment workflow
Let's walk through the setup of these workflows.
We will create a GitHub Actions workflow file named `docker-image.yml` inside the `.github/workflows/` directory. This file will handle building the Docker image and pushing it to the container registry.
Here’s what your workflow should look like:
name: Build and Push Docker Image
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Checkout the code
      - name: Checkout code
        uses: actions/checkout@v3
      # Log in to the Docker registry
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      # Build the Docker image
      - name: Build the Docker image
        run: docker build -t railblog-docker:${{ github.sha }} .
      # Tag the Docker image
      - name: Tag the Docker image
        run: docker tag railblog-docker:${{ github.sha }} ${{ secrets.DOCKER_USERNAME }}/railblog-docker:latest
      # Push the Docker image to Docker Hub
      - name: Push the Docker image
        run: docker push ${{ secrets.DOCKER_USERNAME }}/railblog-docker:latest
Let’s break down the workflow:
- The workflow is triggered by a push to the `main` branch.
- We use the `actions/checkout` action to pull the latest code.
- We log in to Docker Hub using credentials stored securely as GitHub secrets.
- The image is built and tagged with the commit SHA, re-tagged as `latest`, and the `latest` tag is pushed to Docker Hub.
Make sure to add `DOCKER_USERNAME` and `DOCKER_PASSWORD` to your GitHub secrets.
Next, we will create a second GitHub Actions workflow file named `railway-deploy.yml` inside the `.github/workflows/` directory. This file will handle the deployment of our pre-built image.
Here’s what your workflow should look like:
name: Deploy to Railway
on:
  workflow_run:
    workflows: ["Build and Push Docker Image"]
    types:
      - completed
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Install Railway CLI
        run: npm install -g @railway/cli
      - name: Link Railway project and deploy
        env:
          RAILWAY_API_TOKEN: ${{ secrets.RAILWAY_API_TOKEN }}
          RAILWAY_SERVICE_ID: ${{ secrets.RAILWAY_SERVICE_ID }}
          RAILWAY_PROJECT_ID: ${{ secrets.RAILWAY_PROJECT_ID }}
          RAILWAY_ENVIRONMENT_ID: ${{ secrets.RAILWAY_ENVIRONMENT_ID }}
        run: |
          railway link --service=$RAILWAY_SERVICE_ID --project-id=$RAILWAY_PROJECT_ID --environment=$RAILWAY_ENVIRONMENT_ID
          railway redeploy --yes
Let’s break down the workflow:
- Trigger: The workflow is triggered by the completion of the Build and Push Docker Image workflow.
- Checkout Code: We use the `actions/checkout` action to pull the latest code from the repository.
- Set up Node.js: We set up Node.js 20, which is needed to install and run the Railway CLI via npm.
- Install Railway CLI: The Railway CLI is installed globally, which is necessary for the deployment steps.
- Link Railway Project: Using environment variables stored securely in GitHub Secrets, we link the repository to the specified Railway project, service, and environment.
- Deploy: The `railway redeploy --yes` command is executed to deploy the latest changes to Railway.
Make sure to add `RAILWAY_API_TOKEN`, `RAILWAY_SERVICE_ID`, `RAILWAY_PROJECT_ID`, and `RAILWAY_ENVIRONMENT_ID` to your GitHub secrets.
Here’s what your GitHub secrets should look like:
View of GitHub secrets
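If you prefer the terminal, the same secrets can be added with the GitHub CLI. This assumes `gh` is installed and authenticated against your fork; the values in angle brackets are placeholders to fill in.
# Railway secrets used by the deploy workflow
gh secret set RAILWAY_API_TOKEN --body "<your-railway-api-token>"
gh secret set RAILWAY_SERVICE_ID --body "<your-service-id>"
gh secret set RAILWAY_PROJECT_ID --body "<your-project-id>"
gh secret set RAILWAY_ENVIRONMENT_ID --body "<your-environment-id>"
# Docker Hub credentials used by the build workflow
gh secret set DOCKER_USERNAME --body "<your-dockerhub-username>"
gh secret set DOCKER_PASSWORD --body "<your-dockerhub-password-or-token>"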
Push changes to the `main` branch to trigger the GitHub Actions workflows, then monitor the Actions tab for successful Docker image builds and Railway deployments.
Here’s the result of the pre-built image builder and publisher Workflow:
Workflow run time using pre-built image
Here’s the result of the Railway image deployment Workflow:
Workflow run time using Railway image deployment
In this article, we explored effective strategies to speed up deployments on Railway. We covered how to utilize Nixpacks, create custom Dockerfiles, and leverage pre-built images for quicker deployment times. Additionally, we set up GitHub Actions for CI/CD, which ensures a streamlined workflow. By implementing these techniques, you can enhance your development experience, minimize wait times, and efficiently manage complex applications.