Mahmoud Abdelwahab

Serverless functions vs containers: CI/CD, database connections, cron jobs, and long-running tasks

Serverless platforms and container platforms both offer push-to-deploy workflows, managed infrastructure, and automatic scaling. The underlying execution models differ, which affects how you solve common problems: connecting to databases, running scheduled jobs, handling long-running tasks, and managing deployments.

This guide covers five problems that come up frequently when deploying applications. For each, we explain how serverless platforms handle it and how Railway's container-based approach handles it, so you can choose what fits your situation.

Both serverless and container platforms support GitHub integration with automatic deployments. The setup is similar; the differences are in what gets deployed and how it runs.

When you deploy your application on serverless platforms (e.g. Cloudflare, Vercel), your application framework is detected and the build is configured automatically.

At build time, application code is parsed and translated into the necessary infrastructure components, and server-side code is deployed as serverless functions: Cloudflare runs these as Workers, while Vercel deploys serverless functions backed by AWS under the hood.

Each deployment creates a new version of your functions. There’s also support for creating isolated environments for every pull request (preview environments/preview deployments), which makes testing and collaboration easier.

Railway also lets you connect a GitHub repo and deploy every time you push new code. The difference is that you deploy a persistent service rather than individual functions.

Railway uses Railpack, a custom builder that inspects your repository and generates a build plan without configuration. For a TypeScript project, it identifies package.json, installs dependencies, runs the build script, and starts your application.

Whether you deploy a Go API, a Python worker, a Rust binary, or a Node.js application, the platform handles detection automatically. You can also provide your own Dockerfile for full control over the build process, something often missing from serverless platforms.

GitHub push → Railpack build → Deploy → Live

Pull request environments (also known as preview environments or preview deployments) work similarly to serverless preview deployments. When enabled, Railway creates a complete environment for each PR that includes:

  • All services deployed from the PR branch
  • Databases and dependencies provisioned fresh
  • Unique URLs for each exposed service
  • Isolated private networking (services in one PR environment cannot communicate with services in another)

The environment is deleted automatically when the PR is merged or closed. There is no manual teardown step, no orphaned infrastructure.

Serverless fits well when:

  • Your application follows a request-response pattern with short-lived operations
  • Traffic is highly variable with periods of complete inactivity
  • You're building with frameworks that have built-in serverless support (Next.js, Nuxt, SvelteKit)
  • Your functions are stateless and don't need to share memory across requests

Serverless limitations to consider:

  • Execution time limits. Most platforms cap function execution between 30 seconds and 15 minutes. Long-running tasks require chunking or orchestration.
  • Memory limits. Functions typically cap at 4-10 GB of memory depending on the platform.
  • Cold starts. After periods of inactivity, the first request incurs initialization latency. This can range from 100ms to several seconds depending on runtime and bundle size.
  • No persistent connections. WebSockets, server-sent events, and long-polling don't work because connections terminate when the function ends.
  • Stateless execution. In-memory state doesn't persist reliably across requests. Global variables may reset unpredictably as instances scale up and down.
  • Bundle size constraints. Large dependencies increase cold start times and may exceed platform limits.

Containers fit well when:

  • Your application needs persistent connections (WebSockets, real-time updates, live dashboards)
  • You're running long-running tasks: data processing, media transcoding, ML inference, report generation
  • You need consistent latency without cold starts
  • Your application maintains in-memory state (caches, connection pools, session data)
  • You're deploying traditional servers (Express, Fastify, Hono, Django, Rails, etc.)
  • You need to run background workers, schedulers, or queue processors alongside your API
  • Your workload requires more memory or CPU than serverless platforms allow

If usage-based pricing is what draws you to serverless, Railway follows the same model: you're only billed for what you use.

Secure database connections involve two layers: keeping credentials out of source control and keeping database traffic off the public internet.

The approach is the same whether you're using serverless or containers: store credentials as environment variables and read them at runtime.

import { Pool } from "pg";

// Read the connection string from the environment rather than hardcoding it
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

export async function query(text: string, params?: any[]) {
  const result = await pool.query(text, params);
  return result.rows;
}

Every deployment platform provides a way to store secrets. You set DATABASE_URL in your platform's dashboard or configuration file, and your application reads it from the environment.

The key principles:

  • Never commit credentials to source control. Use .env files locally (and add them to .gitignore), but store production credentials in your platform's secrets management.
  • Use different credentials per environment. Your development, staging, and production databases should have separate credentials. Most platforms let you scope variables to specific environments.
  • Rotate credentials without code changes. When credentials are read from the environment, rotating them means updating the platform configuration, not deploying new code.
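
To make these principles harder to violate, you can fail fast at startup when a required variable is missing. A minimal sketch:

// Validate required configuration before serving traffic
const required = ["DATABASE_URL"];

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}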

Railway takes this further with reference variables. Rather than copying connection strings between services, you reference the source directly:

DATABASE_URL=${{Postgres.DATABASE_URL}}

The reference resolves at runtime. If you rotate credentials, change database instances, or rename services, the updated value propagates automatically to all services that reference it. This eliminates manual synchronization when your infrastructure changes.

Environment variables protect credentials, but your database traffic still needs a secure path. If your database accepts connections from the public internet, it's exposed to scanning, brute force attempts, and potential misconfiguration.

Private networking solves this by keeping database traffic entirely internal. Services communicate over an isolated network that never touches the public internet.

On serverless platforms, private networking typically requires additional configuration. You may need to set up VPC peering between your function's execution environment and your database provider, configure security groups and subnet routing, or use vendor-specific bindings.

On Railway, private networking works automatically. Services in the same project communicate over a private network using internal hostnames under railway.internal. The database is never exposed publicly unless you explicitly create a TCP proxy for external access (useful for connecting from local development or database GUIs).
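
In practice, the only difference in your application is the hostname in the connection string. An internal URL looks something like this (the service name and credentials are illustrative):

DATABASE_URL=postgresql://postgres:password@postgres.railway.internal:5432/railway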

┌─────────────────────────────────────────┐
│           Railway Project               │
│                                         │
│   ┌─────────┐       ┌──────────────┐    │
│   │   API   │──────▶│   Postgres   │    │
│   └─────────┘       └──────────────┘    │
│        │         private network        │
└────────┼────────────────────────────────┘
         │ public domain
         ▼
    external traffic

Your API receives a public domain for external traffic, while database connections stay internal. No VPC configuration, no security groups, no network setup. The combination of environment-based credentials and private networking keeps both the credentials and the traffic secure.

Scheduled jobs on serverless platforms face constraints that don't exist on traditional servers. Understanding these helps you decide whether to work around them or choose a different approach.

  • Execution time limits. Serverless platforms impose maximum execution times, typically ranging from 30 seconds to 15 minutes depending on the platform and plan. If your job exceeds the limit, it terminates mid-execution. This creates partial writes, data integrity issues, and the need for checkpointing logic.

For jobs that exceed these limits, you need to break work into chunks:

// Process in batches, track progress in database
export async function handler() {
  const lastProcessed = await getCheckpoint();
  const batch = await getNextBatch(lastProcessed, 1000);
  
  for (const item of batch) {
    await processItem(item);
    await updateCheckpoint(item.id);
  }
  
  // If more work remains, trigger another invocation
  if (batch.length === 1000) {
    await triggerNextBatch();
  }
}

This adds complexity. You're now managing checkpoints, chaining invocations, and handling partial failures across multiple runs.

  • Cold starts on scheduled runs. If your cron runs infrequently (daily, weekly), each run likely hits a cold start. A job scheduled for midnight might actually start executing at 00:00:03 or later depending on runtime initialization.

Container platforms run cron jobs as normal services triggered on a schedule. There is no timeout ceiling: a job runs until it completes, fails, or you stop it. And because you're working with a long-running process, there are no cold starts.

On Railway, cron configuration uses standard cron syntax. All schedules are evaluated in UTC. The minimum interval between runs is five minutes. Railway starts your service at the scheduled time, runs the start command, and expects the process to complete and exit.
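
For example (standard five-field cron expressions, evaluated in UTC):

0 0 * * *     # every day at midnight UTC
*/10 * * * *  # every 10 minutes
0 9 * * 1     # every Monday at 09:00 UTC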

Only one execution may be active at a time. If the previous run is still active at the next trigger time, the new run is skipped. This means your job must actually exit when it's done. Most skipped executions trace back to unclosed database connections, pending promises, or background work that was not awaited:

import { Client } from "pg";

async function runJob() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });

  try {
    await client.connect();
    await processWork(client);
  } finally {
    await client.end(); // Critical: close the connection so the process can exit
  }
}

runJob().catch((err) => {
  console.error(err);
  process.exit(1); // Exit non-zero so the failed run is visible in logs
});

Railway does not automatically retry failed runs. You control retry behavior in your code rather than working around platform-imposed retries.
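
If a job should retry transient failures, a small wrapper inside the job is enough. A minimal sketch with exponential backoff (attempt counts and delays are illustrative):

async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts) throw err; // out of attempts, surface the error
      // Back off 1s, 2s, 4s... before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (i - 1)));
    }
  }
  throw new Error("unreachable"); // satisfies TypeScript's return analysis
}

// Usage inside the cron job:
// await withRetry(() => processWork(client));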

If you're drawn to serverless cron because you only pay for what you use, Railway likewise charges only for the time your cron jobs actually run.

Both serverless and container platforms handle rollbacks similarly: every deployment creates an immutable snapshot, and you can restore any previous version with one click. The platform retains your deployment history, so rolling back doesn't require rebuilding or redeploying from source.

Deployment history:

  #47  ← current (live)
  #46
  #45  ← rollback target
  #44
  ...

Your code repository is the source of truth for versioning. Platforms deploy what you push. If you need to track which version is running, use git tags, commit SHAs, or inject build metadata into your application.
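
For example, a service can expose the commit it was built from for quick verification. This sketch assumes an Express-style app and Railway's RAILWAY_GIT_COMMIT_SHA variable (other platforms expose similar build metadata):

// Report which commit is currently running
app.get("/version", (req, res) => {
  res.json({ commit: process.env.RAILWAY_GIT_COMMIT_SHA ?? "unknown" });
});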

On Railway, you can add health checks that prevent bad deployments from receiving traffic.

Railway waits for a successful health check before routing traffic to a new deployment. If the check fails, the deployment is marked as failed and traffic continues to the previous version.
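
A health check can be as simple as verifying that critical dependencies respond. A sketch that reuses the pg pool from earlier and assumes an Express-style app:

app.get("/health", async (req, res) => {
  try {
    await pool.query("SELECT 1"); // confirm the database is reachable
    res.status(200).json({ status: "ok" });
  } catch {
    res.status(503).json({ status: "unhealthy" });
  }
});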

The real complexity in rollbacks isn't the application code. It's the database.

If a deployment runs a migration that alters a table schema, rolling back the code doesn't reverse the migration. Your previous code version may not be compatible with the new schema.

Strategies that work on any platform:

  • Write backward-compatible migrations. Add columns rather than rename them. Keep old columns until all code versions are updated.

    -- Good: backward compatible
    ALTER TABLE users ADD COLUMN email_verified boolean DEFAULT false;

    -- Risky: breaks old code immediately
    ALTER TABLE users RENAME COLUMN email TO email_address;

  • Test rollback scenarios in preview environments. Run migrations in an isolated environment, deploy the new code, then deliberately roll back to verify compatibility before merging to production.

Both approaches handle versioning well. The choice depends more on your overall platform choice than on rollback capabilities specifically.

Some workloads don't fit the serverless execution model: large file processing, video encoding, ML inference, batch data exports. These tasks need more time, more memory, or both.

This is a consequence of the serverless execution model and its resource constraints (execution time limits, CPU/RAM caps, and so on). There are workarounds:

  • Break work into chunks:
// Process file in chunks across multiple invocations
export async function handler(event: { fileKey: string; offset: number }) {
  const { fileKey, offset } = event;
  const chunkSize = 10_000;

  const chunk = await readChunk(fileKey, offset, chunkSize);
  await processChunk(chunk);

  if (chunk.length === chunkSize) {
    // More data remains, trigger next chunk
    await invokeNext({ fileKey, offset: offset + chunkSize });
  }
}
  • Use streaming where possible:
// Stream large file instead of loading into memory
import { createGunzip } from "zlib";
import { pipeline } from "stream/promises";

export async function handler() {
  const response = await fetchLargeFile();

  await pipeline(
    response.body,
    createGunzip(),
    // Final stage consumes the decompressed stream chunk by chunk
    async function (source) {
      for await (const chunk of source) {
        await processChunk(chunk);
      }
    }
  );
}
  • Offload to container-based batch services. Most cloud providers offer container or batch job services without the same time limits. This adds infrastructure complexity but removes the execution constraints.

Container platforms let you configure memory and CPU per service without artificial execution time limits. For example, on Railway, services can scale up to 32 vCPUs and 32 GB RAM, and tasks run until completion. If you need more resources, you can scale horizontally by deploying replicas.

// Process entire file in one go
import { processImage } from "./image-processing";

async function handleUpload(fileUrl: string) {
  // Download, process, upload - no chunking needed
  const input = await downloadFile(fileUrl);
  const output = await processImage(input, {
    resize: { width: 1920, height: 1080 },
    format: "webp",
    quality: 85,
  });
  await uploadResult(output);
}

This model supports workloads that serverless platforms cannot handle well:

  • Data processing: ETL jobs, large file imports/exports, analytics aggregation
  • Media processing: Video/audio transcoding, image resizing, thumbnail generation
  • Report generation: Large PDFs, financial reports, bulk exports
  • Infrastructure tasks: Backups, CI/CD steps, provisioning workflows
  • Billing and finance: Usage calculation, invoice generation, payment retries
  • User operations: Account deletion, data merging, stat recalculations

Railway also supports persistent volumes for workloads that need durable on-disk storage. This is essential for embedded databases, local caching, indexing, or search engines.
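
A service writes to its volume like any other directory. In this sketch the /data mount path is illustrative; you choose the path when attaching the volume:

import { mkdir, writeFile } from "fs/promises";

// Persist an index to the attached volume so it survives restarts and redeploys
async function saveIndex(index: object) {
  await mkdir("/data", { recursive: true });
  await writeFile("/data/index.json", JSON.stringify(index));
}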

For background processing, you can deploy a queue with something like BullMQ + Redis:

// worker.ts
import { Worker } from "bullmq";
import { connection } from "./redis";
import { processImage } from "./image-processing";

const worker = new Worker(
  "image-processing",
  async (job) => {
    const { fileUrl } = job.data;
    await processImage(fileUrl);
  },
  { connection, concurrency: 5 }
);

// api.ts - queue jobs instead of processing synchronously
import express from "express";
import { Queue } from "bullmq";
import { connection } from "./redis";

const app = express();
app.use(express.json());

const queue = new Queue("image-processing", { connection });

app.post("/upload", async (req, res) => {
  const { fileUrl } = req.body;
  await queue.add("process", { fileUrl });
  res.json({ status: "queued" });
});

app.listen(Number(process.env.PORT) || 3000);
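
Both files import a shared connection from ./redis. A minimal version of that module might look like this, assuming REDIS_URL comes from your Redis service's environment variables (BullMQ uses ioredis under the hood):

// redis.ts - shared Redis connection for the queue and the worker
import { Redis } from "ioredis";

export const connection = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null, // required by BullMQ's blocking workers
});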

This pattern gives you an immediate response to the client, retry semantics with backoff, concurrency control, and no time limits on processing.

Serverless and container platforms solve similar problems with different tradeoffs. Serverless excels at scale-to-zero economics and per-function scaling. Containers excel at predictable latency, persistent connections, and long-running tasks.

For many applications, the choice comes down to execution model: do your workloads fit the serverless constraints, or would you spend more time working around them than building features?

If you're evaluating options, Railway offers a container-based platform with the deployment simplicity of serverless: GitHub integration, automatic builds, preview environments, and managed databases. The execution model is different, but the developer experience is comparable.