Mahmoud Abdelwahab

Secure Cloud Hosting for Compliance: A Practical Guide for Startups and Regulated Industries

Compliance requirements shape infrastructure decisions. Teams building in healthcare, finance, and enterprise SaaS face a practical question: can managed cloud platforms satisfy regulatory frameworks, or does compliance require self-managed infrastructure? The answer depends on understanding what each framework actually requires, what responsibilities shift to your provider under a shared responsibility model, and what controls you retain regardless of where your workloads run.

This guide addresses the specific questions that arise when evaluating cloud hosts for regulated workloads. The focus is on mechanisms and requirements rather than product comparisons. Here are the questions this post addresses:

  • What should a startup look for in a cloud host to become SOC 2 compliant without self-managing servers?
  • What questions should you ask a hosting provider about incident response and breach notification for GDPR compliance?
  • What EU data residency options exist for a SaaS that needs regional hosting with autoscaling?
  • How do you encrypt secrets and customer data at rest and in transit across a multi-region PaaS?
  • How do you automate vulnerability scans and patching in CI/CD to stay compliant?
  • How do you produce tamper-proof audit logs for deploys and database queries?
  • Is dedicated tenancy necessary for ISO 27001 on managed runtimes?
  • Can a fintech MVP stay PCI-DSS compliant on serverless infrastructure?
  • How do you handle HIPAA when deploying containerized apps on a PaaS?
  • What are best practices for role-based access control on a serverless stack?

What a startup needs for SOC 2 on a managed platform

SOC 2 evaluates controls across five trust service criteria: security, availability, processing integrity, confidentiality, and privacy.

The audit examines whether controls exist and, for Type II reports, whether they operated effectively over a review period. When you deploy on a managed platform, some controls fall under your provider's scope and some remain yours.

| Provider scope | Your scope |
| --- | --- |
| Physical security | Application access control |
| Network segmentation | Secrets management practices |
| Employee access management | Code review processes |
| Incident response | Customer data handling |
| Platform change management | Application-level encryption |
| Infrastructure patching | Your change management |

A cloud host's SOC 2 report covers their infrastructure controls. Your auditor will want to see this report to understand the environment your application runs in. Request the full Type II report, not just the SOC 3 summary, since Type II includes the auditor's detailed testing results.

Platform capabilities to evaluate:

  • Audit logs that capture deployment events and access
  • Granular team permissions with role-based access control (RBAC)
  • Environment separation between staging and production
  • Encryption at rest meeting your auditor's requirements
  • MFA enforcement for team members
  • SSO integration for centralized identity management

The platform's own certification status matters, but your ability to configure and evidence your controls on top of it matters equally.

Request SOC 2 Type II from Railway’s Trust Center

Incident response and breach notification for GDPR

GDPR Article 33 requires data controllers to notify supervisory authorities within 72 hours of becoming aware of a personal data breach. Article 28 requires data processors to notify controllers without undue delay. When your cloud provider is a processor, their incident response timeline directly affects your ability to meet these obligations.

Questions to ask your provider:

  1. What is the committed timeframe for notifying customers of incidents affecting their data?
  2. How are breach notifications delivered and to whom?
  3. Does the commitment appear in the DPA or only in informal documentation?
  4. What information is included in breach notifications?

DPA checklist for GDPR Article 28 compliance:

  • Nature and purpose of processing specified
  • Data categories defined
  • Processor obligations regarding subprocessors documented
  • Audit rights included
  • Data deletion procedures specified
  • Subprocessor notification/consent process defined

Ask for the subprocessor list. The list should include each subprocessor's identity, location, and processing purpose.

Example Subprocessor list from Railway’s Trust Center

EU data residency

Data residency requirements emerge from GDPR, sector-specific regulations, and customer contracts. Some require that personal data not leave the EU; others require storage in specific member states.

Start by distinguishing between data residency and data sovereignty:

| Concept | Definition | Typical source |
| --- | --- | --- |
| Data residency | Where data is physically stored | GDPR, customer contracts |
| Data sovereignty | Which legal jurisdiction governs access | Government, defense contracts |

When evaluating a platform for EU data residency, examine five data flows:

Data flows to examine

For each data flow, verify:

1. Application data at rest

  • Platform offers EU regions
  • Deployments can be pinned to specific regions
  • Region selection is per-service (a single US database breaks residency)

2. Application data in transit

  • CDN routing prefers EU edge locations for request handling
  • Load balancers operate within region
  • DDoS mitigation does not route through US infrastructure

3. Backups and disaster recovery

  • Backup storage location is documented
  • Cross-region replication can be disabled or constrained
  • Point-in-time recovery stays within region

4. Logs and observability data

  • Logging infrastructure offers region selection
  • Metrics and traces stay within region
  • Error tracking does not export data externally

5. Subprocessor data flows

  • Subprocessor list includes locations
  • Transfer mechanisms documented (DPF, SCCs)
  • You can identify which subprocessors receive personal data

For strict EU-only requirements, you need a provider that can demonstrate all five data flows remain within the EU, or you need to document and accept specific exceptions with appropriate safeguards.
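
As a sketch of that verification, a configuration check can assert that every service region sits inside an EU allowlist. The region identifiers below are illustrative; substitute your platform's actual region names.

```python
# Hypothetical region identifiers; substitute your platform's actual names.
EU_REGIONS = {"europe-west4", "eu-central-1", "eu-west-1"}

def residency_violations(services):
    """Return services deployed outside the EU allowlist.

    A single non-EU service (for example, a US-hosted database) breaks
    residency for the whole system, so the check is per-service."""
    return sorted(name for name, region in services.items()
                  if region not in EU_REGIONS)
```

Running this against a service-to-region map surfaces exactly which component breaks residency.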

Encrypting data at rest and in transit

Encryption requirements appear in nearly every compliance framework. The implementation details determine whether you satisfy auditors and actually protect data.

Encryption in transit checklist:

  • TLS 1.2 minimum enforced (TLS 1.3 preferred)
  • TLS 1.0, 1.1, and SSL disabled
  • Failed TLS negotiation results in connection failure (no fallback)
  • Internal service-to-service traffic encrypted
  • Database connections use TLS
  • HTTPS-only enforced for public endpoints

Encryption at rest checklist:

  • AES-256 algorithm used
  • Application volumes encrypted
  • Database storage encrypted
  • Backups encrypted
  • Logs encrypted
  • Keys managed via KMS with access controls
  • Key rotation supported
  • Deleted data cryptographically erased
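
One of the in-transit controls, enforcing a TLS 1.2 floor with no fallback, can also be pinned in application code; a minimal sketch using Python's standard ssl module:

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Client-side TLS context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()
    # Refuse TLS 1.0/1.1 and SSL outright; TLS 1.3 is used when both ends support it.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Certificate verification stays on (the default), so a failed
    # negotiation is a hard connection error, not a silent downgrade.
    return ctx
```

A handshake below the floor now fails the connection instead of silently falling back.
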

Secrets management requirements:

| Requirement | Good | Bad |
| --- | --- | --- |
| Storage | Encrypted, access-controlled env vars or KMS | Committed to repo, baked into images |
| Access scoping | Per-service, per-environment | Shared across all services |
| Injection | Runtime injection as env vars | Build-time, visible in logs |
| Rotation | Supported without downtime | Requires full redeployment |

Secrets exposure points to verify:

  • Build logs (should not contain secrets)
  • Deployment logs (should not contain secrets)
  • Debugging interfaces (should not expose env vars)
  • Error messages (should not include credentials)
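
As a defensive layer on the application side, a logging filter can scrub likely credentials before records reach any log sink. The patterns below are illustrative, not exhaustive; extend them with the key formats your stack actually uses.

```python
import logging
import re

# Illustrative patterns only; add the credential formats your stack uses.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[=:]\s*\S+"),
    re.compile(r"sk_live_[A-Za-z0-9]+"),  # e.g. Stripe-style live keys
]

class RedactSecretsFilter(logging.Filter):
    """Scrub likely credentials from log records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern in SECRET_PATTERNS:
            msg = pattern.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True  # keep the (now redacted) record
```

Attach the filter to every handler so redaction happens regardless of where logs are shipped.
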

Automating vulnerability scans and patching

Vulnerability management has two components: identifying vulnerabilities and remediating them. Compliance frameworks require both, with evidence of timely remediation.

| Provider handles | You handle |
| --- | --- |
| Host OS patching | Application code |
| Container runtime | Your dependencies |
| Orchestration components | Custom container images |
| Provider-supplied base images | Third-party images you choose |
| Platform infrastructure | Your CI/CD pipeline security |

Dependency and container scanning tools:

| Tool | Type | Languages | Integration | Cost |
| --- | --- | --- | --- | --- |
| Snyk | Commercial | Multi-language | GitHub, GitLab, Bitbucket | Free tier + paid |
| Dependabot | Commercial | Multi-language | GitHub-native | Free |
| OWASP Dependency-Check | Open source | Multi-language | CLI, build plugins | Free |
| Trivy | Open source | Multi-language + containers + IaC | CLI, CI integrations | Free |
| Grype | Open source | Containers, SBOM | CLI | Free |

Here’s an example using dependency-review-action:

```yaml
# .github/workflows/dependency-review.yml
name: Dependency Review

on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Dependency Review
        uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
          deny-licenses: GPL-3.0, AGPL-3.0
```

Severity response policy:

| Severity | Action | Timeframe |
| --- | --- | --- |
| Critical | Block merge/deploy | Immediate |
| High | Block merge/deploy | Immediate |
| Medium | Warn, track | 30 days |
| Low | Track | 90 days |
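
A CI gate enforcing the block-on-high policy can be a few lines that parse scanner findings. The JSON shape here is a simplified, hypothetical scanner output, not any specific tool's format.

```python
# Severities that block a merge or deploy, per the policy table above.
BLOCKING = {"critical", "high"}

def gate(findings):
    """Return findings that violate the block-on-high policy; empty means pass.

    Expects scanner output shaped like [{"id": "CVE-...", "severity": "high"}, ...]
    (a simplified, hypothetical format)."""
    return [f for f in findings if str(f.get("severity", "")).lower() in BLOCKING]

def exit_code(findings):
    """Nonzero exit fails the CI job when blocking findings exist."""
    return 1 if gate(findings) else 0
```

Wire the exit code into your pipeline step so a critical or high finding fails the build, producing the timestamped evidence auditors ask for.
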

Evidence to retain for audits:

  • Scan results (what was detected)
  • Timestamps (when detected)
  • Remediation records (how and when fixed)
  • Coverage proof (all components scanned)

Questions to ask your provider about their patching:

  1. What is the SLA for critical vulnerability patches? (24-72 hours is reasonable)
  2. How are patches deployed? (Rolling updates requiring no customer action is ideal)
  3. Are patch events communicated to customers?
  4. Is there a security advisory feed or status page?

Tamper-proof audit logs

Audit logging serves two purposes: detecting unauthorized activity and providing evidence for compliance audits. Both require logs that are comprehensive, timely, and trustworthy.

Platform-layer events to log:

| Category | Events |
| --- | --- |
| Deployments | Who, what, when, source commit, success/failure |
| Configuration | Env var changes, scaling, domains, networking |
| Access control | Team changes, role assignments, API key lifecycle |
| Authentication | Logins, failures, MFA events, SSO assertions |
| Resources | Project/DB creation, deletion, volume changes |
| Support | Provider support access to your environment |

Application-layer events to log:

| Category | Events |
| --- | --- |
| Auth | User logins, permission checks, privilege changes |
| Data access | Reads/writes to sensitive records |
| Business actions | Transactions, PII exports, admin operations |
| Errors | Application errors (may indicate attacks) |

Database query logging requires balancing coverage against storage and performance costs. The practical approach is to always log high-risk queries while sampling routine traffic.

Always log:

  • Slow queries
  • Errors
  • Queries on PHI or sensitive tables
  • Admin connections
  • DDL statements
  • DELETE/UPDATE on sensitive tables

Consider sampling:

  • Normal SELECT queries
  • Read-only connections
  • High-volume endpoints

The goal is not comprehensive query capture but the ability to reconstruct who accessed what data and when.
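
That policy can be expressed as a small decision function in the application's query instrumentation. The thresholds, sample rate, and table names below are illustrative.

```python
import random

SENSITIVE_TABLES = {"patients", "payments"}  # hypothetical sensitive tables
SLOW_MS = 500        # slow-query threshold in milliseconds (illustrative)
SAMPLE_RATE = 0.01   # log roughly 1% of routine reads

def should_log(query, duration_ms, is_error=False):
    """Always log high-risk queries; sample routine SELECT traffic."""
    q = query.strip().lower()
    if is_error or duration_ms >= SLOW_MS:
        return True
    if q.startswith(("create", "alter", "drop", "truncate")):
        return True  # DDL statements
    if q.startswith(("delete", "update")):
        return True  # writes that may touch sensitive rows
    if any(table in q for table in SENSITIVE_TABLES):
        return True  # anything referencing sensitive tables
    return random.random() < SAMPLE_RATE  # routine reads: sample
```

The function guarantees the high-risk categories are always captured while keeping routine read volume bounded.
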

Log integrity requirements:

  • Logs stored separately from systems being logged
  • Developers can read but not delete audit logs
  • Retention meets compliance requirements (often 1 year)
  • Log deletion can be disabled during retention window
  • Logs hashed or signed for integrity verification
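
A minimal way to make application-held logs tamper-evident is a hash chain, where each entry's hash covers the previous entry's hash; verification then detects any edit or deletion. A sketch:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(chain, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify(chain):
    """Recompute every link; any edited or removed entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

Production systems typically anchor the latest hash in separate storage (or sign it) so an attacker cannot simply rebuild the chain, but the chaining mechanism is the core idea.
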

Platform logs and application logs should flow to a centralized SIEM (Security Information and Event Management) or log store. Many managed platforms include built-in log storage and viewing, which may be sufficient for early-stage compliance needs. As you scale or require longer retention, cross-source querying, or advanced alerting, export logs to a dedicated SIEM like Datadog, Splunk, or Elastic. The key is having a single location where you can query across platform and application logs when investigating incidents and providing audit evidence.

High-priority alerts to configure:

  • Failed login attempts above threshold
  • Deployment outside business hours
  • Permission changes
  • New API key creation
  • Access from unusual locations
  • Bulk data exports
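
The first alert above can be sketched as a sliding-window counter per user; the window and threshold values are illustrative.

```python
from collections import deque
import time

WINDOW_SECONDS = 300  # 5-minute sliding window (illustrative)
THRESHOLD = 5         # alert once 5 failures land inside the window

class FailedLoginMonitor:
    """Tracks failed logins per user and flags threshold crossings."""

    def __init__(self):
        self._failures = {}  # user -> deque of failure timestamps

    def record_failure(self, user, now=None):
        """Record one failure; return True when the user is at/over threshold."""
        now = time.time() if now is None else now
        window = self._failures.setdefault(user, deque())
        window.append(now)
        # Drop events that have aged out of the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= THRESHOLD
```

In practice the True branch would page on-call or open a ticket; the windowing logic is the part worth getting right.
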

Dedicated tenancy and ISO 27001

ISO 27001 is a management system standard. It requires organizations to identify information security risks and implement appropriate controls. It does not prescribe specific technical implementations such as dedicated tenancy.

Shared tenancy with audited isolation is sufficient for most compliance scenarios, including ISO 27001 certifications, standard SOC 2, typical SaaS workloads, and startup to mid-market companies.

You may need dedicated tenancy when:

  • Customer contracts explicitly require it
  • Government or defense workloads
  • Highly regulated sectors with specific isolation mandates
  • Maximum isolation is a business requirement

The key is documenting your decision and risk assessment. ISO 27001 requires you to evaluate and justify your tenancy choice, not to pick a particular answer.

What to verify about shared tenancy isolation:

  • Workloads run in isolated containers or VMs
  • Separate network namespaces
  • Separate storage volumes
  • Separate process spaces
  • Provider's SOC 2 describes and audits isolation mechanisms

PCI-DSS on serverless infrastructure

PCI-DSS (Payment Card Industry Data Security Standard) compliance depends on how you handle cardholder data. The scope of your compliance obligations narrows dramatically if you never store, process, or transmit card numbers directly.

PCI scope comparison:

| Integration type | SAQ type | Why this SAQ applies | Effort level | Serverless OK? |
| --- | --- | --- | --- | --- |
| Redirect to hosted payment page | SAQ A | Customer enters card data only on the PSP’s domain; your environment never touches the card data and cannot modify the payment page. | Very low (short questionnaire) | Yes |
| Embedded iframe (hosted fields) | SAQ A-EP | Card data is entered into PSP-hosted elements, but your website still loads scripts and can influence the customer’s payment experience. | Moderate (more controls required) | Yes |
| Direct API with tokenization (JS library creates token in browser) | SAQ A-EP | Card data bypasses your servers, but your frontend handles the payment page context, so a compromise could alter form behavior. | Moderate (similar burden to iframe) | Yes |
| Direct card handling (your server receives the PAN before sending to the PSP) | SAQ D / ROC | Your systems store, handle, or transmit raw card data; full PCI scope applies. | Very high (months-long audit) | Depends on implementation and controls |

For reduced-scope (tokenized) compliance, verify:

  • Card numbers never touch your infrastructure
  • Payment processor is PCI Level 1 certified
  • API keys for payment processor are secured
  • Application does not log or store card data accidentally
  • Error messages do not expose card details
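
To guard against accidental card-data logging, a log scrubber can detect candidate card numbers with a Luhn check and mask all but the last four digits. A minimal sketch (the regex and masking format are illustrative):

```python
import re

# Candidate spans of 13-19 digits, optionally separated by spaces or dashes.
CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_valid(digits):
    """Standard Luhn checksum; true for real PANs (and some coincidences)."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def mask_pans(text):
    """Mask Luhn-valid card numbers, keeping only the last four digits."""
    def repl(match):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return "[PAN ****" + digits[-4:] + "]"
        return match.group()  # not a plausible PAN; leave untouched
    return CANDIDATE.sub(repl, text)
```

Run this over log lines and error messages before they are persisted; ordinary order numbers that fail the Luhn check pass through unchanged.
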

Clarify your scope before making infrastructure decisions.

HIPAA on a managed PaaS

HIPAA compliance for cloud deployments requires a Business Associate Agreement (BAA) between you and your cloud provider. Without a BAA, you cannot use the platform for PHI (Protected Health Information) workloads regardless of its security controls.

| Provider responsibility (via BAA) | Your responsibility |
| --- | --- |
| Infrastructure security | Application access controls |
| Platform encryption | PHI encryption configuration |
| Physical security | Workforce training |
| Their incident response | Your policies and procedures |
| | Audit logging implementation |
| | Minimum necessary access |

HIPAA deployment checklist:

  • BAA executed before deploying PHI
  • Encryption at rest for all PHI storage
  • Encryption in transit for all PHI transmission
  • Access controls limit PHI to workforce with legitimate need
  • Audit logging captures PHI access
  • Backup and recovery procedures documented
  • Incident response plan includes PHI breach procedures

Some providers restrict support access when a BAA is in effect to avoid exposing PHI during troubleshooting. Understand these restrictions before signing.

Role-based access control on a serverless stack

Role-based access control on a managed platform operates differently than on self-managed infrastructure. You don't configure IAM policies for individual functions. Instead, you work with the platform's team permissions, environment separation, and service-level secrets. The goal remains least privilege, but the mechanisms are different.

Separate production and staging at the project level

Most PaaS platforms scope permissions to projects or workspaces. If your platform only offers broad roles (admin/member) without environment-level granularity, create separate projects for production and staging. This lets you give engineers full access to staging while restricting production access to a smaller group.

Scope secrets per service

Each service should have access only to the secrets it needs. A background worker processing jobs should not have access to your Stripe API key if it never handles payments. On platforms where environment variables are set per-service, this is straightforward. On platforms where environment variables are shared across a project, you need to be more deliberate about what you expose.

Common mistakes:

  • Sharing a single database connection string across all services when some only need read access
  • Giving every service access to third-party API keys "for convenience"
  • Using the same credentials for staging and production services

Treat CI/CD tokens as service accounts

CI/CD pipelines need credentials to deploy. These tokens are service accounts and should follow the same principles:

  • Create separate tokens for staging and production deployments
  • Scope tokens to the minimum permissions needed (deploy, not team management)
  • Store tokens in your CI/CD platform's secrets management, not in repository code
  • Rotate tokens on a schedule and immediately when engineers with access leave
  • Audit token usage for deployments you don't recognize

If your platform offers project-scoped or environment-scoped tokens, use them. A token that can deploy to staging should not be able to deploy to production.

Enforce MFA at the identity provider

Platform-level MFA is useful but limited. If your team authenticates through an identity provider (Okta, Azure AD, Google Workspace), enforce MFA there. This gives you:

  • Consistent MFA policy across all tools, not just your deployment platform
  • Conditional access rules (require MFA for production access, allow remembered devices for staging)
  • Centralized audit logs for authentication events
  • Automatic access revocation when someone leaves the organization

For platforms supporting SAML, enable it and consider enforcing SAML Strict Mode to prevent users from bypassing SSO with direct login.

Make access reviews specific

Access reviews are a compliance control, but they're only useful if you ask specific questions:

  • Who has access to production? For each person, what is the business justification?
  • What deployment tokens exist? Which CI/CD pipelines use them? Are any orphaned?
  • Who has access to view or modify secrets? Is this the same list as production deploy access, or broader?
  • Have any contractors or former employees retained access?
  • Are there any shared accounts or credentials that multiple people use?

Document the review and any changes made. This documentation is audit evidence.
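
A lightweight way to run such a review is to diff actual access grants against the approved list from the previous review. A sketch (user and role names are hypothetical):

```python
def review_access(actual, approved):
    """Compare actual access grants against the approved list.

    Both arguments map user -> role. Returns three finding lists:
    access with no sign-off, approvals no longer in use, and role drift."""
    return {
        "unapproved": sorted(set(actual) - set(approved)),
        "stale": sorted(set(approved) - set(actual)),
        "role_drift": sorted(u for u in set(actual) & set(approved)
                             if actual[u] != approved[u]),
    }
```

Export the platform's member list, run the diff, and attach the output to the review record as audit evidence.
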

Service-level separation vs function-level IAM

On raw serverless infrastructure (e.g. AWS Lambda with IAM), you can assign each function its own IAM role with specific permissions. A function that reads from S3 gets s3:GetObject on specific buckets. A function that writes to DynamoDB gets dynamodb:PutItem on specific tables. This is fine-grained but operationally complex.

On a managed PaaS, you typically don't get function-level IAM. You get service-level separation: each service has its own environment variables, its own network identity, and its own scaling. The practical approach is to treat each service as a permission boundary. If two pieces of code need different access levels, deploy them as separate services rather than trying to manage permissions within a single service.

The tradeoff is coarser granularity in exchange for simpler operations. For most teams, this is the right tradeoff. If you need function-level IAM policies, you're likely building on raw cloud infrastructure rather than a managed platform.

How Railway supports compliance

Railway is SOC 2 Type II and SOC 3 certified, with HIPAA compliance and GDPR support. BAAs and penetration test reports are available upon request, and you can view audit, compliance, security, and regulatory documents at trust.railway.com. The platform is used by companies across regulated industries including healthcare, finance, and government, with 23% of Fortune 500 companies running workloads on Railway. Learn more about Railway's enterprise capabilities in the documentation.

| Framework | Status | How to access |
| --- | --- | --- |
| SOC 2 Type II | Certified | Request via Trust Center |
| SOC 3 | Certified | Public download |
| HIPAA | BAA available | Add-on with spend threshold |
| GDPR | DPA available | Self-service via DocuSign |
| EU-US Data Privacy Framework | Participant | Documented in Trust Center |
| Swiss-US Data Privacy Framework | Participant | Documented in Trust Center |
| EU DORA | Documentation available | Under NDA |

For HIPAA workloads, Railway offers Business Associate Agreements as an add-on. When a BAA is in effect, Railway team members can no longer directly access running workloads.

The GDPR Data Processing Agreement is available via self-service at railway.com/legal/dpa. A full subprocessor list is published in the Trust Center, including Stripe for billing, Cloudflare for CDN, and Google Cloud for infrastructure.

The Trust Center at trust.railway.com provides access to:

  • SOC 2 Type II Report (request access)
  • SOC 3 Report (public download)
  • Penetration test report by IOActive (request access)
  • HIPAA Report (request access)

Railway’s Trust Center compliance badges - trust.railway.com

| Control | Implementation |
| --- | --- |
| Encryption at rest | AES-256 |
| Encryption in transit | TLS, fails closed on interruption |
| Secrets management | Cloud provider KMS |
| Data backups | Real-time, with daily full and weekly incremental |
| Data erasure | Secure deletion protocols |
| Physical security | Equinix data centers with two-factor access at multiple checkpoints |

Railway supports SAML-based SSO for centralized authentication. Organizations can enforce SAML Strict Mode to require all users authenticate through SSO. MFA is mandatory for users accessing production environments. Team management uses role-based access control with environment-level permissions.

Railway implements zero-trust networking with:

  • End-to-end encryption
  • Automated firewall rules on private networking
  • Network isolation between projects and environments
  • DDoS protection at the edge
  • 50ms p95 global network RTT

Railway's security practices include vulnerability and patch management with regular scans, code analysis through peer reviews and static/dynamic testing, secure development training for personnel, and a bug bounty program for responsible disclosure at bugbounty@railway.com. A Web Application Firewall and bot detection protect against automated threats.

Railway maintains audit logging across support interactions, web applications, technical operations, and production infrastructure. Logs capture deployment events, configuration changes, and access patterns.

Hosting Options

For organizations with specific isolation or compliance requirements, Railway offers three hosting models:

| Option | Description | Use case |
| --- | --- | --- |
| Railway-managed infrastructure | Standard cloud offering | Most workloads |
| Railway-managed dedicated VMs | Workloads run on dedicated hosts | Complete resource isolation |
| Bring your own cloud | Railway deploys within your VPC | Ultimate compliance control; use existing cloud commitments |

Railway supports multiple environments per project, allowing you to isolate staging, production, and feature work within a single project structure. This addresses the environment separation requirements discussed earlier in this guide.

Environment types:

  • Duplicate an existing environment to create staging or testing copies with identical service configuration
  • Create empty environments for custom setups
  • PR environments spin up automatically when you open a pull request and are deleted when the PR is merged or closed

Environment RBAC (Enterprise):

For production environments handling sensitive data, Railway Enterprise offers restricted environments. Non-admin members can see that these environments exist but cannot access their resources: variables, logs, metrics, services, or configurations. They can still trigger deployments via git push, maintaining deployment workflows while restricting access to production data.

| Role | Can access restricted environment | Can toggle restriction |
| --- | --- | --- |
| Admin | Yes | Yes |
| Member | No | No |
| Deployer | No | No |

Workspace roles:

At the workspace level, Railway provides three roles with different permission boundaries:

| Permission | Admin | Member | Deployer |
| --- | --- | --- | --- |
| Automatic GitHub deployments | Yes | Yes | Yes |
| CLI deployments | Yes | Yes | No |
| Create/modify/delete variables | Yes | Yes | No |
| Modify service settings | Yes | No | No |
| View logs | Yes | Yes | No |
| Create services | Yes | Yes | No |
| Delete services/projects | Yes | No | No |
| Manage team members | Yes | No | No |
| Access billing | Yes | No | No |

The Deployer role is useful for CI/CD service accounts or contractors who should trigger deployments but not access logs or configuration.
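
As an illustration (not Railway's API), a workspace permission matrix like the one above can be encoded directly in code for policy tests or service-side checks. Role and permission names here are a simplified rendering.

```python
# Simplified, illustrative roles and permissions; consult your platform's
# documentation for the authoritative matrix.
PERMISSIONS = {
    "admin":    {"git_deploy", "cli_deploy", "edit_variables", "view_logs",
                 "create_services", "delete_services", "manage_team", "billing"},
    "member":   {"git_deploy", "cli_deploy", "edit_variables", "view_logs",
                 "create_services"},
    "deployer": {"git_deploy"},
}

def can(role, permission):
    """Least-privilege check: unknown roles and permissions get nothing."""
    return permission in PERMISSIONS.get(role, set())
```

Encoding the matrix this way lets you assert in tests that, for example, a deployer token can never read logs or edit variables.
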

Enterprise customers receive 24/7 support with 1-hour response times, contractual SLAs, and a private Slack channel. The enterprise tier offers 99.99% SLA with resource limits of 1TB RAM, 1000 vCPU, and 50TB disk.

Review compliance documentation at trust.railway.com. For HIPAA BAAs, enterprise agreements, or compliance questions, contact team@railway.com or schedule time at cal.com/team/railway/work-with-railway.

Compliance on managed infrastructure is achievable when you understand the shared responsibility model and verify that your provider's controls align with your framework requirements. The questions in this guide provide a checklist for evaluating providers and structuring your own implementation work.