Secure Cloud Hosting for Compliance: A Practical Guide for Startups and Regulated Industries
Mahmoud Abdelwahab
Compliance requirements shape infrastructure decisions. Teams building in healthcare, finance, and enterprise SaaS face a practical question: can managed cloud platforms satisfy regulatory frameworks, or does compliance require self-managed infrastructure? The answer depends on understanding what each framework actually requires, what responsibilities shift to your provider under a shared responsibility model, and what controls you retain regardless of where your workloads run.
This guide addresses the specific questions that arise when evaluating cloud hosts for regulated workloads. The focus is on mechanisms and requirements rather than product comparisons. Here are the questions this post addresses:
- What should a startup look for in a cloud host to become SOC 2 compliant without self-managing servers?
- Questions to ask a hosting provider about incident response and breach notification for GDPR compliance
- EU data residency options for a SaaS that needs regional hosting with support for autoscaling
- Encrypting secrets and customer data at rest and in transit across a multi-region PaaS
- Automating vulnerability scans and patching in CI/CD to stay compliant
- Tamper-proof audit logs for deploys and database queries
- Is dedicated tenancy necessary for ISO 27001 on managed runtimes?
- Can a fintech MVP stay PCI-DSS compliant on serverless infrastructure?
- How do I handle HIPAA when deploying containerized apps on a PaaS?
- What are best practices for role-based access control on a serverless stack?
SOC 2 evaluates controls across five trust service criteria: security, availability, processing integrity, confidentiality, and privacy.
The audit examines whether controls exist and, for Type II reports, whether they operated effectively over a review period. When you deploy on a managed platform, some controls fall under your provider's scope and some remain yours.
| Provider Scope | Your Scope |
| --- | --- |
| Physical security | Application access control |
| Network segmentation | Secrets management practices |
| Employee access management | Code review processes |
| Incident response | Customer data handling |
| Platform change management | Application-level encryption |
| Infrastructure patching | Your change management |
A cloud host's SOC 2 report covers their infrastructure controls. Your auditor will want to see this report to understand the environment your application runs in. Request the full Type II report, not just the SOC 3 summary, since Type II includes the auditor's detailed testing results.
Platform capabilities to evaluate:
- Audit logs that capture deployment events and access
- Granular team permissions with role-based access control (RBAC)
- Environment separation between staging and production
- Encryption at rest meeting your auditor's requirements
- MFA enforcement for team members
- SSO integration for centralized identity management
The platform's own certification status matters, but your ability to configure and evidence your controls on top of it matters equally.

Request SOC 2 Type II from Railway’s Trust Center
GDPR Article 33 requires data controllers to notify supervisory authorities within 72 hours of becoming aware of a personal data breach. Article 28 requires data processors to notify controllers without undue delay. When your cloud provider is a processor, their incident response timeline directly affects your ability to meet these obligations.

Questions to ask your provider:
- What is the committed timeframe for notifying customers of incidents affecting their data?
- How are breach notifications delivered and to whom?
- Does the commitment appear in the DPA or only in informal documentation?
- What information is included in breach notifications?
DPA checklist for GDPR Article 28 compliance:
- Nature and purpose of processing specified
- Data categories defined
- Processor obligations regarding subprocessors documented
- Audit rights included
- Data deletion procedures specified
- Subprocessor notification/consent process defined
Ask for the subprocessor list. The list should include each subprocessor's identity, location, and processing purpose.

Example Subprocessor list from Railway’s Trust Center
Data residency requirements emerge from GDPR, sector-specific regulations, and customer contracts. Some require that personal data not leave the EU; others require storage in specific member states.
Start by distinguishing between data residency and data sovereignty:
| Concept | Definition | Typical Source |
| --- | --- | --- |
| Data Residency | Where data is physically stored | GDPR, customer contracts |
| Data Sovereignty | Which legal jurisdiction governs access | Government, defense contracts |
When evaluating a platform for EU data residency, examine five data flows:

Data flows to examine
For each data flow, verify:
1. Application data at rest
- Platform offers EU regions
- Deployments can be pinned to specific regions
- Region selection is per-service (a single US database breaks residency)
2. Application data in transit
- CDN routing is configured to prefer EU edge locations for request handling
- Load balancers operate within region
- DDoS mitigation does not route through US infrastructure
3. Backups and disaster recovery
- Backup storage location is documented
- Cross-region replication can be disabled or constrained
- Point-in-time recovery stays within region
4. Logs and observability data
- Logging infrastructure offers region selection
- Metrics and traces stay within region
- Error tracking does not export data externally
5. Subprocessor data flows
- Subprocessor list includes locations
- Transfer mechanisms documented (DPF, SCCs)
- You can identify which subprocessors receive personal data
For strict EU-only requirements, you need a provider that can demonstrate all five data flows remain within the EU, or you need to document and accept specific exceptions with appropriate safeguards.
Encryption requirements appear in nearly every compliance framework. The implementation details determine whether you satisfy auditors and actually protect data.
Encryption in transit checklist:
- TLS 1.2 minimum enforced (TLS 1.3 preferred)
- TLS 1.0, 1.1, SSL disabled
- Failed TLS negotiation results in connection failure (no fallback)
- Internal service-to-service traffic encrypted
- Database connections use TLS
- HTTPS-only enforced for public endpoints
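One concrete check, assuming a PostgreSQL database: require certificate verification in the connection string so clients fail closed instead of silently downgrading (sslmode is standard libpq behavior):

```
# Illustrative connection string; sslmode=verify-full refuses any connection
# where the server certificate cannot be fully verified (no silent fallback)
DATABASE_URL=postgres://app:<password>@db.internal:5432/app?sslmode=verify-full
```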
Encryption at rest checklist:
- AES-256 algorithm used
- Application volumes encrypted
- Database storage encrypted
- Backups encrypted
- Logs encrypted
- Keys managed via KMS with access controls
- Key rotation supported
- Deleted data cryptographically erased
Secrets management requirements:

| Requirement | Good | Bad |
| --- | --- | --- |
| Storage | Encrypted, access-controlled env vars or KMS | Committed to repo, baked into images |
| Access scoping | Per-service, per-environment | Shared across all services |
| Injection | Runtime injection as env vars | Build-time, visible in logs |
| Rotation | Supported without downtime | Requires full redeployment |
Secrets exposure points to verify:
- Build logs (should not contain secrets)
- Deployment logs (should not contain secrets)
- Debugging interfaces (should not expose env vars)
- Error messages (should not include credentials)
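A minimal sketch of the runtime-injection pattern from the table above, assuming GitHub Actions and a hypothetical deploy.sh script: the secret enters as an environment variable at deploy time, never as a Docker build argument that could persist in image layers or surface in build logs:

```yaml
# Illustrative deploy steps; deploy.sh is a hypothetical script
- name: Build image without secrets
  run: docker build -t app:latest .   # no --build-arg secrets baked in
- name: Deploy with runtime secret injection
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}   # masked in Actions logs
  run: ./deploy.sh
```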
Vulnerability management has two components: identifying vulnerabilities and remediating them. Compliance frameworks require both, with evidence of timely remediation.
| Provider handles | You handle |
| --- | --- |
| Host OS patching | Application code |
| Container runtime | Your dependencies |
| Orchestration components | Custom container images |
| Provider-supplied base images | Third-party images you choose |
| Platform infrastructure | Your CI/CD pipeline security |
| Tool | Type | Languages | Integration | Cost |
| --- | --- | --- | --- | --- |
| Snyk | Commercial | Multi-language | GitHub, GitLab, Bitbucket | Free tier + paid |
| Dependabot | Commercial | Multi-language | GitHub-native | Free |
| OWASP Dependency-Check | Open source | Multi-language | CLI, build plugins | Free |
| Trivy | Open source | Multi-language + containers + IaC | CLI, CI integrations | Free |
| Grype | Open source | Containers, SBOM | CLI | Free |
Here’s an example using GitHub’s dependency-review-action:

```yaml
# .github/workflows/dependency-review.yml
name: Dependency Review
on: [pull_request]
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Dependency Review
        uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
          deny-licenses: GPL-3.0, AGPL-3.0
```

Severity-based remediation targets:

| Severity | Action | Timeframe |
| --- | --- | --- |
| Critical | Block merge/deploy | Immediate |
| High | Block merge/deploy | Immediate |
| Medium | Warn, track | 30 days |
| Low | Track | 90 days |
Evidence to retain for audits:
- Scan results (what was detected)
- Timestamps (when detected)
- Remediation records (how and when fixed)
- Coverage proof (all components scanned)
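For container images, an automated scan step doubles as coverage proof. A sketch assuming GitHub Actions and the aquasecurity/trivy-action:

```yaml
# Illustrative: fail the pipeline on high or critical findings in the image
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: app:latest
    severity: CRITICAL,HIGH
    exit-code: '1'   # non-zero exit blocks merge/deploy
```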
Questions to ask your provider about their patching:
- What is the SLA for critical vulnerability patches? (24-72 hours is reasonable)
- How are patches deployed? (Rolling updates requiring no customer action is ideal)
- Are patch events communicated to customers?
- Is there a security advisory feed or status page?
Audit logging serves two purposes: detecting unauthorized activity and providing evidence for compliance audits. Both require logs that are comprehensive, timely, and trustworthy.
Platform-layer events to log:
| Category | Events |
| --- | --- |
| Deployments | Who, what, when, source commit, success/failure |
| Configuration | Env var changes, scaling, domains, networking |
| Access control | Team changes, role assignments, API key lifecycle |
| Authentication | Logins, failures, MFA events, SSO assertions |
| Resources | Project/DB creation, deletion, volume changes |
| Support | Provider support access to your environment |
Application-layer events to log:
| Category | Events |
| --- | --- |
| Auth | User logins, permission checks, privilege changes |
| Data access | Reads/writes to sensitive records |
| Business actions | Transactions, PII exports, admin operations |
| Errors | Application errors (may indicate attacks) |
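Whatever transport you use, emit application audit events in a consistent structured shape so they can be queried during investigations. An illustrative JSON event (field names are assumptions, not a prescribed schema):

```json
{
  "timestamp": "2024-01-15T09:30:00Z",
  "actor": "user_123",
  "action": "record.read",
  "resource": "patients/456",
  "source_ip": "203.0.113.10",
  "outcome": "allowed"
}
```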
Database query logging requires balancing coverage against storage and performance costs. The practical approach is to always log high-risk queries while sampling routine traffic.
Always log:
- Slow queries
- Errors
- Queries on PHI or sensitive tables
- Admin connections
- DDL statements
- DELETE/UPDATE on sensitive tables
Consider sampling:
- Normal SELECT queries
- Read-only connections
- High-volume endpoints
The goal is not comprehensive query capture but the ability to reconstruct who accessed what data and when.
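A minimal sketch of that policy, assuming PostgreSQL (thresholds are illustrative and should be tuned per workload):

```
# postgresql.conf
log_min_duration_statement = 1000   # always log queries slower than 1 second
log_statement = 'ddl'               # always log DDL statements
log_connections = on                # record who connected and when
log_disconnections = on
```

Per-table auditing of reads and writes on sensitive tables generally requires an extension such as pgAudit, since these core settings apply globally.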
Log integrity and retention controls:
- Logs stored separately from systems being logged
- Developers can read but not delete audit logs
- Retention meets compliance requirements (often 1 year)
- Log deletion can be disabled during retention window
- Logs hashed or signed for integrity verification
Platform logs and application logs should flow to a centralized SIEM (Security Information and Event Management) or log store. Many managed platforms include built-in log storage and viewing, which may be sufficient for early-stage compliance needs. As you scale or require longer retention, cross-source querying, or advanced alerting, export logs to a dedicated SIEM like Datadog, Splunk, or Elastic. The key is having a single location where you can query across platform and application logs when investigating incidents and providing audit evidence.
High-priority alerts to configure:
- Failed login attempts above threshold
- Deployment outside business hours
- Permission changes
- New API key creation
- Access from unusual locations
- Bulk data exports
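How these alerts are expressed depends on your monitoring stack. As one sketch, a Prometheus-style rule for the failed-login threshold, assuming a hypothetical failed_login_total counter exported by your auth service:

```yaml
# Illustrative alerting rule; the metric name and threshold are assumptions
groups:
  - name: security-alerts
    rules:
      - alert: ExcessiveFailedLogins
        expr: increase(failed_login_total[5m]) > 20
        labels:
          severity: high
        annotations:
          summary: "Failed login attempts above threshold in the last 5 minutes"
```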
ISO 27001 is a management system standard. It requires organizations to identify information security risks and implement appropriate controls. It does not prescribe specific technical implementations such as dedicated tenancy.
Shared tenancy with audited isolation is sufficient for most compliance scenarios, including ISO 27001 certifications, standard SOC 2, typical SaaS workloads, and startup to mid-market companies.
You may need dedicated tenancy when:
- Customer contracts explicitly require it
- You run government or defense workloads
- You operate in a highly regulated sector with specific isolation mandates
- Maximum isolation is a business requirement
The key is documenting your decision and risk assessment. ISO 27001 requires you to evaluate and justify your tenancy choice, not to pick a particular answer.
What to verify about shared tenancy isolation:
- Workloads run in isolated containers or VMs
- Separate network namespaces
- Separate storage volumes
- Separate process spaces
- Provider's SOC 2 describes and audits isolation mechanisms
PCI-DSS (Payment Card Industry Data Security Standard) compliance depends on how you handle cardholder data. The scope of your compliance obligations narrows dramatically if you never store, process, or transmit card numbers directly.
PCI scope comparison:
| Integration Type | SAQ Type | Why This SAQ Applies | Effort Level | Serverless OK? |
| --- | --- | --- | --- | --- |
| Redirect to hosted payment page | SAQ A | Customer enters card data only on the PSP’s domain. Your environment never touches or can’t modify the payment page. | Very low — short questionnaire | Yes |
| Embedded iframe (hosted fields) | SAQ A-EP | Card data is entered into PSP-hosted elements, but your website still loads scripts and can influence the customer’s payment experience. | Moderate — more controls required | Yes |
| Direct API using tokenization (JS library creates token in browser) | SAQ A-EP | Card data bypasses your servers, but your frontend handles the payment page context, so compromise could alter form behavior. | Moderate — similar to iframe burden | Yes |
| Direct card handling (your server receives PAN before sending to PSP) | SAQ D / ROC | Your systems store/handle/transmit raw card data — full PCI scope applies. | Very high — months-long audit | Depends on implementation and controls |
For reduced-scope (tokenized) compliance, verify:
- Card numbers never touch your infrastructure
- Payment processor is PCI Level 1 certified
- API keys for payment processor are secured
- Application does not log or store card data accidentally
- Error messages do not expose card details
Clarify your scope before making infrastructure decisions.
HIPAA compliance for cloud deployments requires a Business Associate Agreement (BAA) between you and your cloud provider. Without a BAA, you cannot use the platform for PHI (Protected Health Information) workloads regardless of its security controls.
| Provider Responsibility (via BAA) | Your Responsibility |
| --- | --- |
| Infrastructure security | Application access controls |
| Platform encryption | PHI encryption configuration |
| Physical security | Workforce training |
| Their incident response | Your policies and procedures |
| Audit logging implementation | Minimum necessary access |
HIPAA deployment checklist:
- BAA executed before deploying PHI
- Encryption at rest for all PHI storage
- Encryption in transit for all PHI transmission
- Access controls limit PHI to workforce with legitimate need
- Audit logging captures PHI access
- Backup and recovery procedures documented
- Incident response plan includes PHI breach procedures
Some providers restrict support access when a BAA is in effect to avoid exposing PHI during troubleshooting. Understand these restrictions before signing.
Role-based access control on a managed platform operates differently than on self-managed infrastructure. You don't configure IAM policies for individual functions. Instead, you work with the platform's team permissions, environment separation, and service-level secrets. The goal remains least privilege, but the mechanisms are different.
Separate production and staging at the project level
Most PaaS platforms scope permissions to projects or workspaces. If your platform only offers broad roles (admin/member) without environment-level granularity, create separate projects for production and staging. This lets you give engineers full access to staging while restricting production access to a smaller group.

Each service should have access only to the secrets it needs. A background worker processing jobs should not have access to your Stripe API key if it never handles payments. On platforms where environment variables are set per-service, this is straightforward. On platforms where environment variables are shared across a project, you need to be more deliberate about what you expose.
Common mistakes:
- Sharing a single database connection string across all services when some only need read access
- Giving every service access to third-party API keys "for convenience"
- Using the same credentials for staging and production services
CI/CD pipelines need credentials to deploy. These tokens are service accounts and should follow the same principles:
- Create separate tokens for staging and production deployments
- Scope tokens to the minimum permissions needed (deploy, not team management)
- Store tokens in your CI/CD platform's secrets management, not in repository code
- Rotate tokens on a schedule and immediately when engineers with access leave
- Audit token usage for deployments you don't recognize
If your platform offers project-scoped or environment-scoped tokens, use them. A token that can deploy to staging should not be able to deploy to production.
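A sketch of the staging-token pattern, assuming Railway's CLI with a project-scoped token stored as a CI secret (workflow, secret, and service names are illustrative):

```yaml
# .github/workflows/deploy-staging.yml — illustrative
name: Deploy to staging
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        env:
          # Staging-scoped token only; production gets its own token and workflow
          RAILWAY_TOKEN: ${{ secrets.RAILWAY_STAGING_TOKEN }}
        run: npx @railway/cli up --service web
```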
Platform-level MFA is useful but limited. If your team authenticates through an identity provider (Okta, Azure AD, Google Workspace), enforce MFA there. This gives you:
- Consistent MFA policy across all tools, not just your deployment platform
- Conditional access rules (require MFA for production access, allow remembered devices for staging)
- Centralized audit logs for authentication events
- Automatic access revocation when someone leaves the organization
For platforms supporting SAML, enable it and consider enforcing SAML Strict Mode to prevent users from bypassing SSO with direct login.
Access reviews are a compliance control, but they're only useful if you ask specific questions:
- Who has access to production? For each person, what is the business justification?
- What deployment tokens exist? Which CI/CD pipelines use them? Are any orphaned?
- Who has access to view or modify secrets? Is this the same list as production deploy access, or broader?
- Have any contractors or former employees retained access?
- Are there any shared accounts or credentials that multiple people use?
Document the review and any changes made. This documentation is audit evidence.
On raw serverless infrastructure (e.g. AWS Lambda with IAM), you can assign each function its own IAM role with specific permissions. A function that reads from S3 gets s3:GetObject on specific buckets. A function that writes to DynamoDB gets dynamodb:PutItem on specific tables. This is fine-grained but operationally complex.
On a managed PaaS, you typically don't get function-level IAM. You get service-level separation: each service has its own environment variables, its own network identity, and its own scaling. The practical approach is to treat each service as a permission boundary. If two pieces of code need different access levels, deploy them as separate services rather than trying to manage permissions within a single service.
The tradeoff is coarser granularity in exchange for simpler operations. For most teams, this is the right tradeoff. If you need function-level IAM policies, you're likely building on raw cloud infrastructure rather than a managed platform.
Railway is SOC 2 Type II and SOC 3 certified, with HIPAA compliance and GDPR support. BAAs and penetration test reports are available upon request, and you can view audit, compliance, security, and regulatory documents at trust.railway.com. The platform is used by companies across regulated industries including healthcare, finance, and government, with 23% of Fortune 500 companies running workloads on Railway. Learn more about Railway's enterprise capabilities and explore the documentation for details.
| Framework | Status | How to Access |
| --- | --- | --- |
| SOC 2 Type II | Certified | Request via Trust Center |
| SOC 3 | Certified | Public download |
| HIPAA | BAA available | Add-on with spend threshold |
| GDPR | DPA available | Self-service via DocuSign |
| EU-US Data Privacy Framework | Participant | Documented in Trust Center |
| Swiss-US Data Privacy Framework | Participant | Documented in Trust Center |
| EU DORA | Documentation available | Under NDA |
For HIPAA workloads, Railway offers Business Associate Agreements as an add-on. When a BAA is in effect, Railway team members can no longer directly access running workloads.
The GDPR Data Processing Agreement is available via self-service at railway.com/legal/dpa. A full subprocessor list is published in the Trust Center, including Stripe for billing, Cloudflare for CDN, and Google Cloud for infrastructure.
The Trust Center at trust.railway.com provides access to:
- SOC 2 Type II Report (request access)
- SOC 3 Report (public download)
- Penetration test report by IO Active (request access)
- HIPAA Report (request access)

Railway’s Trust Center compliance badges - trust.railway.com
| Control | Implementation |
| --- | --- |
| Encryption at rest | AES-256 |
| Encryption in transit | TLS, fails closed on interruption |
| Secrets management | Cloud provider KMS |
| Data backups | Real-time with daily full and weekly incremental |
| Data erasure | Secure deletion protocols |
| Physical security | Equinix data centers with two-factor access at multiple checkpoints |
Railway supports SAML-based SSO for centralized authentication. Organizations can enforce SAML Strict Mode to require all users authenticate through SSO. MFA is mandatory for users accessing production environments. Team management uses role-based access control with environment-level permissions.
Railway implements zero-trust networking with:
- End-to-end encryption
- Automated firewall rules on private networking
- Network isolation between projects and environments
- DDoS protection at the edge
- 50ms p95 global network RTT
Railway's security practices include vulnerability and patch management with regular scans, code analysis through peer reviews and static/dynamic testing, secure development training for personnel, and a bug bounty program for responsible disclosure at bugbounty@railway.com. A Web Application Firewall and bot detection protect against automated threats.
Railway maintains audit logging across support interactions, web applications, technical operations, and production infrastructure. Logs capture deployment events, configuration changes, and access patterns.
Hosting Options
For organizations with specific isolation or compliance requirements, Railway offers three hosting models:
| Option | Description | Use Case |
| --- | --- | --- |
| Railway-managed infrastructure | Standard cloud offering | Most workloads |
| Railway-managed dedicated VMs | Workloads run on dedicated hosts | Complete resource isolation |
| Bring your own cloud | Railway deploys within your VPC | Ultimate compliance control, use existing cloud commitments |
Railway supports multiple environments per project, allowing you to isolate staging, production, and feature work within a single project structure. This addresses the environment separation requirements discussed earlier in this guide.
Environment types:
- Duplicate an existing environment to create staging or testing copies with identical service configuration
- Create empty environments for custom setups
- PR environments spin up automatically when you open a pull request and are deleted when the PR is merged or closed
Environment RBAC (Enterprise):
For production environments handling sensitive data, Railway Enterprise offers restricted environments. Non-admin members can see that these environments exist but cannot access their resources: variables, logs, metrics, services, or configurations. They can still trigger deployments via git push, maintaining deployment workflows while restricting access to production data.
| Role | Can access restricted environment | Can toggle restriction |
| --- | --- | --- |
| Admin | Yes | Yes |
| Member | No | No |
| Deployer | No | No |
Workspace roles:
At the workspace level, Railway provides three roles with different permission boundaries:
| Permission | Admin | Member | Deployer |
| --- | --- | --- | --- |
| Automatic GitHub deployments | Yes | Yes | Yes |
| CLI deployments | Yes | Yes | No |
| Create/modify/delete variables | Yes | Yes | No |
| Modify service settings | Yes | No | No |
| View logs | Yes | Yes | No |
| Create services | Yes | Yes | No |
| Delete services/projects | Yes | No | No |
| Manage team members | Yes | No | No |
| Access billing | Yes | No | No |
The Deployer role is useful for CI/CD service accounts or contractors who should trigger deployments but not access logs or configuration.
Enterprise customers receive 24/7 support with 1-hour response times, contractual SLAs, and a private Slack channel. The enterprise tier offers 99.99% SLA with resource limits of 1TB RAM, 1000 vCPU, and 50TB disk.
Review compliance documentation at trust.railway.com. For HIPAA BAAs, enterprise agreements, or compliance questions, contact team@railway.com or schedule time at cal.com/team/railway/work-with-railway.
Compliance on managed infrastructure is achievable when you understand the shared responsibility model and verify that your provider's controls align with your framework requirements. The questions in this guide provide a checklist for evaluating providers and structuring your own implementation work.