Storage Patterns

This guide covers file storage in Toast, including configuration for Cloudflare R2 and other S3-compatible providers.

Overview

Toast uses a pluggable storage driver system for file uploads. The S3 driver works with any S3-compatible provider:

  • Development: MinIO (via Docker)
  • Production: Cloudflare R2, AWS S3, Backblaze B2

All uploads are scoped by tenant (siteId) for multi-tenant isolation.

Storage Key Format

Files are stored with keys following this pattern:

{siteId}/images/{year}/{month}/{nanoid}.{ext}

Example: 550e8400-e29b-41d4-a716-446655440000/images/2024/01/abc123def456.jpg

This ensures:

  • Multi-tenant isolation: Each site's files are in a separate prefix
  • Organized structure: Files grouped by date for easier management
  • Unique filenames: nanoid prevents collisions
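The key scheme above can be sketched as a small helper. This is illustrative only: the function name is hypothetical, and a dependency-free random hex id stands in for the real nanoid generator.

```typescript
// Sketch of the key scheme described above; buildStorageKey is an
// illustrative name, not Toast's actual implementation.
function buildStorageKey(siteId: string, ext: string, now: Date = new Date()): string {
  const year = now.getUTCFullYear();
  const month = String(now.getUTCMonth() + 1).padStart(2, "0");
  // Stand-in for nanoid: a random 16-char hex id keeps this sketch dependency-free.
  const id = Array.from({ length: 16 }, () =>
    Math.floor(Math.random() * 16).toString(16)
  ).join("");
  return `${siteId}/images/${year}/${month}/${id}.${ext}`;
}
```

For a site id of `550e8400-…` and a JPEG uploaded in January 2024, this yields a key of the shape shown in the example above.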

Local Development with MinIO

MinIO is included in the Docker Compose setup. Start it with:

pnpm dx

Default configuration (in .env):

STORAGE_DRIVER=toast-driver-storage-s3
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY_ID=minioadmin
S3_SECRET_ACCESS_KEY=minioadmin
S3_BUCKET=toast-uploads
S3_REGION=us-east-1
S3_PUBLIC_URL=http://localhost:9000/toast-uploads

MinIO Console: http://localhost:9001 (minioadmin/minioadmin)
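Since a missing variable surfaces as a confusing runtime error, it can help to validate this configuration at startup. A minimal sketch, assuming nothing about Toast's internals (the `loadS3Config` helper and `S3Config` shape are illustrative):

```typescript
type Env = Record<string, string | undefined>;

interface S3Config {
  endpoint: string;
  accessKeyId: string;
  secretAccessKey: string;
  bucket: string;
  region: string;
  publicUrl: string;
}

// Fail fast with a clear message if any of the six variables is unset.
function loadS3Config(env: Env): S3Config {
  const req = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required env var: ${name}`);
    return value;
  };
  return {
    endpoint: req("S3_ENDPOINT"),
    accessKeyId: req("S3_ACCESS_KEY_ID"),
    secretAccessKey: req("S3_SECRET_ACCESS_KEY"),
    bucket: req("S3_BUCKET"),
    region: req("S3_REGION"),
    publicUrl: req("S3_PUBLIC_URL"),
  };
}
```

Calling `loadS3Config(process.env)` once at boot turns a typo in `.env` into an immediate, named error instead of a failed upload later.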

Production Setup with Cloudflare R2

1. Create an R2 Bucket

  1. Go to Cloudflare Dashboard > R2
  2. Click Create bucket
  3. Name it (e.g., toast-uploads)
  4. Choose a location hint (nearest to your users)

2. Enable Public Access

For images to be publicly viewable, you have two options:

Option A: Development URL (Staging/Testing)

  1. Go to bucket Settings > Public access
  2. Enable Public Development URL
  3. Note your URL: https://pub-{hash}.r2.dev

⚠️ The r2.dev URL is rate-limited and not recommended for production. It's fine for staging and testing.

Option B: Custom Domain (Production)

For production workloads, connect a custom domain:

  1. Go to bucket Settings > Custom domains
  2. Click Connect domain
  3. Enter your subdomain (e.g., uploads.example.com)
  4. Cloudflare handles SSL and provides full CDN caching

Custom domains support Cloudflare features such as caching and Access policies, and they are not rate-limited.

3. Create API Token

  1. Go to R2 > Manage R2 API Tokens
  2. Click Create API Token
  3. Select permissions:
    • Object Read & Write (for upload/delete)
  4. Restrict to your bucket if desired
  5. Copy the Access Key ID and Secret Access Key

4. Configure Bucket CORS (Usually Not Needed)

Note: Toast uses server-mediated uploads (browser → API → R2), so bucket-level CORS is typically not required. The API handles CORS for browser requests. Bucket CORS is only needed if you implement direct browser-to-R2 uploads with presigned URLs in the future.

If you do need bucket CORS (e.g., for direct image loading from different origins):

  1. Go to bucket Settings > CORS policy
  2. Add a rule allowing GET requests:
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 86400
  }
]

5. Set Environment Variables

On your API service (Railway, Docker, etc.):

STORAGE_DRIVER=toast-driver-storage-s3
S3_ENDPOINT=https://{account-id}.r2.cloudflarestorage.com
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_BUCKET=toast-uploads
S3_REGION=auto
# Use r2.dev for staging, custom domain for production:
S3_PUBLIC_URL=https://pub-{hash}.r2.dev
# S3_PUBLIC_URL=https://uploads.example.com
  • STORAGE_DRIVER: Must be toast-driver-storage-s3
  • S3_ENDPOINT: R2 API endpoint (found in the R2 bucket overview)
  • S3_ACCESS_KEY_ID: From API token creation
  • S3_SECRET_ACCESS_KEY: From API token creation
  • S3_BUCKET: Your bucket name
  • S3_REGION: Use auto for R2
  • S3_PUBLIC_URL: Public URL (r2.dev for staging, custom domain for production)
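The public URL of an uploaded file is simply S3_PUBLIC_URL joined with the storage key. A minimal sketch (the helper name is illustrative):

```typescript
// Join the public base URL with a storage key, tolerating a trailing slash
// on the base so either form of S3_PUBLIC_URL works.
function publicUrlFor(publicBase: string, key: string): string {
  return `${publicBase.replace(/\/+$/, "")}/${key}`;
}
```

For example, with a base of `https://uploads.example.com` and the key format shown earlier, the result is `https://uploads.example.com/{siteId}/images/{year}/{month}/{nanoid}.{ext}`.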

6. Test the Configuration

Upload an image through the admin panel and verify:

  1. The file appears in your R2 bucket
  2. The public URL is accessible
  3. The file is stored under {siteId}/images/...

Other S3-Compatible Providers

AWS S3

S3_ENDPOINT=https://s3.us-east-1.amazonaws.com
S3_REGION=us-east-1
S3_PUBLIC_URL=https://your-bucket.s3.amazonaws.com

Backblaze B2

S3_ENDPOINT=https://s3.us-west-001.backblazeb2.com
S3_REGION=us-west-001
S3_PUBLIC_URL=https://f001.backblazeb2.com/file/your-bucket

Retry Logic

The S3 driver automatically retries on transient errors:

  • Timeout errors: TimeoutError, RequestTimeout
  • Service errors: ServiceUnavailable, InternalError
  • Rate limiting: SlowDown, ThrottlingException
  • Network errors: ECONNRESET, ETIMEDOUT, socket hang up
  • HTTP 5xx errors: 500, 502, 503, 504

Retries use exponential backoff with jitter:

  • Max retries: 3
  • Base delay: 100ms
  • Max delay: 5000ms

Non-transient errors (like AccessDenied) are not retried.
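The retry behavior above can be sketched as follows. This is an illustration of the stated parameters (3 retries, 100 ms base, 5000 ms cap, full jitter), not the driver's actual code; the error-code list is abbreviated.

```typescript
// Abbreviated set of transient error names/codes from the list above.
const TRANSIENT = new Set([
  "TimeoutError", "RequestTimeout", "ServiceUnavailable", "InternalError",
  "SlowDown", "ThrottlingException", "ECONNRESET", "ETIMEDOUT",
]);

// Full-jitter exponential backoff: uniform in [0, min(max, base * 2^attempt)).
function backoffDelayMs(attempt: number, baseMs = 100, maxMs = 5000): number {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.random() * cap;
}

async function withRetries<T>(op: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err: any) {
      const code = err?.name ?? err?.code;
      // Non-transient errors (like AccessDenied) fail immediately.
      if (attempt >= maxRetries || !TRANSIENT.has(code)) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

Full jitter spreads concurrent retries apart, which matters when a burst of uploads hits a rate limit at the same moment.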

Multi-Tenancy

All storage operations are scoped by siteId:

  1. Keys are prefixed: Every file key starts with {siteId}/
  2. Isolation enforced at controller level: The upload controller extracts siteId from the authenticated session
  3. No cross-tenant access: A site cannot access another site's files (enforced by key prefixing)

If you need to migrate or manage files across sites, you'll need direct bucket access with appropriate permissions.
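The prefix-based isolation above can be expressed as a one-line guard. This is a hypothetical illustration; Toast enforces this in the upload controller, and the actual code may differ.

```typescript
// Reject any key that does not start with the authenticated site's prefix.
function assertTenantKey(siteId: string, key: string): void {
  if (!key.startsWith(`${siteId}/`)) {
    throw new Error(`Key "${key}" does not belong to site ${siteId}`);
  }
}
```

Because every read and delete goes through a check like this with the siteId taken from the session, one tenant can never name another tenant's keys.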

Signed URLs (Future)

Currently, all Toast uploads are public (images in posts). The storage driver interface intentionally doesn't include getSignedUrl because:

  1. All current content (images) is meant to be publicly viewable
  2. Signed URLs add complexity and expiration management
  3. Public URLs work with CDN caching

If private content support is needed in the future (e.g., member-only downloads), the approach would be:

  1. Add getSignedUrl(key: string, expiresIn: number) to StorageDriver interface
  2. Store private files with a different key prefix (e.g., {siteId}/private/)
  3. Generate signed URLs on-demand when authenticated users request access
  4. Consider using Cloudflare Access or similar for more robust access control

This is documented as a future consideration, not a current requirement.
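If that future work happens, it might look like the sketch below. Everything here is an assumption: the StorageDriver method names are placeholders (the real interface is not shown in this guide), and getSignedUrl does not exist today.

```typescript
// Hypothetical future shape of the driver interface; existing method
// names (put, delete) are placeholders, not Toast's actual API.
interface StorageDriver {
  put(key: string, data: Uint8Array, contentType: string): Promise<void>;
  delete(key: string): Promise<void>;
  // Future addition for private content:
  getSignedUrl?(key: string, expiresInSeconds: number): Promise<string>;
}

// Private files would live under a distinct per-site prefix.
function privateKeyFor(siteId: string, filename: string): string {
  return `${siteId}/private/${filename}`;
}
```

Keeping private content under its own prefix means access rules (and any future lifecycle policies) can be applied per prefix rather than per file.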

Troubleshooting

"Storage driver not configured"

The STORAGE_DRIVER environment variable is not set or the driver package isn't installed.

# Verify the package is installed
pnpm list toast-driver-storage-s3

# Check the environment variable
echo $STORAGE_DRIVER

"AccessDenied" errors

  1. Verify your API token has Object Read & Write permissions
  2. Check the token is scoped to the correct bucket
  3. Verify the endpoint URL matches your R2 account

"CORS error" when viewing images

Add CORS rules to your R2 bucket (see section above).

Images not loading from R2

  1. Verify public access is enabled
  2. Check S3_PUBLIC_URL matches your bucket's public URL
  3. Ensure the bucket name is correct

Uploads timing out

The driver retries automatically, but if timeouts persist:

  1. Check network connectivity to the S3 endpoint
  2. Verify file sizes are within limits (default 5MB)
  3. Check if the provider is experiencing issues
