Storage Configuration
Configure file storage for uploaded documents and signed PDFs using database storage (default) or S3-compatible object storage.
Storage Options
| Backend | Best For | Scalability | Configuration |
|---|---|---|---|
| database | Small deployments, simplicity | Limited | None required |
| s3 | Production, large files, backups | High | Required |
Select the storage backend with the NEXT_PUBLIC_UPLOAD_TRANSPORT environment variable:
```bash
# Database storage (default)
NEXT_PUBLIC_UPLOAD_TRANSPORT=database

# S3-compatible storage
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
```

Database Storage
Database storage is the default option and requires no additional configuration. Documents are stored as base64-encoded data directly in PostgreSQL.
Advantages:

- No external dependencies
- Simple deployment
- Automatic backups with the database

Limitations:

- Increases database size significantly
- Slower for large files
- Database backup/restore takes longer
- Not recommended for files larger than 10MB
Configuration
No configuration required. Database storage is enabled when NEXT_PUBLIC_UPLOAD_TRANSPORT is unset or set to database.
S3 Configuration
S3 storage is recommended for production deployments. Documenso supports AWS S3 and any S3-compatible storage service.
Required Variables
| Variable | Description |
|---|---|
| NEXT_PUBLIC_UPLOAD_TRANSPORT | Set to s3 |
| NEXT_PRIVATE_UPLOAD_BUCKET | S3 bucket name |
| NEXT_PRIVATE_UPLOAD_REGION | AWS region (default: us-east-1) |
| NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID | AWS access key ID |
| NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY | AWS secret access key |
Optional Variables
| Variable | Description | Default |
|---|---|---|
| NEXT_PRIVATE_UPLOAD_ENDPOINT | Custom S3 endpoint for S3-compatible services | |
| NEXT_PRIVATE_UPLOAD_FORCE_PATH_STYLE | Use path-style URLs instead of virtual-hosted | false |
AWS S3 Setup
Create an S3 Bucket
Create a bucket in the AWS Console or using the CLI:
```bash
aws s3 mb s3://your-documenso-bucket --region us-east-1
```

Configure Bucket Policy
Block public access and configure CORS for presigned URL uploads:
CORS Configuration:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["https://your-documenso-domain.com"],
    "ExposeHeaders": ["ETag"]
  }
]
```

Apply via AWS Console (Bucket > Permissions > CORS configuration) or CLI:

```bash
aws s3api put-bucket-cors --bucket your-documenso-bucket --cors-configuration file://cors.json
```

Create IAM User
Create an IAM user with programmatic access and attach this policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-documenso-bucket/*"
    }
  ]
}
```

Configure Environment Variables

```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=your-documenso-bucket
NEXT_PRIVATE_UPLOAD_REGION=us-east-1
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

MinIO Setup
MinIO is a self-hosted S3-compatible object storage server.
Deploy MinIO
Using Docker:

```bash
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio_data:/data \
  minio/minio server /data --console-address ":9001"
```

Using Docker Compose with Documenso:
```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data

volumes:
  minio_data:
```

Create a Bucket
Access the MinIO Console at http://localhost:9001 and create a bucket, or use the CLI:
```bash
# Configure an alias for the MinIO server (requires the mc client)
mc alias set myminio http://localhost:9000 minioadmin minioadmin

# Create the bucket
mc mb myminio/documenso
```

Configure Environment Variables
```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=documenso
NEXT_PRIVATE_UPLOAD_ENDPOINT=http://minio:9000
NEXT_PRIVATE_UPLOAD_FORCE_PATH_STYLE=true
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=minioadmin
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=minioadmin
NEXT_PRIVATE_UPLOAD_REGION=us-east-1
```

Set NEXT_PRIVATE_UPLOAD_FORCE_PATH_STYLE=true for MinIO and other S3-compatible services that don't support virtual-hosted bucket URLs.
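The two addressing styles differ only in where the bucket name appears. A minimal sketch, with hypothetical helper functions and example endpoints:

```python
def virtual_hosted_url(bucket: str, key: str,
                       endpoint: str = "s3.us-east-1.amazonaws.com") -> str:
    """Virtual-hosted style: the bucket name is part of the hostname (AWS default)."""
    return f"https://{bucket}.{endpoint}/{key}"


def path_style_url(bucket: str, key: str, endpoint: str = "minio:9000") -> str:
    """Path style: the bucket name is the first path segment. Required by MinIO
    and other services that lack wildcard DNS for bucket subdomains."""
    return f"http://{endpoint}/{bucket}/{key}"


print(virtual_hosted_url("documenso", "documents/abc.pdf"))
# https://documenso.s3.us-east-1.amazonaws.com/documents/abc.pdf
print(path_style_url("documenso", "documents/abc.pdf"))
# http://minio:9000/documenso/documents/abc.pdf
```

Virtual-hosted URLs require DNS resolution for every bucket subdomain, which is why self-hosted services typically need path style.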
Other S3-Compatible Services
Documenso works with any S3-compatible storage service. Configure the endpoint and enable path-style URLs if required.
Cloudflare R2
```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=documenso
NEXT_PRIVATE_UPLOAD_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=your-r2-access-key
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=your-r2-secret-key
NEXT_PRIVATE_UPLOAD_REGION=auto
```

DigitalOcean Spaces
```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=documenso
NEXT_PRIVATE_UPLOAD_ENDPOINT=https://nyc3.digitaloceanspaces.com
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=your-spaces-key
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=your-spaces-secret
NEXT_PRIVATE_UPLOAD_REGION=nyc3
```

Backblaze B2
```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=documenso
NEXT_PRIVATE_UPLOAD_ENDPOINT=https://s3.us-west-004.backblazeb2.com
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=your-b2-key-id
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=your-b2-application-key
NEXT_PRIVATE_UPLOAD_REGION=us-west-004
```

Wasabi
```bash
NEXT_PUBLIC_UPLOAD_TRANSPORT=s3
NEXT_PRIVATE_UPLOAD_BUCKET=documenso
NEXT_PRIVATE_UPLOAD_ENDPOINT=https://s3.us-east-1.wasabisys.com
NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=your-wasabi-key
NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=your-wasabi-secret
NEXT_PRIVATE_UPLOAD_REGION=us-east-1
```

CloudFront CDN (Optional)
Use Amazon CloudFront to serve documents with lower latency and reduced S3 costs. CloudFront integration uses signed URLs for secure access.
Prerequisites
- An S3 bucket configured for Documenso
- A CloudFront distribution with the S3 bucket as origin
- A CloudFront key pair for signing URLs
Create a CloudFront Distribution
- Go to CloudFront in the AWS Console
- Create a distribution with your S3 bucket as the origin
- Configure Origin Access Control (OAC) to restrict direct S3 access
- Set the default cache behavior to allow GET requests
Create a Key Pair
CloudFront signed URLs require a key pair:
- Go to CloudFront > Key management > Public keys
- Create a new public key
- Create a key group containing the public key
- Associate the key group with your distribution
Keep the private key secure - you'll need it for the environment variable.
Configure Environment Variables
```bash
# CloudFront distribution domain (without https://)
NEXT_PRIVATE_UPLOAD_DISTRIBUTION_DOMAIN=d1234567890.cloudfront.net

# CloudFront key pair ID
NEXT_PRIVATE_UPLOAD_DISTRIBUTION_KEY_ID=K1234567890ABC

# Private key contents (PEM format)
NEXT_PRIVATE_UPLOAD_DISTRIBUTION_KEY_CONTENTS="-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
-----END RSA PRIVATE KEY-----"
```

Store the private key securely. Use environment variables or secrets management rather than committing it to version control.
How It Works
When CloudFront is configured:
- Uploads: File uploads still go directly to S3 via presigned URLs.
- Downloads: File downloads use CloudFront signed URLs.
- Caching: CloudFront caches files at edge locations.
- Expiration: Signed URLs expire after 1 hour.
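The signed-URL mechanism can be sketched as follows. This illustrates CloudFront's canned-policy URL format, not Documenso's implementation: the `sign` callback stands in for a real RSA-SHA1 signature over the policy (typically produced with the `cryptography` package and the PEM private key), and the domain and key pair ID reuse the example values above.

```python
import base64
import json
import time


def _b64_cloudfront(data: bytes) -> str:
    # CloudFront's URL-safe base64: '+', '=', '/' become '-', '_', '~'.
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))


def signed_url(url: str, key_pair_id: str, sign, expires_in: int = 3600) -> str:
    """Build a CloudFront signed URL using a canned policy.
    `sign(policy_bytes) -> bytes` must produce an RSA-SHA1 signature
    with the distribution's private key; a stub is used below."""
    expires = int(time.time()) + expires_in
    policy = json.dumps({
        "Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
        }]
    }, separators=(",", ":"))
    signature = _b64_cloudfront(sign(policy.encode()))
    return (f"{url}?Expires={expires}"
            f"&Signature={signature}"
            f"&Key-Pair-Id={key_pair_id}")


# Stub signer for illustration only; real code must RSA-SHA1-sign the policy.
url = signed_url("https://d1234567890.cloudfront.net/documents/abc.pdf",
                 "K1234567890ABC", sign=lambda data: b"not-a-real-signature")
print(url)
```

The default `expires_in` of 3600 seconds matches the 1-hour expiration noted above.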
Migration Between Storage Backends
Documenso does not provide automatic migration between storage backends. Each document's storage location is recorded in the database.
Documents uploaded to one storage backend cannot be automatically migrated to another. Plan your storage strategy before deploying to production.
Manual Migration Process
To migrate existing documents from database to S3 storage:
Export documents
Extract document blobs from the database (e.g. via a script querying DocumentData where type is BYTES_64).
Upload to S3
Upload each exported file to your S3 bucket and note the resulting object keys or paths.
Update DocumentData records
Point each record to the new S3 location by updating DocumentData with the S3 path and setting type to S3_PATH.
This requires custom scripts and database modifications.
For production deployments, we recommend starting with S3 storage from the beginning.
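The three migration steps above amount to one transformation per record. This is an illustrative, in-memory sketch, not a supported migration script; the record shape and the `upload_to_s3` helper are assumptions based on the DocumentData fields described in this section.

```python
import base64


def migrate_record(record: dict, upload_to_s3) -> dict:
    """Convert one DocumentData-like record from database to S3 storage.
    `upload_to_s3(key, data) -> str` is a hypothetical helper that uploads
    the bytes and returns the object key."""
    if record["type"] != "BYTES_64":
        return record  # already on S3; nothing to do
    pdf_bytes = base64.b64decode(record["data"])  # step 1: export the blob
    key = upload_to_s3(f"documents/{record['id']}.pdf", pdf_bytes)  # step 2
    return {**record, "type": "S3_PATH", "data": key}  # step 3: repoint record


# Demo with an in-memory dict standing in for the S3 bucket.
bucket = {}


def fake_upload(key, data):
    bucket[key] = data
    return key


record = {"id": "doc_1", "type": "BYTES_64",
          "data": base64.b64encode(b"%PDF-1.7 ...").decode()}
migrated = migrate_record(record, fake_upload)
print(migrated["type"], migrated["data"])  # S3_PATH documents/doc_1.pdf
```

A real migration would wrap this in a transaction per record so a failed upload never leaves a DocumentData row pointing at a missing object.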
Hybrid Operation
During migration, Documenso can read from both backends. The DocumentData.type field indicates where each document is stored.
- BYTES_64: Stored in the database
- S3_PATH: Stored in S3
New uploads use the configured NEXT_PUBLIC_UPLOAD_TRANSPORT backend.
Storage Sizing
Database Storage Estimates
When using database storage, plan for significant database growth:
| Documents/Month | Avg Size | Monthly Growth | Annual Growth |
|---|---|---|---|
| 100 | 500KB | ~67MB | ~800MB |
| 1,000 | 500KB | ~665MB | ~8GB |
| 10,000 | 500KB | ~6.7GB | ~80GB |
These figures include the base64 encoding overhead of database storage (~33% over the raw file size).
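The ~33% figure follows from base64 emitting 4 output bytes for every 3 input bytes, which a quick check confirms:

```python
import base64
import os

raw = os.urandom(500 * 1024)          # a 500KB "document"
encoded = base64.b64encode(raw)       # how it would be stored in PostgreSQL

# Base64 maps every 3 input bytes to 4 output bytes, so the ratio is ~4/3.
ratio = len(encoded) / len(raw)
print(f"{len(raw)} bytes -> {len(encoded)} bytes (ratio {ratio:.3f})")
```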
S3 Storage Estimates
S3 stores files without encoding overhead:
| Documents/Month | Avg Size | Monthly Growth | Annual Growth |
|---|---|---|---|
| 100 | 500KB | ~50MB | ~600MB |
| 1,000 | 500KB | ~500MB | ~6GB |
| 10,000 | 500KB | ~5GB | ~60GB |
Cost Comparison
For high-volume deployments, S3 is more cost-effective:
| Aspect | Database Storage | S3 Storage |
|---|---|---|
| Storage cost | Database pricing (~$0.10/GB) | S3 pricing (~$0.023/GB) |
| Transfer cost | Database I/O | S3 requests + egress |
| Backup cost | Larger database backups | Separate S3 backups |
| Performance | Degrades with size | Consistent |
Upload Size Limits
Configure the maximum upload size displayed to users:
```bash
NEXT_PUBLIC_DOCUMENT_SIZE_UPLOAD_LIMIT=10
```

This value is in megabytes. The default is 5MB.
This environment variable controls the UI display. Actual limits may also be enforced by your reverse proxy, web server, or S3 configuration.
Ensure your infrastructure supports the configured limit:

- Reverse proxy (e.g. nginx): set client_max_body_size to match or exceed your upload limit.
- S3: objects up to 5GB can be uploaded in a single request; multipart upload is required for larger files.
- Other proxies and hosting platforms may impose their own defaults (for example, a 50MB per-request limit).
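The documented semantics of the limit variable (value in megabytes, default 5) can be expressed as a small helper. This is an illustration of the setting's meaning, not Documenso's actual code:

```python
import os


def max_upload_bytes(default_mb: int = 5) -> int:
    """Resolve the upload limit in bytes from the environment,
    falling back to the documented 5MB default."""
    limit_mb = int(os.environ.get("NEXT_PUBLIC_DOCUMENT_SIZE_UPLOAD_LIMIT",
                                  default_mb))
    return limit_mb * 1024 * 1024


os.environ["NEXT_PUBLIC_DOCUMENT_SIZE_UPLOAD_LIMIT"] = "10"
print(max_upload_bytes())  # 10485760
```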
Troubleshooting
See Also
- Database Configuration - Configure PostgreSQL
- Environment Variables - Complete configuration reference
- Backups - Backup strategies for both storage backends
- Docker Compose - Deploy with MinIO for local storage