Uploading files to AWS S3 with Presigned URLs (Extended Version)

Introduction

In my previous post, I showed you how to upload files to AWS S3 using presigned URLs. Today, we’re taking it a step further: creating a permanent file URL that doesn't expire. This is perfect for when you need to save a link to your database and use it anytime.

There are two main ways to handle this:

1. Using only AWS S3 for storage and access

Pros:

  • Easy to set up; everything is managed within S3.
  • Great for internal apps or dev environments where high security and performance aren't a priority yet.

Cons: 

  • Inconsistent Speeds: Every request travels to the bucket's single region, so a user in London accessing a bucket in Singapore sees high latency.
  • Security Risks: If your S3 link is leaked, someone could spam requests and run up your bill, and you can't put AWS WAF in front of a raw bucket URL to stop them.

2. Using S3 combined with CloudFront (CDN)

Pros:

  • Better User Experience: Website images load almost instantly because they are cached on servers worldwide.
  • Lower Costs: CloudFront handles repeated requests so your S3 bucket doesn't have to work as hard.
  • High Security: You can integrate AWS WAF to block bots and hackers.

Cons: 

  • Update Lag: If you change a file in S3, CloudFront can keep serving the old cached version until its TTL expires; to force a refresh you'll need to run an "Invalidation" (e.g. with aws cloudfront create-invalidation).
  • More Complex Setup: Requires setting up Origin Access Control (OAC) and Certificates.


Implementation Details

First, create lib/aws-s3-public-stack.ts to set up your S3 resource:

import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import { Construct } from "constructs";

export class AwsS3PublicStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const bucket = new s3.Bucket(this, "S3PublicBucket", {
      bucketName: "bucket-public-cb2a91d6",
      publicReadAccess: true,
      blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicAcls: false,
        ignorePublicAcls: false,
        blockPublicPolicy: false,
        restrictPublicBuckets: false,
      }),

      cors: [
        {
          allowedMethods: [s3.HttpMethods.PUT],
          allowedOrigins: ["*"],
          allowedHeaders: ["*"],
        },
      ],

      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
    });
    new cdk.CfnOutput(this, "BucketName", { value: bucket.bucketName });
  }
}

Explanation:

  • publicReadAccess attaches a bucket policy granting s3:GetObject to everyone; the four blockPublicAccess flags must be disabled for that policy to take effect, so anyone can view files directly via the bucket URL.
  • CORS: If you want to limit which domains can upload files, change the allowedOrigins value to your specific domain.
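For example, here is a sketch of a locked-down CORS block for the bucket props above (the domain is a hypothetical placeholder, replace it with your own):

```typescript
// Inside the s3.Bucket props: only this one origin may upload.
// "https://app.example.com" is a hypothetical domain.
cors: [
  {
    allowedMethods: [s3.HttpMethods.PUT],
    allowedOrigins: ["https://app.example.com"],
    allowedHeaders: ["*"],
  },
],
```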


Next, create the CloudFront version in lib/aws-s3-cloudfront-stack.ts:

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

export class AwsS3BucketCloudfrontStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const myBucket = new s3.Bucket(this, 'S3BucketCloudfront', {
      bucketName: "bucket-public-cloudfront-cb2a91d6",
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      cors: [{
        allowedMethods: [s3.HttpMethods.PUT, s3.HttpMethods.GET],
        allowedOrigins: ['*'],
        allowedHeaders: ['*'],
      }],
    });

    const distribution = new cloudfront.Distribution(this, 'MyDistribution', {
      defaultBehavior: {
        origin: origins.S3BucketOrigin.withOriginAccessControl(myBucket),
        viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
      },
    });

    new cdk.CfnOutput(this, 'DistributionDomain', { value: distribution.distributionDomainName });
    new cdk.CfnOutput(this, 'BucketName', { value: myBucket.bucketName });
  }
}

Explanation:

  • Distribution: This connects CloudFront to S3. Since it's an internal connection using OAC, the bucket doesn't need to be public, making it much more secure.
  • CachePolicy: CACHING_OPTIMIZED caches objects for one day by default and enables Gzip/Brotli compression, making your files smaller and faster for browsers to download.
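If those defaults don't fit, you can define your own policy instead of CACHING_OPTIMIZED. A minimal sketch, where the construct id and TTL values are illustrative assumptions:

```typescript
// A custom cache policy for the distribution's defaultBehavior.
const imageCachePolicy = new cloudfront.CachePolicy(this, "ImageCachePolicy", {
  defaultTtl: cdk.Duration.days(7),  // serve cached copies for a week
  minTtl: cdk.Duration.minutes(1),
  maxTtl: cdk.Duration.days(30),
  enableAcceptEncodingGzip: true,    // keep compression enabled
  enableAcceptEncodingBrotli: true,
});
// then in the distribution: cachePolicy: imageCachePolicy
```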


Update your bin/aws-cdk.ts file to include both:

#!/usr/bin/env node
import * as cdk from "aws-cdk-lib/core";
import { AwsS3PublicStack } from "../lib/aws-s3-public-stack";
import { AwsS3BucketCloudfrontStack } from "../lib/aws-s3-cloudfront-stack";

const app = new cdk.App();
new AwsS3PublicStack(app, "AwsS3PublicStack");
new AwsS3BucketCloudfrontStack(app, "AwsS3BucketCloudfrontStack");


Once this is ready, deploy both stacks to AWS with cdk deploy --all.


Connecting to NestJS

Now, update your s3.service.ts file. Note the two options for the host variable; you can switch between them to test the direct S3 link versus the CloudFront link.

import {Injectable} from '@nestjs/common'
import {getSignedUrl} from '@aws-sdk/s3-request-presigner'
import {ConfigService} from '@nestjs/config'
import {
  S3Client,
  PutObjectCommand,
  ListObjectsV2Command,
  GetObjectCommand,
} from '@aws-sdk/client-s3'

@Injectable()
export class S3Service {
  private s3Client: S3Client
  private bucket = ''
  private host = ''

  constructor(private configService: ConfigService) {
    const region = this.configService.get<string>('REGION') || ''
    const accessKeyId = this.configService.get<string>('ACCESS_KEY_ID') || ''
    const secretAccessKey =
      this.configService.get<string>('SECRET_ACCESS_KEY') || ''
    this.bucket = this.configService.get<string>('BUCKET') || ''
   
    this.host = `https://${this.bucket}.s3.${region}.amazonaws.com`
    // this.host = this.configService.get<string>('CLOUDFRONT') || ''

    this.s3Client = new S3Client({
      region,
      credentials: {
        accessKeyId,
        secretAccessKey,
      },
    })
  }

  async createPresignedUrl(fileName: string) {
    const fileKey = `uploads/${Date.now()}-${fileName}`
    const command = new PutObjectCommand({
      Bucket: this.bucket,
      Key: fileKey,
    })
    const uploadUrl = await getSignedUrl(this.s3Client, command, {
      expiresIn: 60,
    })
    const publicUrl = `${this.host}/${fileKey}`
    return {
      uploadUrl,
      publicUrl,
    }
  }
}
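One caveat with the key above: fileName goes into the key verbatim, so spaces or special characters end up in publicUrl. A minimal sketch of a safer key builder (a hypothetical helper, not part of the original service):

```typescript
// Replace runs of characters outside [a-zA-Z0-9._-] with a single dash,
// so the resulting public URL needs no extra encoding.
function buildFileKey(fileName: string, now: number = Date.now()): string {
  const safe = fileName.replace(/[^a-zA-Z0-9._-]+/g, "-");
  return `uploads/${now}-${safe}`;
}

console.log(buildFileKey("my photo (1).png", 1700000000000));
// → uploads/1700000000000-my-photo-1-.png
```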


Finally, make sure your .env file is updated with the correct credentials and URLs:

REGION = REGION
ACCESS_KEY_ID = ACCESS_KEY_ID
SECRET_ACCESS_KEY = SECRET_ACCESS_KEY
BUCKET = BUCKET
CLOUDFRONT = CLOUDFRONT


Now you can test your API using Postman.
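If you'd rather test from code, here is a hedged sketch of the client side. The endpoint path /s3/presigned-url and the localhost base URL are assumptions; match them to your actual controller:

```typescript
type PresignResponse = { uploadUrl: string; publicUrl: string };

// Ask the backend for a presigned upload URL, PUT the bytes straight to S3,
// and return the permanent URL to store in your database.
// fetchFn is injectable so the flow can be exercised without a network.
async function uploadViaPresignedUrl(
  file: Blob,
  fileName: string,
  fetchFn: typeof fetch = fetch,
  apiBase = "http://localhost:3000", // hypothetical NestJS base URL
): Promise<string> {
  const res = await fetchFn(
    `${apiBase}/s3/presigned-url?fileName=${encodeURIComponent(fileName)}`,
  );
  const { uploadUrl, publicUrl } = (await res.json()) as PresignResponse;

  const put = await fetchFn(uploadUrl, { method: "PUT", body: file });
  if (!put.ok) throw new Error(`Upload failed: ${put.status}`);

  return publicUrl; // never expires, safe to persist
}
```

Because the S3 PUT carries no credentials, this runs safely in a browser; only the presigned URL grants (short-lived) write access.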



After uploading, you can access the file directly as follows:



Your resources should now be live on AWS.



Feel free to leave your thoughts in the comments! Don't forget to like, share, and follow for more tech content.

