AWS S3 Storage Classes: Everything You Need to Know


In this post, I will help you understand AWS S3 storage classes. Going through this will help you choose the right storage class for your use case.

AWS S3 allows you to associate a storage class with each object to optimize cost and performance.

  • S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA (see the sketch after this list).
  • You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
  • All storage classes support SSL for data in transit and encryption of data at rest.
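As an illustration of setting the storage class per object, here is a minimal boto3 sketch; the bucket name and object key are hypothetical, and omitting StorageClass would store the object in S3 Standard.

    import boto3

    s3 = boto3.client("s3")

    # Upload an object and choose its storage class explicitly.
    s3.put_object(
        Bucket="my-example-bucket",   # hypothetical bucket name
        Key="reports/summary.csv",    # hypothetical object key
        Body=b"id,total\n1,42\n",
        StorageClass="STANDARD_IA",   # e.g. STANDARD, INTELLIGENT_TIERING, ONEZONE_IA, GLACIER, DEEP_ARCHIVE
    )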


List of Available AWS S3 Storage Classes

  • S3 Standard
  • Intelligent Tiering
  • Standard-IA
  • One Zone-IA
  • Glacier
  • Glacier Deep Archive
  • Reduced Redundancy Storage (Deprecated)

Let’s look at each of them one by one.

S3 Standard

S3 Standard is the default storage class when you upload an object, and it’s designed for frequently accessed data.

  • Offers low latency (in milliseconds) and high throughput
  • Provides durability of 99.999999999% (11 9’s) by storing objects across multiple Availability Zones
  • Can survive the failure of an entire Availability Zone and is designed to sustain the concurrent loss of data in two facilities (data centers)
  • Designed for 99.99% availability over a given year
  • No minimum storage duration
  • Suitable for general-purpose use cases such as website hosting, big data analytics, and mobile and web application storage (see the sketch after this list)
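Because S3 Standard is the default, the hedged sketch below uploads without a StorageClass argument and then reads the class back from a listing; the bucket and key are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # No StorageClass argument, so the object is stored in S3 Standard by default.
    s3.put_object(
        Bucket="my-example-bucket",   # hypothetical bucket
        Key="site/index.html",        # hypothetical key
        Body=b"<html>hello</html>",
    )

    # list_objects_v2 reports each object's storage class (STANDARD here).
    resp = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="site/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["StorageClass"])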

Intelligent-Tiering

S3 Intelligent-Tiering is designed to optimize cost by moving objects between four access tiers when access patterns change.

  • Works by storing objects in four access tiers optimized for:
    • Frequent access
    • Infrequent access
    • Archive access (optional)
    • Deep archive access (optional)
  • Uploaded objects start in the frequent access tier. If an object is not accessed for 30 days, it is moved to the infrequent access tier
  • If you have opted in to the archive tiers, an object that is not accessed for 90 days is moved to the archive tier and, after 180 days, into the deep archive tier (see the sketch after this list)
  • Suitable for data sets with unknown storage access patterns like new applications, or unpredictable access patterns like data lakes. 
  • Suitable for objects larger than 128 KB
  • Minimum storage duration is 30 days
  • Designed for durability of 99.999999999% of objects across multiple Availability Zones
  • Designed for 99.9% availability over a given year
  • A small monthly monitoring and automation fee applies per object
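The optional archive tiers must be enabled on the bucket before Intelligent-Tiering will use them. A minimal boto3 sketch, assuming a hypothetical bucket, that opts in to both archive tiers and uploads an object directly into Intelligent-Tiering:

    import boto3

    s3 = boto3.client("s3")

    # Opt the bucket in to the optional archive tiers:
    # 90 days without access -> Archive Access, 180 days -> Deep Archive Access.
    s3.put_bucket_intelligent_tiering_configuration(
        Bucket="my-example-bucket",          # hypothetical bucket
        Id="archive-tiers",
        IntelligentTieringConfiguration={
            "Id": "archive-tiers",
            "Status": "Enabled",
            "Tierings": [
                {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
                {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
            ],
        },
    )

    # Store a new object directly in Intelligent-Tiering.
    s3.put_object(
        Bucket="my-example-bucket",
        Key="datalake/events.json",          # hypothetical key
        Body=b'{"event": "example"}',
        StorageClass="INTELLIGENT_TIERING",
    )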

S3 Standard-IA

S3 Standard-IA is designed for data that is long-lived and infrequently accessed but requires rapid access when needed.

  • Provides millisecond latency similar to S3 Standard storage class
  • Suitable for objects larger than 128 KB that you plan to store for at least 30 days
  • Provides greater availability and resiliency than the S3 One Zone-IA class
  • Designed for durability of 99.999999999% of objects across multiple Availability Zones
  • Can survive one entire availability zone failure
  • Lower cost compared to S3 Standard
  • Designed for 99.9% availability over a given year
  • Works best for use cases like backup storage or disaster recovery storage (see the lifecycle sketch after this list)
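A common way to land data in Standard-IA is a lifecycle rule that transitions objects 30 days after creation. A hedged sketch with a hypothetical bucket and prefix:

    import boto3

    s3 = boto3.client("s3")

    # Transition objects under backups/ to Standard-IA 30 days after creation.
    # Note: this call replaces the bucket's existing lifecycle configuration.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-example-bucket",                 # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "to-standard-ia",
                    "Filter": {"Prefix": "backups/"},   # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                    ],
                }
            ]
        },
    )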

S3 One Zone-IA

Similar to S3 Standard-IA, S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. However, as the name suggests, data is stored in a single Availability Zone.

  • S3 One Zone-IA stores data in a single AZ and hence costs 20% less than S3 Standard-IA
  • Because data lives in a single Availability Zone, it will be lost if that Availability Zone fails
  • Suitable for objects larger than 128 KB that you plan to store for at least 30 days
  • Designed for 99.5% availability over a given year
  • Designed for durability of 99.999999999% of objects within a single Availability Zone; data is lost if that AZ is destroyed
  • Works best for storing a secondary backup copy or data you can easily recreate (see the copy sketch after this list)
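One way to create that secondary copy is to copy an existing object into One Zone-IA in another bucket. A minimal sketch; both bucket names and the key are hypothetical:

    import boto3

    s3 = boto3.client("s3")

    # Copy a primary backup into a second bucket, storing the copy in One Zone-IA.
    s3.copy_object(
        CopySource={"Bucket": "my-primary-bucket", "Key": "backups/db.dump"},  # hypothetical source
        Bucket="my-secondary-bucket",                                          # hypothetical destination
        Key="backups/db.dump",
        StorageClass="ONEZONE_IA",
    )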

Glacier

Designed as low-cost object storage for archiving or backup, where data is infrequently accessed and retrieval times of minutes to hours are acceptable.

  • Data can be uploaded directly to Glacier through the S3 PUT API or moved from any other storage class through a lifecycle policy
  • Configurable retrieval times, from minutes to hours:
    • Expedited – 1 to 5 minutes
    • Standard – 3 to 5 hours
    • Bulk – 5 to 12 hours
  • Minimum storage duration is 90 days, and it is suitable for objects larger than 40 KB
  • Each item in Glacier is called an archive and is stored in a vault rather than a bucket
  • Objects need to be restored before they can be accessed or opened, and they remain available for the number of days you specify in the restore request (see the restore sketch after this list)
  • Provides durability of 99.999999999%(11 9’s) by storing objects across multiple Availability Zones
  • Can survive failure of one entire availability zone
  • Designed for 99.99% availability over a given year
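A restore request might look like the hedged sketch below, using the Standard retrieval tier and keeping the restored copy available for 7 days; the bucket and key are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to restore a Glacier object so it can be downloaded.
    # The restored copy stays available for the requested number of days.
    s3.restore_object(
        Bucket="my-example-bucket",        # hypothetical bucket
        Key="archive/2019/logs.tar.gz",    # hypothetical key
        RestoreRequest={
            "Days": 7,
            "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited" / "Bulk"
        },
    )

    # Once requested, head_object exposes restore progress in the Restore field.
    resp = s3.head_object(Bucket="my-example-bucket", Key="archive/2019/logs.tar.gz")
    print(resp.get("Restore"))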

Glacier Deep Archive

The lowest-cost storage class, designed for long-term data retention and digital preservation, such as data kept for 7-10 years.

  • Data may be accessed once or twice a year
  • Data can be restored within 12 hours
  • Two retrieval options:
    • Standard – within 12 hours
    • Bulk – within 48 hours
  • Can be used as an alternative to magnetic tape systems
  • Minimum storage duration is 180 days, and it is suitable for objects larger than 40 KB
  • You can upload directly to S3 Glacier Deep Archive using the S3 PUT API or use a lifecycle policy to move data from a different storage class (see the sketch after this list)
  • Provides durability of 99.999999999%(11 9’s) by storing objects across multiple Availability Zones
  • Can survive failure of one entire availability zone
  • Designed for 99.99% availability over a given year
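For completeness, a hedged sketch of uploading straight into Deep Archive and later requesting a Bulk restore (Expedited retrieval is not available for Deep Archive); the bucket and key are hypothetical.

    import boto3

    s3 = boto3.client("s3")

    # Upload directly into Glacier Deep Archive.
    s3.put_object(
        Bucket="my-example-bucket",          # hypothetical bucket
        Key="compliance/2015-records.zip",   # hypothetical key
        Body=b"example archive payload",
        StorageClass="DEEP_ARCHIVE",
    )

    # Later, request a Bulk restore (typically completes within 48 hours).
    s3.restore_object(
        Bucket="my-example-bucket",
        Key="compliance/2015-records.zip",
        RestoreRequest={"Days": 2, "GlacierJobParameters": {"Tier": "Bulk"}},
    )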

Reduced Redundancy Storage (Deprecated)

You can still see this option in the storage class list when uploading an object, but AWS recommends not using it.

  • It was designed for noncritical, reproducible data stored at lower levels of redundancy than the S3 Standard storage class, which reduced storage costs
  • However, according to AWS, S3 Standard is now more cost-effective and should be used for those use cases instead

Conclusion

I am sure you understand AWS S3 storage classes better now. To recap, let’s summarize what we learnt.

  • AWS S3 offers a wide range of storage classes for various use cases
  • S3 Standard is highly durable, available, general-purpose storage for frequently accessed data
  • Intelligent-Tiering is best for data with unknown or changing access patterns
  • S3 Standard-IA and S3 One Zone-IA are for long-lived, infrequently accessed data
  • Glacier and Glacier Deep Archive are for archival and preservation
  • All storage classes provide 99.999999999% (11 9’s) durability
  • S3 Standard, Glacier, and Glacier Deep Archive provide 99.99% availability
  • S3 Intelligent-Tiering and S3 Standard-IA provide 99.9% availability, whereas S3 One Zone-IA provides the lowest of all, i.e. 99.5% availability
  • The minimum object sizes of 40 KB and 128 KB are about billing: if the minimum billable size is 40 KB, a 1 KB file is still charged as 40 KB, so 100 files of 1 KB each are charged as 100 × 40 KB = 4,000 KB
  • All storage classes store data in three or more Availability Zones except S3 One Zone-IA
  • All storage classes can survive the failure of an entire Availability Zone except S3 One Zone-IA
  • All S3 storage classes support SSL encryption of data in transit and encryption of data at rest

I hope you found this post helpful.
