
Read-After-Write consistency in S3

Amazon S3 provides strong read-after-write consistency for new objects and overwritten objects. This means that after a successful write of a new object or an overwrite of an existing object, any subsequent read request immediately receives the latest version of the object. Some key points about the consistency model of Amazon S3:

- Strong read-after-write consistency: S3 automatically delivers strong read-after-write consistency without any changes to performance or availability, and without sacrificing regional isolation for applications.
- Consistent listing operations: S3 also provides strong consistency for list operations, so after a write you can immediately list the objects in a bucket and see the changes reflected.
- Durability and availability: The S3 Standard storage class is designed for 99.99% availability, while other storage classes like S3 Standard-IA, S3 Intelligent-Tiering, and S3 Glacier Instant Retrieval are designed for 99.9% availability. All S3 sto...
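The guarantee is easy to observe from the SDK: write an object, then read and list it immediately. A minimal boto3 sketch, assuming a bucket you own (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-example-bucket"       # hypothetical bucket name
KEY = "reports/2024/summary.txt"  # hypothetical key

# Write (or overwrite) an object.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"version 2 of the report")

# Strong read-after-write consistency: this read immediately returns
# the bytes written above, even right after an overwrite.
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
assert body == b"version 2 of the report"

# Listings are strongly consistent too: the new key appears right away.
keys = [o["Key"] for o in s3.list_objects_v2(Bucket=BUCKET, Prefix="reports/")["Contents"]]
assert KEY in keys
```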

Data flow in and out of S3

Here's how data flows in and out of Amazon S3:

Uploading data to S3. You can upload data to S3 using various methods:
- AWS CLI: using the `aws s3 cp` or `aws s3 mv` commands.
- AWS SDK: integrating S3 upload functionality into your applications.
- AWS Management Console: uploading files through the web-based console.
- S3 File Gateway: allowing your on-premises applications to seamlessly store files as objects in S3.

Downloading data from S3. You can download data from S3 using similar methods:
- AWS CLI: using the `aws s3 cp` or `aws s3 mv` commands.
- AWS SDK: integrating S3 download functionality into your applications.
- AWS Management Console: downloading files through the web-based console.
- S3 File Gateway: allowing your on-premises applications to access S3 objects as files.

Monitoring S3 traffic. You can monitor the traffic flow to and from your S3 buckets using:
- Amazon CloudWatch: collecting and analyzing metrics for S3 operations.
- S3 Access...
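Both directions of the SDK path fit in a few lines of boto3; the file, bucket, and key names below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-example-bucket"  # hypothetical bucket name

# Upload: the SDK equivalent of `aws s3 cp local.csv s3://my-example-bucket/incoming/local.csv`
s3.upload_file(Filename="local.csv", Bucket=BUCKET, Key="incoming/local.csv")

# Download: the reverse direction.
s3.download_file(Bucket=BUCKET, Key="incoming/local.csv", Filename="copy-of-local.csv")
```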

How objects are stored in S3

Here's how objects are stored in Amazon S3:

- Buckets and objects: Amazon S3 is an object storage service. You store your data in resources called buckets and objects. A bucket is a container for objects; an object is a file and any metadata that describes the file.
- Object storage: When you store an object in Amazon S3, you assign a unique key to the object. This key is used to retrieve the object later. Keys can be structured to mimic a hierarchical file system, but S3 is fundamentally a key-value store.
- Object size: Individual Amazon S3 objects can range from a minimum of 0 bytes to a maximum of 5 TB. For objects larger than 100 MB, it's recommended to use the multipart upload capability (see the sketch after this list).
- Storage classes: Amazon S3 offers different storage classes to optimize for various use cases, such as S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier.
- Scalability and data types: Amazon S3 is designed to be highly scalable, with support for storing ...
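In boto3 the multipart recommendation is handled by the transfer layer. A minimal sketch with placeholder file and bucket names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multipart above 100 MB, per the recommendation for large objects.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # start multipart at 100 MB
    multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
)

# upload_file transparently performs a multipart upload once the file
# exceeds the threshold; the key looks like a path but is just a string key.
s3.upload_file(
    Filename="backup.tar.gz",          # hypothetical local file
    Bucket="my-example-bucket",        # hypothetical bucket
    Key="backups/2024/backup.tar.gz",
    Config=config,
)
```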

Key features of Amazon S3

Here are some of the key features of Amazon S3:

- Storage classes: Amazon S3 offers different storage classes to optimize for different use cases, such as S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier.
- Storage management: S3 provides features like lifecycle policies, replication, and object versioning to help manage your data throughout its lifecycle (a lifecycle example follows this list).
- Access management and security: S3 supports access control through IAM policies, bucket policies, and ACLs. It also provides encryption options, including server-side encryption and client-side encryption.
- Data processing: You can use S3 as a data source for various AWS services like Amazon Athena, Amazon Redshift, and AWS Glue for data processing and analytics.
- Storage logging and monitoring: S3 provides access logging and metrics to help you monitor and audit your S3 usage.
- Analytics and insights: S3 offers features like S3 Analytics and S3 Inventory to help you analyze and gain insights into you...
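Lifecycle policies can be driven through the SDK. A minimal sketch, assuming a bucket you own and a hypothetical `logs/` prefix:

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under the logs/ prefix to Standard-IA after 30 days
# and expire them after 365 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```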

Difference between a General-purpose and a Directory bucket in Amazon S3

The key differences between a general-purpose S3 bucket and a directory bucket in Amazon S3 are:

- Purpose: General purpose S3 buckets are the standard type of S3 bucket used for storing objects. Directory buckets are a specialized bucket type used with the S3 Express One Zone storage class, designed for low-latency, high-request-rate workloads; they store data in a single Availability Zone.
- Naming conventions: General purpose S3 bucket names must be unique across all AWS accounts and Regions. Directory bucket names must follow a specific naming convention: `<name>--azid--x-s3`, where `<name>` is a unique identifier and `azid` is the Availability Zone ID.
- Inactive state: Directory buckets that have no request activity for at least 90 days can transition to an inactive state, where they become temporarily inaccessible for reads and writes. General purpose S3 buckets do not have this inactive-state behaviour.
- Access and permissions: Access to directory buckets is granted at the bucket level through session-based authorization, rather than through the per-object permissions available on general purpose buckets. General ...
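A hedged boto3 sketch of creating a directory bucket, mainly to show the naming convention in context; the Region, Availability Zone ID, and bucket name are all assumptions, not values from this post:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # assumed Region

AZ_ID = "use1-az4"                   # assumed Availability Zone ID
BUCKET = f"my-data--{AZ_ID}--x-s3"   # directory bucket names end in --azid--x-s3

# Directory buckets are single-AZ and declared as Type "Directory".
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": AZ_ID},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)
```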

Limitations of Amazon S3

Here are the key limitations of Amazon S3:

- S3 Access Grants instance: You can create only 1 S3 Access Grants instance per AWS Region per account.
- S3 Access Grants locations: You can register up to 1,000 S3 Access Grants locations per S3 Access Grants instance.
- Grants: You can create up to 100,000 grants per S3 Access Grants instance.
- Bucket naming: The bucket name you choose must be unique across all existing bucket names in Amazon S3. Each AWS account can have up to 100 buckets at a time by default (this quota can be raised through Service Quotas).
- Object size: The maximum object size that can be uploaded in a single PUT operation is 5 GB. For larger objects, you should use the multipart upload capability (see the sketch after this list).
- Total object size: The total object size can range from 0 bytes to 5 terabytes.
- Firehose delivery to S3: If you encounter "InternalServerError" when delivering data to an S3 bucket, it could be due to high request rates on a single partition in S3. You can optimize the S3 prefix design patterns to mitigate this issue.
- Data ...
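To make the single-PUT limit concrete, a small sketch (assumed file and bucket names) that uses one PUT when the object fits and falls back to multipart otherwise:

```python
import os
import boto3

s3 = boto3.client("s3")

SINGLE_PUT_LIMIT = 5 * 1024**3  # 5 GB: max object size for one PUT

def upload(path: str, bucket: str, key: str) -> None:
    if os.path.getsize(path) <= SINGLE_PUT_LIMIT:
        # Small enough for a single PUT.
        with open(path, "rb") as f:
            s3.put_object(Bucket=bucket, Key=key, Body=f)
    else:
        # upload_file uses multipart upload under the hood,
        # which supports objects up to 5 TB.
        s3.upload_file(path, bucket, key)

upload("data.bin", "my-example-bucket", "big/data.bin")  # hypothetical names
```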

What happens when an EC2 instance hibernates?

Here are the key points about EC2 hibernation:

- EC2 hibernation allows you to pause and resume your running EC2 instances, which can help lower costs and achieve faster startup times.
- When an instance hibernates, it signals the operating system to perform hibernation (suspend-to-disk). The contents of instance memory (RAM) are saved to its Amazon EBS root volume. This allows the instance to resume from the exact state it was in before hibernation.
- When you start your instance, the Amazon EBS root volume is restored to its previous state and the RAM contents are reloaded. Previously attached data volumes are reattached, and the instance retains its instance ID.
- AWS doesn't charge usage for a hibernated instance while it is in the stopped state, but does charge while it is in the stopping state, unlike when you stop an instance without hibernating it.
- Hibernation is supported for certain EC2 instance types running specific operating systems, such as Amazon Linux, Amazon Linux 2, Ubuntu,...
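Hibernation can be triggered through the API as well as the console. A minimal boto3 sketch, assuming a hibernation-enabled instance (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

# Hibernate=True signals the OS to suspend-to-disk; RAM contents are
# written to the EBS root volume before the instance stops.
ec2.stop_instances(InstanceIds=[INSTANCE_ID], Hibernate=True)

# Later, a normal start restores the root volume and reloads RAM.
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```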

EC2 instance hibernation prerequisites

The key prerequisites for EC2 instance hibernation are:

- The instance must be enabled for hibernation at launch; you cannot enable hibernation on an existing instance (see the launch sketch after this list).
- The instance must be running one of the supported operating systems: Amazon Linux, Amazon Linux 2, Ubuntu, or Windows Server; CentOS, Fedora, and Red Hat Enterprise Linux are supported on certain instance types.
- The instance must have less than 150 GB of RAM, except for Windows instances, which are supported up to 16 GB of RAM.
- For Spot Instances, the Spot Instance request type must be `persistent`, and you cannot specify a launch group.
- The `DeviceName` in the block device mapping must match the root device name associated with the AMI. You can use the `Get-EC2Image` command to find the root device name.
- If you have enabled encryption by default in the AWS Region, you can omit the `Encrypted = $true` parameter in the block device mapping.
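Launching with hibernation enabled looks roughly like this boto3 sketch; the AMI ID, instance type, and root device name are assumptions, and the root volume must be encrypted unless encryption by default is already on in the Region:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m5.large",          # must be a hibernation-supported type
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},  # must be set at launch
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # must match the AMI's root device name
            "Ebs": {"Encrypted": True, "VolumeSize": 30},
        }
    ],
)
```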

Features of Amazon Elastic Block Store (EBS)

Here are the key features of Amazon Elastic Block Store (EBS):

- Data availability and durability: EBS volumes are designed for high availability and durability, with different volume types offering 99.8% to 99.999% durability and an annual failure rate of 0.1% to 0.001%. Data is automatically replicated across multiple servers within an Availability Zone to prevent data loss.
- Data archiving: EBS Snapshots Archive provides a low-cost storage tier to archive full, point-in-time copies of EBS snapshots for long-term retention, such as for regulatory and compliance reasons.
- Security features: EBS offers seamless encryption of data volumes, boot volumes, and snapshots, eliminating the need to manage a separate key management infrastructure. You can also use tags and IAM resource-level permissions to enforce security on EBS volumes (see the example after this list).
- Flexible volume types: EBS offers different volume types, such as SSD-backed volumes (gp2, gp3, io1, io2) and HDD-backed volumes (st1, sc1), to cater to a va...
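As an example of the encryption and tagging features, a minimal boto3 sketch for creating an encrypted gp3 volume (the Availability Zone and tag values are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an encrypted 100 GiB gp3 volume; encryption uses the account's
# default KMS key, so no separate key-management infrastructure is needed.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # assumed AZ
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    TagSpecifications=[
        {
            "ResourceType": "volume",
            "Tags": [{"Key": "project", "Value": "demo"}],  # hypothetical tag
        }
    ],
)
print(volume["VolumeId"])
```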

How many EC2 instances can be launched in an AWS account

The number of EC2 instances you can launch in an AWS account depends on the vCPU-based limits set for your account. Here are the key points:

- AWS uses vCPU-based limits for On-Demand EC2 instances, rather than instance-count-based limits. This provides more flexibility in how you utilize your compute resources.
- The vCPU-based limits are set on a per-Region basis for your AWS account. The actual limits can vary based on factors like your AWS account type, usage history, and support plan.
- To check your current vCPU-based limits, you can use the AWS Management Console, AWS CLI, or AWS SDKs. For example, you can use the `describe-account-attributes` CLI command to view your current limits (a boto3 equivalent follows this list).
- If you need to increase your vCPU-based limits, you can request a service limit increase through the AWS Support Center or by contacting AWS Support.
- vCPU-based limits refer to the way AWS manages the limits or quotas for the number of EC2 instances and other compute resources that can b...
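A boto3 sketch for checking limits; `max-instances` is the legacy per-Region attribute, and the Service Quotas code used here is an assumption for the "Running On-Demand Standard instances" quota:

```python
import boto3

ec2 = boto3.client("ec2")

# Legacy per-Region instance limit attribute.
attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
for attr in attrs["AccountAttributes"]:
    for value in attr["AttributeValues"]:
        print(attr["AttributeName"], "=", value["AttributeValue"])

# The vCPU-based quota itself is exposed through Service Quotas;
# the quota code below is assumed, not taken from this post.
quotas = boto3.client("service-quotas")
q = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
print("On-Demand Standard vCPU limit:", q["Quota"]["Value"])
```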

Features of On-demand, Reserved, Spot and Dedicated EC2 instances

The key features of Amazon EC2 On-Demand Instances are:

- Flexible and scalable: On-Demand Instances allow you to scale your compute resources up or down as needed to meet the changing demands of your application. You can launch and terminate instances as required without any long-term commitments.
- Pay-as-you-go pricing: With On-Demand Instances, you pay only for the compute capacity you use, by the second, with no upfront costs or long-term commitments. This makes them well-suited for workloads with unpredictable usage patterns or short-term requirements.
- No upfront commitment: On-Demand Instances do not require any long-term commitments or upfront payments. You can start, stop, or terminate instances as needed, without penalty.
- Suitable for diverse workloads: On-Demand Instances are suitable for a variety of use cases, including web servers, databases, virtual desktops, and other applications with short-term, unpredictable workloads or spikes in traffic.
- Per-second bill...

EC2 instance purchasing options on AWS

The main EC2 instance purchasing options on AWS are:

- On-Demand Instances: These instances have no long-term commitments. You pay the standard hourly rate for the instance type and Region. This option provides the most flexibility, as you can launch and terminate instances as needed.
- Reserved Instances: These instances require a 1-year or 3-year commitment in exchange for a discounted hourly rate. There are two types of Reserved Instances: regional Reserved Instances, which apply the discount across a Region but do not reserve capacity, and zonal Reserved Instances, which provide a capacity reservation within a specific Availability Zone.
- Spot Instances: These are unused EC2 capacity available at a steep discount compared to On-Demand prices. However, Spot Instances can be interrupted on short notice, so they are best suited for fault-tolerant, flexible workloads (a launch sketch follows this list).
- Dedicated Hosts: These are physical servers dedicated for your use. They provide visibility into the underlying hardware a...
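Spot capacity can be requested directly at launch. A minimal boto3 sketch with a placeholder AMI:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a Spot Instance; Spot capacity can be reclaimed by AWS
# with a two-minute interruption notice.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```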

How burstable CPU performance works in T series EC2 instances

Burstable CPU performance refers to the ability of certain Amazon EC2 instance types, such as the T-series instances, to provide a baseline level of CPU performance with the ability to temporarily burst above that baseline when needed. Here's how burstable CPU performance works:

- Baseline performance: Burstable instances are designed to provide a consistent baseline level of CPU performance, which is suitable for many general-purpose workloads that don't require sustained high CPU usage.
- CPU credits: Burstable instances earn "CPU credits" over time when they are idle or underutilized. These CPU credits can be used to burst above the baseline CPU performance when the workload requires it.
- Bursting: When the instance's CPU utilization exceeds the baseline, it uses the accumulated CPU credits to burst above the baseline for a limited period. The amount of time the instance can burst depends on the number of CPU credits it has earned.
- Unlimited mode: Some burs...
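The credit mode can be inspected and changed through the API. A minimal boto3 sketch with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical T-series instance

# Inspect whether the instance is in "standard" or "unlimited" credit mode.
spec = ec2.describe_instance_credit_specifications(InstanceIds=[INSTANCE_ID])
print(spec["InstanceCreditSpecifications"][0]["CpuCredits"])

# Switch to unlimited mode so the instance can keep bursting beyond its
# accrued credits (the extra usage is billed instead of throttled).
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": INSTANCE_ID, "CpuCredits": "unlimited"}
    ]
)
```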

Different types of General Purpose (GP) instances on AWS EC2

The different types of general purpose instances on AWS EC2 are:

- M-series instances: These instances provide a balance of compute, memory, and networking resources, making them suitable for a wide range of workloads, such as web servers, application servers, and small to medium databases.
- T-series instances: These are burstable performance instances that provide a baseline level of CPU performance with the ability to burst above the baseline when needed. They are suitable for workloads that don't require sustained high CPU performance.
- A-series instances: These instances use Arm-based AWS Graviton processors and are designed to provide a balance of compute, memory, and networking resources at a lower cost compared to the M-series instances.
- Mac instances: These instances are designed to run macOS workloads, such as Xcode, iOS, iPadOS, and tvOS app development.

Can we attach an AWS S3 bucket to an EC2 instance?

No, you cannot directly attach an S3 bucket to an EC2 instance. S3 is an object storage service, while EC2 instances use block storage volumes like Amazon EBS. However, you can access and interact with S3 buckets from within an EC2 instance in the following ways:

- Using the AWS CLI: You can install the AWS CLI on your EC2 instance and use commands like `aws s3 cp` or `aws s3 sync` to copy files to and from S3 buckets.
- Using the AWS SDK: You can write code within your EC2 instance to interact with S3 using one of the AWS SDKs, such as the AWS SDK for Python (Boto3) or the AWS SDK for Java.
- Granting IAM permissions: You can attach an IAM role to your EC2 instance that grants the necessary permissions to access the required S3 buckets. This allows your application running on the EC2 instance to interact with S3 without needing to manage credentials (see the sketch after this list).
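With a role attached to the instance, no credentials appear in code at all; boto3 resolves temporary credentials from the instance metadata automatically. A minimal sketch with placeholder bucket and key names:

```python
import boto3

# No access keys in code: on an EC2 instance with an attached IAM role,
# boto3 automatically fetches temporary credentials from instance metadata.
s3 = boto3.client("s3")

s3.download_file("my-example-bucket", "config/app.json", "/tmp/app.json")
s3.upload_file("/var/log/app.log", "my-example-bucket", "logs/app.log")
```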

Difference between Block storage and Object storage

The key differences between block storage and object storage on AWS are:

- Data structure: Block storage, like Amazon EBS, stores data in fixed-size blocks that can be accessed and modified independently. Object storage, like Amazon S3, stores data as objects, which can be of any size or format, with associated metadata.
- Access pattern: Block storage provides low-latency, high-throughput access to data from a single EC2 instance. Object storage is designed for internet-scale, highly durable, and highly available access to data.
- Use cases: Block storage is suitable for applications that require persistent data storage, such as databases and file systems. Object storage is suitable for a wide range of use cases, such as backup, archiving, web hosting, and big data analytics.
- Scalability: Block storage is typically scaled by adding more volumes or increasing the size of existing volumes. Object storage is designed to scale automatically to handle internet-scale data volumes.
- Durability ...

Difference between AWS Block, Object and File storage

The main differences between block, object, and file storage on AWS are:

- Block storage: Provides low-latency, high-throughput access to data from a single EC2 instance. Data is stored in fixed-size blocks, which can be accessed and modified independently. Examples include Amazon Elastic Block Store (EBS) volumes. Suitable for applications that require persistent data storage, such as databases and file systems.
- Object storage: Designed for internet-scale, highly durable, and highly available object storage. Data is stored as objects, which can be of any size or format, with associated metadata. Examples include Amazon S3 (Simple Storage Service). Suitable for a wide range of use cases, such as backup, archiving, web hosting, and big data analytics.
- File storage: Provides a simple, serverless, and scalable file system for AWS compute services. Data is organized into a hierarchical file system structure, with directories and files. Examples include Amazon Elastic File System (EFS) and...

AWS Storage types

The main types of storage available on AWS are:

- Object storage: Amazon S3 (Simple Storage Service) is the primary object storage service on AWS. It is designed for internet-scale, highly durable, and highly available object storage. S3 is suitable for many use cases, including backup, archiving, web hosting, and big data analytics.
- Block storage: Amazon EBS (Elastic Block Store) provides block-level storage volumes that can be attached to EC2 instances. EBS is designed for low-latency, high-throughput access from a single EC2 instance. EBS volumes are suitable for workloads that require persistent data storage, such as databases, file systems, and applications.
- File storage: Amazon EFS (Elastic File System) provides a simple, serverless, and scalable file system for AWS compute services. Amazon FSx for Lustre, Amazon FSx for NetApp ONTAP, and Amazon FSx for Windows File Server are other file storage services offered by AWS. These file storage services are suitable for applic...

Types of EC2 instances and comparison

There are several types of Amazon EC2 instances, each optimized for different workloads:

- General purpose instances: Provide a balance of computing, memory, and networking resources. Suitable for a wide range of applications, such as web servers, application servers, and small to medium databases. Examples include the M, T, and A instance families.
- Compute optimized instances: Provide high-performance computing capabilities. Suitable for compute-intensive applications, such as batch processing, media transcoding, and high-performance web servers. Examples include the C and HPC instance families.
- Memory optimized instances: Provide large amounts of memory for memory-intensive applications. Suitable for in-memory databases, distributed web-scale in-memory caches, and big data analytics. Examples include the R, X, and U instance families.
- Accelerated computing instances: Provide hardware accelerators, such as GPUs or FPGAs, for specialized workloads. Suitable for machine learning, ...

Differences between Amazon EFS and FSx

The main differences between Amazon EFS (Elastic File System) and Amazon FSx are:

- Storage technology: EFS is a fully managed NFS file system, while FSx offers different file system options like FSx for Windows File Server, FSx for NetApp ONTAP, FSx for OpenZFS, and FSx for Lustre.
- Target workloads: EFS is well-suited for Linux-based applications that require a simple, scalable, and fully managed NFS file system. FSx for Windows File Server is designed for Windows-based applications that require native Windows file system compatibility and features. FSx for NetApp ONTAP, OpenZFS, and Lustre cater to specific high-performance, data-intensive workloads.
- Protocols and accessibility: EFS supports the NFS protocol and can be accessed from Linux, macOS, and Windows instances. FSx file systems support various protocols like NFS, SMB, and iSCSI, depending on the specific FSx offering.
- Performance and scalability: EFS provides consistently low latencies and can scale on demand to petab...

Differences between Amazon EBS and EFS

The main differences between Amazon EBS (Elastic Block Store) and Amazon EFS (Elastic File System) are:

- Storage type: EBS provides block-level storage volumes that can be attached to EC2 instances. EFS is a file storage service that provides a file system interface, allowing multiple EC2 instances to access the same file system concurrently.
- Access patterns: EBS volumes are designed for low-latency, high-throughput access from a single EC2 instance. EFS is designed for high-throughput, low-latency access from multiple EC2 instances simultaneously.
- Durability and availability: EBS volumes are designed for 99.999% availability within an Availability Zone. EFS is designed for 99.99% availability across multiple Availability Zones.
- Scalability: EBS volumes can be scaled up or down in size and performance as needed. EFS file systems can automatically scale up to petabytes of data without disrupting applications.
- Use cases: EBS is suitable for workloads that require low-latency, h...

Differences between EBS and S3

The main differences between Amazon EBS (Elastic Block Store) and Amazon S3 (Simple Storage Service) are:

- Storage type: EBS provides block-level storage volumes that can be attached to EC2 instances. S3 is an object storage service, where data is stored as objects in buckets.
- Access patterns: EBS volumes are designed for low-latency, high-throughput access from a single EC2 instance. S3 is designed for internet-scale, highly durable, and highly available object storage that can be accessed from anywhere.
- Durability and availability: EBS volumes are designed for 99.999% availability within an Availability Zone. S3 is designed for 99.999999999% (11 9's) durability and 99.99% availability across multiple Availability Zones.
- Use cases: EBS is suitable for workloads that require low-latency, high-throughput access to data, such as databases, file systems, and applications that need to persist data. S3 is suitable for a wide range of use cases, including backup and archiving, conte...

Differences between an IAM User and Role

The main differences between an IAM User and an IAM Role are:

- Identity association: An IAM User is uniquely associated with a single person or application. An IAM Role is not associated with a specific person, but can be assumed by anyone who needs it.
- Credentials: IAM Users have long-term credentials like a password or access keys. IAM Roles do not have long-term credentials; instead, they provide temporary security credentials when the role is assumed (see the sketch after this list).
- Use cases: IAM Users are typically used for individual users or applications that require long-term access to AWS resources. IAM Roles are used for scenarios where you want to grant temporary access to AWS resources, such as when an EC2 instance needs to access other AWS services.
- Permissions: Both IAM Users and IAM Roles can be granted specific permissions using IAM policies. The permissions for an IAM Role are defined by the policies attached to the role, and can be assumed by any entity that has been granted permission to do so....
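The credentials difference is easy to see in code: assuming a role returns short-lived keys rather than the long-term keys attached to a user. A minimal boto3 sketch, with the role ARN as a placeholder:

```python
import boto3

sts = boto3.client("sts")

# Assuming a role yields temporary security credentials
# (access key, secret key, session token) with an expiry.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-role",  # hypothetical ARN
    RoleSessionName="demo-session",
)
creds = resp["Credentials"]
print("expires at:", creds["Expiration"])

# Use the temporary credentials for subsequent calls.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```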