
Published Articles (117)

AVR posted:
3 years ago
Let's learn about SNS in AWS.

SNS (Simple Notification Service):
=========================
Notifications are nothing but alerts.

When Auto Scaling launches a new machine, we need a notification.

With a Route 53 failover routing policy, when one region goes down, we need a notification.

We get notifications quickly with SNS.

Subscribers are nothing but the users/AWS Admins that are responsible for managing infrastructure.

We create a group (also known as a topic).
We add the email addresses of subscribers to the group.
The process of adding subscribers to a topic is known as a subscription.

We can receive notifications through:
1) Email (plain text)
2) Email (JSON)
3) Mobile SMS
4) HTTP/HTTPS endpoints

We need to create Topics/Subscriptions to receive notifications.

Each subscriber must log in and confirm the subscription; this is mandatory.
The status changes from "Pending confirmation" to "Confirmed" once the email validation succeeds.

We use SNS mainly with Auto Scaling, where such alerts are effectively mandatory.
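
As a hedged sketch, the same topic/subscription flow via the AWS CLI (the topic name, account ID, and email address below are placeholders):

# Create a topic (the "group")
aws sns create-topic --name mytopic
# Subscribe an email address; the subscriber must confirm via the link in the email
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:mytopic --protocol email --notification-endpoint admin@example.com
# Publish a test notification to all confirmed subscribers
aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:mytopic --message "Test alert"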
Posted in: AWS | ID: Q58 | August 08, 2021, 11:54 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about SQS in AWS.

SQS(Simple Queue Service):
======================
SQS is a message queue used to store messages while they wait for a computer to process them.

A queue is a temporary repository for messages that are awaiting processing.

SQS acts as a buffer (a backup mechanism) in front of the application servers.

The actual request flow is as follows:

Route 53 (www.domain.com)
        |
   Load Balancer (LB)
        |
  WS1  WS2  WS3    (Web Servers)
        |
       SQS
        |
    AS1  AS2       (Application Servers)
        |
       DBS         (Database Server)

The user request to www.domain.com goes to the load balancer.
The web servers (WS1, WS2, WS3) are attached to the load balancer.
From a web server, the request goes to an application server (AS1, AS2).
The actual application runs on the application servers.
The web servers just serve web pages to the users.
Finally, the application servers communicate with the DB server.

SQS is the communication channel between the web servers and the application servers.

What is the advantage of SQS?
Requests don't get lost.

All the requests from the web servers are stored in SQS in queue format.

The application servers pull the requests from SQS.

SQS controls the flow of the requests.

Once a request is pulled from SQS, it is not deleted immediately; it becomes invisible for 30 seconds (the default visibility timeout).

Within those 30 seconds, the application server should process the request.

If the application server processes the request within 30 seconds, the request is deleted from SQS.

If the application server is unable to process the request within 30 seconds, the request becomes visible again in SQS.
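
A hedged sketch of this cycle with the AWS CLI (queue name, URL, and message body are placeholders; the queue URL is returned by the create call):

# Create a queue (the default visibility timeout is 30 seconds)
aws sqs create-queue --queue-name myqueue
# Web server side: push a request onto the queue
aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/myqueue --message-body "order-123"
# Application server side: pull a request (it becomes invisible for 30 seconds)
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/myqueue
# After successful processing, delete it using the receipt handle returned by receive-message
aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/myqueue --receipt-handle <receipt-handle>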
Posted in: AWS | ID: Q57 | August 08, 2021, 11:49 AM | 0 Replies
AVR posted:
3 years ago
Let's understand AWS RDS.

RDS stands for Relational Database Service.

RDS is used by the clients to host their databases.

Relational databases are stored in the form of rows and columns in a table format.

Below are the most popular relational database engines:
SQL Server
Oracle
MySQL
PostgreSQL
Aurora
MariaDB


The main advantages of RDS are automated backups, the Multi-AZ feature, and read replicas.


RDS Back-ups:
---------------------
We have two types of backups:
1) Automated backups
2) DB snapshots (manual backups)


Automated Back-ups:
----------------------------
A) Automated backups allow us to recover our database to any point in time within a "retention period".
The retention period can be between 1 and 35 days.

B) Automated backups take a full daily snapshot and also store transaction logs throughout the day.

When we perform a recovery, AWS first restores the most recent daily backup and then applies the transaction logs relevant to that day.

Automated backups are enabled by default.

When we delete the original RDS instance, the automated backups are deleted along with it, which is a drawback.

DB Snapshots:(This is a manual process)
--------------------------------------------------------
DB snapshots are taken manually (they are user-initiated).

Unlike automated backups, they are retained even after we delete the original RDS instance.

DBAs take DB snapshots whenever they apply patches, to ensure they have a working backup of the DB.

Restoring Back-ups:
---------------------------
Whenever we restore either an Automatic Backup or a manual Snapshot, the restored version of the database will be a new RDS instance with a new DNS endpoint.



Multi-AZ:(Availability Zone):
------------------------------------
Multi-AZ allows us to have an exact copy of our production database in another AZ.

AWS handles the replication

So whenever the PROD database is written to, the write is automatically synchronized to the standby database.

In the event of planned database maintenance, a DB instance failure, or an AZ failure, Amazon RDS automatically
fails over to the standby so that database operations can resume quickly without administrative intervention.

Both DB servers share the same DNS endpoint, so the failover is transparent to the application.


Read Replica:
-------------------

We use this for better performance when many users are reading/archiving data from the database.

A replica is nothing but a duplicate.

Read replicas allow us to have a read-only copy of our PROD Database.

This is achieved by using Asynchronous replication from the primary RDS instance to the read replica.

We use read replicas primarily for very read-heavy database workloads.

We can have up to 5 RR copies of any database.

We can have read replicas of read replicas.

Each Read Replica will have its own DNS endpoint.

For Read Operations - We use the Select command.
For Write operations - We use Insert/Update/Delete commands.
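
As a hedged sketch, the snapshot and read-replica operations above via the AWS CLI (instance and snapshot identifiers are placeholders):

# Create a read replica (asynchronous replication, with its own DNS endpoint)
aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier mydb
# Take a manual DB snapshot before patching
aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-snap
# Restoring a snapshot creates a NEW instance with a new DNS endpoint
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier mydb-restored --db-snapshot-identifier mydb-snap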
Posted in: AWS | ID: Q56 | August 06, 2021, 10:48 AM | 0 Replies
AVR posted:
3 years ago
Let's understand Route 53 in AWS.

Route 53 is AWS's Domain Name System (DNS) service.

The name comes from port 53, the standard DNS port.

Route 53 is a highly reliable and cost-effective way to route end users to Internet applications by translating names.

Route 53 is responsible for converting IP to name and name to IP.


Advantages of Route53:
==================
1) DNS converts a human-friendly domain name into an IP (Internet Protocol) address and vice versa.

Computers use IP addresses to identify each other on the network.

We have two types of IPs (IPv4 & IPv6).

2) Route 53 helps protect against regional failures.

If one region fails, end users still have access through another region.

Traffic gets diverted to the standby region; this is also known as Disaster Recovery.




Route53 Routing policies:
====================
Simple
Weighted
Latency
Failover
Geolocation




Simple Routing Policy:
------------------------------
This is the default routing policy.

This is most commonly used when a single region performs a given function for our domain.


Weighted Routing Policy:
---------------------------------
Weighted routing policies let you split your traffic based on the weights you assign.

Example1:
10% of traffic going to US-EAST-1
90% of traffic going to EU-WEST-1

Example2:
20% of traffic going to US-EAST-1
80% of traffic going to EU-WEST-1
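
A hedged sketch of Example1 as a Route 53 weighted record via the AWS CLI (hosted zone ID, domain, and IP are placeholders; a second record with Weight 90 and SetIdentifier EU-WEST-1 would complete the split):

# weighted-us-east-1.json
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "SetIdentifier": "US-EAST-1",
      "Weight": 10,
      "TTL": 60,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch file://weighted-us-east-1.json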


Latency Routing Policy:
---------------------------
Latency based routing allows you to route your traffic based on the lowest network latency for your end-user (i.e. which region will give them the fastest response time)
Latency refers to delay

Assuming the below latencies as an example:
100ms to US-EAST-1
300ms to US-WEST-1
The request always goes to the region with the lowest network latency, i.e. US-EAST-1.


Failover Routing Policy:
--------------------------
Failover Routing policies are used when we want to create an active/passive set-up.

Example:
The primary site is in US-EAST-1, and the secondary DR site is in US-WEST-1
Route53 will monitor the health of the primary site using health checks
The user request always goes to the active region; if the health check fails, Route 53 fails over to the secondary site.




Geolocation Routing Policy:
------------------------------
Geolocation routing lets you choose where your traffic will be sent based on the geographic location of your users.
EU customers request goes to EU-WEST-1
US customers request goes to US-EAST-1
Posted in: AWS | ID: Q55 | August 05, 2021, 02:12 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create a NACL (Network Access Control List).

Create NACL
Name - Specify the name correctly
VPC - Select VPC correctly
Create
Now we need to attach this to the public subnet
Select NACL - Actions - Edit subnet associations - Select webSN, assuming this is the web subnet where the Web Server EC2 instance was created.
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page doesn't load, because the new NACL blocks all incoming connections by default.
We need to open ports on the NACL.
We need to open HTTP port 80, as we're accessing the Web Server EC2 instance from the browser.
We also need to open SSH port 22.
Go to NACL - Select NACL - Go to inbound rules
Add new rule
Add SSH & HTTP
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page still doesn't load.

We need to understand stateful vs stateless behaviour here.
Because a NACL is stateless, we need to open the outbound ports explicitly.

Select the NACL - Go to outbound rules - Edit
Add SSH & HTTP
SAVE

Now go to browser - PublicIP of WebServer EC2 Instance
The page still doesn't load.

Even though the ports are open at Outbound, the web page still doesn't load.
Now we need to understand ephemeral ports.
Ephemeral means temporary.
The range of ephemeral ports is 1024 to 65535.

Go to Inbound - Edit
Add a rule allowing ephemeral ports 1024-65535

Go to Outbound - Edit
Add a rule allowing ephemeral ports 1024-65535

For NACL, we need to apply rules at both Inbound & Outbound explicitly.

Now go to browser - PublicIP of WebServer EC2 Instance
The page should load as expected
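
A hedged CLI equivalent of the final rule set (the NACL ID and rule numbers are placeholders):

# Inbound: allow HTTP 80 from anywhere
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --ingress --rule-number 100 --protocol tcp --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow
# Inbound: allow SSH 22
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --ingress --rule-number 110 --protocol tcp --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow
# Inbound & outbound: allow ephemeral ports 1024-65535
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --ingress --rule-number 120 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234 --egress --rule-number 100 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow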



We also need to understand what stateful & stateless mean.
When we open an inbound port in a security group, the return traffic is allowed out by default.
This behaviour is called stateful.

For a NACL, we need to open the outbound ports explicitly:
a NACL is stateless in nature.
Posted in: AWS | ID: Q54 | August 04, 2021, 10:56 AM | 0 Replies
AVR posted:
3 years ago
Let's learn the difference between a Security Group & a NACL (Network Access Control List).

The security group will provide security at the Instance level.

The security group is stateful in nature

NACL(Network Access control list) would provide security at the subnet level.

NACL is stateless in nature

NACL would provide one more layer of security at the subnet level.

As a part of NACL configuration, opening ephemeral ports is mandatory; otherwise the NACL blocks the return traffic.

Ephemeral ports need to be opened on both the NACL inbound & outbound sides.
Posted in: AWS | ID: Q53 | August 04, 2021, 10:32 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create VPC, Subnets, Internet Gateway & Route table.

How to create VPC?
Name-myvpc
IPv4-10.0.0.0/16
Create

Assume that we have two subnets. One is webSN & the other one is dbSN

Web servers should be in public subnet & DB servers in a private subnet as per the standards.

How to create a subnet?
Select VPC where we want to create a subnet
Subnet name - webSN
IPv4 - 10.0.1.0/24
Availability zone - 1a
Create


How to create a subnet?
Select VPC where we want to create a subnet
Subnet name - dbSN
IPv4 - 10.0.2.0/24
Availability zone - 1b
Create


By default, every subnet is private.
If we create VPC with a private subnet, then there is no connectivity to the outside world.
WebServer should be available to the public.
DBServer shouldn't be accessed by the public & DBServer MUST have high security.


How to make subnet public?
This is a two-step process.
Step1:
We need to enable public IP at the subnet level
Select subnet - actions - modify auto-assign IP settings
Enable auto-assign IPv4
Step2:
Create IGW & attach it to VPC
Create Internet gateway - IGW
By default, the IGW is detached, and we need to attach it to the VPC.
Attach internet gateway
The purpose of the IGW is to provide internet connectivity to the subnet
one IGW can be attached to one VPC only
The IGW cannot be connected to webSN directly.
We need one more component, the route table, which sits between the IGW and webSN:
one end of the route table is connected to the IGW & the other end is connected to webSN.
Step3:
How to create a Route table?
Name - RT
VPC - myvpc
Create
Once the route table is created:
One end of the route table needs to be associated with webSN
Edit subnet associations - select webSN
The other end of the route table needs to point at the IGW
Edit routes
0.0.0.0/0 - select IGW (the whole world should be able to reach webSN, as this is a web server)
Save
Now we can confirm the subnet is public
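
A hedged end-to-end sketch of the same build with the AWS CLI (all resource IDs are placeholders returned by the earlier calls):

# 1) VPC and public subnet
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 modify-subnet-attribute --subnet-id subnet-0web --map-public-ip-on-launch
# 2) Internet gateway
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc --vpc-id vpc-0abc
# 3) Route table: one end to webSN, the other to the IGW
aws ec2 create-route-table --vpc-id vpc-0abc
aws ec2 associate-route-table --route-table-id rtb-0abc --subnet-id subnet-0web
aws ec2 create-route --route-table-id rtb-0abc --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc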

How to prove that subnet is public?
Create a webEC2 machine inside the subnet (webSN) and see if we can connect.
Public subnets should be reachable from the internet.
Use the bootstrap script below:

#!/bin/bash
# User data already runs as root, so 'sudo su' is not needed here
yum update -y
yum install httpd -y
cd /var/www/html
echo "Connecting to webSN" > index.html
service httpd start
chkconfig httpd on

Open ports SSH & HTTP in the security group.
The EC2 instance gets created with a public IP address.
Go to the browser - Public IP
We should see the web page, which confirms that the public subnet works as expected.



How to prove that a subnet is private?
Create a dbEC2 machine inside the subnet (dbSN) and see if we can connect.
The DB port should be opened to the entire web subnet,
because all web servers from webSN should communicate with dbSN:
MySQL/Aurora 3306 10.0.1.0/24 (this is the web server subnet)
Create
The EC2 instance gets created without a public IP address, because we're using dbSN & we have not enabled auto-assign public IPv4 addresses on dbSN.
The EC2 instance gets created with only a private IP address.
We should be careful while creating the dbEC2 machine and make sure we're creating it in the appropriate VPC & dbSN.
Posted in: AWS | ID: Q52 | July 29, 2021, 10:40 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create a subnet in a VPC.

Just to recap creating a VPC:
Name - myVPC
IPv4 CIDR block - 10.0.0.0/16 {technically, 10.0.0.0/16 is a private (RFC 1918) address range}
Create VPC

Now let's create two subnets in the above VPC

Create subnet
Select VPC where we want to create the subnet
Subnet name - webSN
IPv4 - 10.0.1.0/24
Availability zone - 1a
Create


Create subnet
Select VPC where we want to create the subnet
Subnet name - dbSN
IPv4 - 10.0.2.0/24
Availability zone - 1b
Create



By default, every subnet is private.
We need to turn a private subnet into a public subnet, and this is a two-step process.



Step1:
We need to enable public IP
Select subnet
Enable auto-assign IPv4

Step2:
Create IGW & attach it to VPC
Create Internet gateway - IGW
By default, the IGW is detached, and we need to attach it to the VPC.
Attach internet gateway
The purpose of the IGW is to provide internet connectivity to the subnet
IGW cannot be connected to the subnet directly
Hence we need a Route table
The route table is in between IGW & Subnet
Posted in: AWS | ID: Q51 | July 28, 2021, 10:06 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about subnets in AWS.

What is a subnet?
A subnet is a partition that is created inside the VPC
As a matter of security, we shouldn't have everything in one single subnet.
It is always recommended to have more than one subnet.

Example:
The client has 1000 Web Servers, 1000 Application Servers & 1000 Database Servers.
Web Servers(1000 in total) - Create one subnet partition for 1000 Web Servers & place them in that subnet
Application Servers(1000 in total) - Create one subnet partition for 1000 Application Servers & place them in that subnet
Database Servers(1000 in total) - Create one subnet partition for 1000 Database Servers & place them in that subnet


By default, every subnet is private.
If we create VPC with a private subnet, then there is no connectivity to the outside world.
We need to turn the web subnet into a public subnet:
The public should access only Web Servers
The public shouldn't access DB servers
Also, DB servers MUST have high security



The first four IP addresses and the last IP address in each subnet CIDR block are unavailable for us to use and cannot be assigned to an instance.

For example, in a subnet with CIDR block 10.0.0.0/24, the following five IP addresses are reserved:

10.0.0.0: Network address.

10.0.0.1: Reserved by AWS for the VPC router.

10.0.0.2: Reserved by AWS for the DNS server.

10.0.0.3: Reserved by AWS for future use.

10.0.0.255: Network broadcast address (broadcast is not supported in a VPC, so this address is reserved).
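
As a quick worked example: a /24 block has 2^(32-24) = 256 addresses; subtracting these 5 reserved addresses leaves 251 usable IPs per subnet.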
Posted in: AWS | ID: Q50 | July 28, 2021, 09:50 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about VPC in AWS.

VPC stands for Virtual Private Cloud

VPC is a virtual data centre in the Cloud

VPC is nothing but creating a partition in AWS Data Center

Whenever we create/launch an EC2 instance, make sure we're creating it in our VPC.



How to create VPC in AWS?
======================
We need to create VPC(myvpc) in AWS Cloud.

10.0.0.0/16 (the /16 is the prefix length of the subnet mask; the maximum is /32)

The format is IP ADDRESS/prefix length.

Name - myvpc

IPv4 CIDR block - 10.0.0.0/16 {technically, 10.0.0.0/16 is a private (RFC 1918) address range}

Create VPC

Creating VPC is as simple as that.
Posted in: AWS | ID: Q49 | July 28, 2021, 07:41 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create an IAM custom role & assign that to EC2 Instance.

A role is a replacement for credentials.

In simple terms, a role has two ends, like source and destination: the source is the EC2 instance & the destination is a policy such as AmazonS3FullAccess or AmazonEC2FullAccess.

Generally, Roles are assigned to EC2 Instances.

How to attach a role to an EC2 instance?
We can attach it in Step 3 while creating the EC2 instance,
OR
for existing EC2 instances: Select the EC2 machine - Actions - Modify IAM Role - Select the custom role.


NOTE:
If the role assigned is AmazonS3FullAccess, we can work only with S3 (e.g., create S3 buckets) from the EC2 terminal.
If the role assigned is IAMFullAccess, we can create anything related to IAM from the EC2 terminal.

We don't need to configure anything.
We can start using all AWS CLI commands from the terminal based on the role assigned.

Connect to EC2 Instance via Putty
Execute AWS CLI Commands as per the given role


Below are a few examples:
===================

How to list, from the CLI, all the buckets that have already been created?
aws s3 ls (lists your buckets)

Every command starts with aws followed by service name.

How to create a bucket via CLI?
aws s3 mb s3://mybucket

How to upload an object into a bucket via CLI?
aws s3 cp test.txt s3://mybucket/test.txt

How to create an IAM user via CLI?
aws iam create-user --user-name john

How to create an IAM group via CLI?
aws iam create-group --group-name mygroup


AWS CLI Command Reference - https://docs.aws.amazon.com/cli/latest/reference/
Posted in: AWS | ID: Q48 | July 28, 2021, 07:29 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to use AWS CLI in AWS IAM.


To work with AWS CLI Access, we need Access Key ID & Secret Access Key.

Go to IAM Dashboard, where we can see an option to generate the access key.

We need to install the AWS CLI tool for the Windows Operating system
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Once AWS CLI is installed successfully on Windows,

We can follow the below steps

Open CMD prompt & execute the below command.

aws configure
It asks for the Access Key ID & Secret Access Key.
It also asks for the default region name: type your region name correctly.
Default output format: text
Now we're connected to the AWS account.
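
A sample session might look like this (the key values are AWS's documented example placeholders, not real credentials):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text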


How to list, from the CLI, all the buckets that have already been created?
aws s3 ls (lists your buckets)

Every command starts with aws followed by service name.

How to create a bucket via CLI?
aws s3 mb s3://mybucket

How to upload an object into a bucket via CLI?
aws s3 cp test.txt s3://mybucket/test.txt

How to create an IAM user via CLI?
aws iam create-user --user-name john

How to create an IAM group via CLI?
aws iam create-group --group-name mygroup


AWS CLI Command Reference - https://docs.aws.amazon.com/cli/latest/reference/


NOTE:
When we have AWS console access, why do we need CLI access?
For the operational team's convenience (e.g., scripting and automation), AWS provides both console access and a CLI interface.
Posted in: AWS | ID: Q47 | July 26, 2021, 01:03 PM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create User groups in AWS IAM.

1st of all, IAM stands for Identity and Access Management.

IAM is a part of Security, Identity & Compliance.

Go to IAM Dashboards - Click on User groups to see the option for creating a group.
On the User groups dashboard, we can see group name, Users, Permissions & Creation time.

Click on create group
Specify User group name
Attach permissions policies - Optional
You can attach up to 10 policies to this user group. All the users in this group will have permissions that are defined in the selected policies.

Create group
That's it.



Example:

Name of the group - EC2Group
Attach permissions policies - Select AmazonEC2FullAccess
Create group

Name of the group - S3Group
Attach permissions policies - Select AmazonS3FullAccess
Create group


Whenever new people join the company, we can add them directly to the groups if the groups are in place with policies.
Instead of assigning something manually to each user, we can promptly use the groups.
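
A hedged CLI sketch of the EC2Group example (group name, policy ARN, and user name are illustrative):

aws iam create-group --group-name EC2Group
aws iam attach-group-policy --group-name EC2Group --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# New joiners are simply added to the existing group
aws iam add-user-to-group --group-name EC2Group --user-name john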
Posted in: AWS | ID: Q46 | July 25, 2021, 09:23 AM | 0 Replies
AVR posted:
3 years ago
Let's learn how to create users in AWS IAM.

1st of all, IAM stands for Identity and Access Management.

IAM is a part of Security, Identity & Compliance.

Go to the IAM dashboard - Click on Users to see the Add users option.
We need to specify the below details:
User name -
Access type - (Programmatic access / AWS Management Console access) - choose based on the requirement
Console password -
Add the user to a group (if applicable) / copy permissions from an existing user (if applicable) / attach existing policies directly (if applicable)

The below are a few examples of existing policies:
1)AdministratorAccess
2)AmazonEC2FullAccess
3)AmazonS3FullAccess


Add tags(this is optional but good to use)
Review
Create user
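
A hedged CLI sketch of the same flow (user name, password, and policy are illustrative):

# Create the user
aws iam create-user --user-name john
# Console access: set a login password
aws iam create-login-profile --user-name john --password 'TempPassw0rd!'
# Programmatic access: generate an access key pair
aws iam create-access-key --user-name john
# Attach an existing policy directly
aws iam attach-user-policy --user-name john --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess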



Important points to remember:
i) Console access is nothing but logging in with an email and password.
ii) CLI stands for Command Line Interface, which is nothing but programmatic access.
iii) When a user gets AdministratorAccess, the user can create IAM users.
Posted in: AWS | ID: Q45 | July 25, 2021, 08:50 AM | 0 Replies
Vikas posted:
3 years ago
I want to learn DevOps with AWS, GCP, or maybe Azure.
Posted in: DevOps | ID: Q44 | July 25, 2021, 07:27 AM | 0 Replies
AVR posted:
3 years ago
Are you an Azure Engineer with Databricks experience?

We're looking only for working professionals with hands-on Databricks experience.

If interested, please answer the below questions as a reply.

Name:
Email:
Phone:
IT Experience:
Azure Experience:
Databricks Experience:
Current CTC:
Expected CTC:
Availability to join:
Posted in: | ID: Q43 | July 24, 2021, 04:48 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about AWS Identity and Access Management (IAM), which securely manages AWS services and resources.

IAM is related to administration.

AWS IAM is a service that helps you securely control access to AWS resources.

We use IAM to grant users the necessary permissions to access AWS services/resources.
When we create an AWS account, we get complete access to all AWS resources, as we get root user access by default.
Every company would have only one AWS root account.
The owner of the account can create user accounts with limited privileges.
Examples:
User A should have EC2 Full Access
User B should have S3 Full Access
User C should have EC2 read-only access


IAM allows us to manage users, groups and their level of access to AWS services.

Advantages of IAM:
===============
Centralised control of your AWS account
Shared access to your AWS account
Granular permissions
Identity federation (users can log in using LinkedIn, Facebook, etc.)
Multi-factor authentication (password & OTP)
Password rotation policies (e.g., expire every 30 days)


Important terms:
=============
Users - end users (people)
Groups - collections of users under one set of permissions
Policies - sets of permissions
Roles - created so that users or AWS services can make use of AWS resources


Whenever a consultant/engineer joins the company, the user gets IAM user access, not root user access.

Root user credentials should not be shared freely in any organization.

We have 2 types of users in AWS (Root user & IAM user)
IAM users get limited permissions from the AWS Administrator
Posted in: AWS | ID: Q42 | July 23, 2021, 11:40 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Life cycle management which is one of the AWS S3 features.

Life cycle management:
==================
Go to the bucket
Click on Management
Go to Lifecycle rules - Create lifecycle rule
Specify Lifecycle rule name
Choose a rule scope -
Acknowledge the settings
Lifecycle rule actions -

Example:
Storage class    Days after object creation
Standard-IA      30 (after 30 days, objects move to Standard-IA)
Glacier          90 (after 90 days, objects move to Glacier)

Create rule

Once the rule is created, we can see the timeline summary

This reduces the storage bill for the client.
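
A hedged CLI sketch of the same rule (the bucket name is a placeholder; the transitions match the example above):

# lifecycle.json
{
  "Rules": [{
    "ID": "archive-rule",
    "Status": "Enabled",
    "Filter": { "Prefix": "" },
    "Transitions": [
      { "Days": 30, "StorageClass": "STANDARD_IA" },
      { "Days": 90, "StorageClass": "GLACIER" }
    ]
  }]
}

aws s3api put-bucket-lifecycle-configuration --bucket mybucket --lifecycle-configuration file://lifecycle.json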
Posted in: AWS | ID: Q41 | July 23, 2021, 11:13 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about the Bucket policy, which is one of the AWS S3 features.

Bucket policy: (Applicable only at Bucket level)
==============================
Go to the bucket - Select Permissions tab
We can see the bucket policy (in JSON).
Only AWS administrators are allowed to write bucket policies.

The purpose of ACLs & bucket policies is the same in AWS: controlling access.

ACL - we can apply it at the bucket level & also at the object level.

Bucket policy - we can apply it only at the bucket level.
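
As an illustrative sketch, a minimal bucket policy granting public read on all objects (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::mybucket/*"
  }]
}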
Posted in: AWS | ID: Q40 | July 23, 2021, 10:59 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about the Access control list, which is one of the AWS S3 features.

ACL(Access control list):
===================
Using ACLs, we can control access to the bucket & also to individual objects.
ACLs exist at the bucket level & also at the object level.


What is a canonical ID?
The canonical ID is a long-form identifier for an AWS account (distinct from the 12-digit account number), shown next to the account in the security credentials page.



Where do we see the ACL option?
Go to the bucket - Select Permissions tab
We can see the ACL option where we can edit
Click on Edit
Click on Add grantee
Grantee - This is canonical id
Object permissions (Select List/Write checkboxes) & Bucket ACL (Select Read/Write checkboxes)
Save changes


We can also apply ACL at the object level
Go to the object
Click on Permissions
Click on Edit
Click on Add grantee
Grantee - This is canonical id
Object permissions (Select List/Write checkboxes) & Object ACL (Select Read/Write checkboxes)
Save changes
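
A hedged CLI sketch for inspecting and setting ACLs (bucket/key names are placeholders; canned ACLs such as public-read avoid typing grantee IDs):

# Inspect the current bucket ACL (shows grantees, including canonical IDs)
aws s3api get-bucket-acl --bucket mybucket
# Apply a canned ACL to a single object
aws s3api put-object-acl --bucket mybucket --key test.txt --acl public-read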
Posted in: AWS | ID: Q39 | July 23, 2021, 10:45 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Encryption which is one of the AWS S3 features.

Encryption: (Data gets encrypted and saved into the bucket)
=========
We use Encryption for very sensitive objects in the S3 bucket.

Why should customers use AWS compared to on-prem?
The most serious concern from customers is security, and encryption addresses exactly that.

There are 2 types of encryption:
1) AES-256 (Advanced Encryption Standard) - single encryption
2) AWS-KMS (Key Management Service) - double encryption (more secure)


How do we enable Encryption?
Select the bucket - Properties - Default Encryption - Edit - Enable
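
A hedged CLI sketch for enabling default encryption with AES-256 (the bucket name is a placeholder):

aws s3api put-bucket-encryption --bucket mybucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'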
Posted in: AWS | ID: Q38 | July 23, 2021, 10:41 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Transfer Acceleration which is one of the AWS S3 features.

Transfer Acceleration:
==============
S3 TA enables fast, easy & secure transfer of files over long distances between end users and the S3 bucket.

As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path.

When we enable TA, data is transferred to the nearest edge location first, and then from the edge location to the bucket.

How do we enable TA?
Select source bucket - properties - TA - Edit - Enable - Save changes

NOTE:
The user uploads to the edge location at the user's own speed.
From the edge location to the bucket, AWS uses its high-speed network.
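
A hedged CLI sketch (the bucket name is a placeholder; once enabled, uploads can use the accelerate endpoint):

aws s3api put-bucket-accelerate-configuration --bucket mybucket --accelerate-configuration Status=Enabled
# Uploads then go via the accelerate endpoint: mybucket.s3-accelerate.amazonaws.com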
Posted in: AWS | ID: Q37 | July 23, 2021, 10:36 AM | 0 Replies
AVR posted:
3 years ago
Let's learn about Cross-region replication(CRR) which is one of the AWS S3 features.

Why do we need CRR? Companies usually implement CRR to reduce cross-region network traffic and serve users from a nearby region.
Example: OTT platforms like Netflix & Amazon Prime movies.

Replication is nothing but duplication.
When we upload objects in one region, they should become available in another region automatically.

The prerequisite is that bucket versioning must be enabled in both regions for Cross-Region Replication (CRR).

We need to configure cross-region replication on the source side (the 1st region).

Go to source bucket - Management - Replication Rules - Create Replication Rule.
Specify Replication Rule Name
Specify Destination bucket where we need the replication
IAM Role - Create a new role (To establish a connection between two regions, we need to have IAM Role in place)
Save

Now the Cross-region replication(CRR) should work as expected.

Upload an object in the source bucket and see if that is coming to the destination bucket automatically(known as replication)
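
A hedged CLI sketch of the setup (bucket names, account ID, role, and the simplified rule schema are assumptions; the console builds the full rule for you):

# Prerequisite: versioning on BOTH buckets
aws s3api put-bucket-versioning --bucket src-bucket --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket dest-bucket --versioning-configuration Status=Enabled

# replication.json (simplified)
{
  "Role": "arn:aws:iam::123456789012:role/replication-role",
  "Rules": [{ "ID": "crr-rule", "Prefix": "", "Status": "Enabled",
              "Destination": { "Bucket": "arn:aws:s3:::dest-bucket" } }]
}

aws s3api put-bucket-replication --bucket src-bucket --replication-configuration file://replication.json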
Posted in: AWS | ID: Q36 | July 21, 2021, 10:11 AM | 0 Replies
AVR posted:
3 years ago
What do you know about Amazon S3 Storage Classes?

Amazon S3 offers a range of storage classes designed for different use cases.

These include
S3 Standard for general-purpose storage of frequently accessed data;
S3 Intelligent-Tiering for data with unknown or changing access patterns;
S3 Standard-Infrequent Access (S3 Standard-IA) and
S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and
Amazon S3 Glacier (S3 Glacier) and
Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation.

If you have data residency requirements that can’t be met by an existing AWS Region, you can use the S3 Outposts storage class to store your S3 data on-premises.

Amazon S3 also offers capabilities to manage your data throughout its lifecycle.

Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.
Posted in: AWS | ID: Q35 | July 20, 2021, 10:38 PM | 0 Replies
AVR posted:
3 years ago
Let's understand how we can use static website hosting in AWS.

By default, static website hosting is disabled in AWS & we need to enable it in the bucket's properties.

The website endpoint shown in the bucket's properties is the URL where the website files can be accessed via a browser.

To link a domain name with the website endpoint, we use Route 53.
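
A hedged CLI sketch (bucket and document names are placeholders):

aws s3 website s3://mybucket/ --index-document index.html --error-document error.html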
Posted in: AWS | ID: Q34 | July 20, 2021, 06:58 PM | 0 Replies
AVR posted:
3 years ago
Let's learn about AWS S3 Versioning.

AWS S3 Versioning has two advantages
i)We can recover deleted objects easily
ii)We can maintain different versions of the object

By default, this feature is disabled, so we need to enable versioning 1st to use it.
Go to Bucket Versioning - Enable and Save the changes.

When we enable versioning, the bucket should maintain the current version & also the previous versions of the objects.

When versioning is enabled, a deleted object is technically not removed; it is only hidden behind a delete marker.
Click on the Show versions toggle to see all versions.
Remove the delete marker & the deleted object reappears in the bucket.
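
A hedged CLI sketch of the recovery flow (bucket/key and the version ID are placeholders; the delete marker's version ID comes from list-object-versions):

# Enable versioning
aws s3api put-bucket-versioning --bucket mybucket --versioning-configuration Status=Enabled
# Show all versions and delete markers
aws s3api list-object-versions --bucket mybucket
# Removing the delete marker restores the object
aws s3api delete-object --bucket mybucket --key test.txt --version-id <delete-marker-version-id>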
Posted in: AWS | ID: Q33 | July 20, 2021, 11:07 AM | 0 Replies
AVR posted:
3 years ago
Let's learn something about AWS S3 Features:

Below are the most important S3 features, used by most companies:

Versioning
Static website hosting
Classes/Tiers
Cross-region replication
Transfer Acceleration
Encryption
Tags
Metadata
ACL
Bucket policies
Life cycle management
Posted in: AWS | ID: Q32 | July 20, 2021, 10:32 AM | 0 Replies
AVR posted:
3 years ago
Let's understand AWS S3.
S3 stands for Simple Storage Service.
S3 is a storage service; it is a paid service at the enterprise level.

What type of storage is S3?
It is object storage.
S3 is secure, durable and highly scalable object storage.
S3 is easy to use, with a simple web service interface to store & retrieve any amount of data from anywhere on the web.
We upload files into a uniquely named bucket (create a bucket & upload the objects into that bucket).



An EC2 root drive comes with 8GB of storage.
Why do we need S3, and how is it different from EC2 storage?
S3 is pure object storage;
an EC2 root drive is block storage, not object storage.




Features of S3:
==========
Built for 99.99% availability on the S3 platform
Amazon guarantees 99.999999999% (11 9's) durability
Tiered Storage Available
Lifecycle Management
Versioning
Encryption
Secure your data using Access control lists & Bucket policy



S3 Storage Classes/Tiers:
=================
S3 Standard
S3 Intelligent Tiering
S3 Standard IA
S3 One Zone-IA(Infrequently Access)
S3 Glacier (To get data, need to wait for 2-5 hours)
S3 Glacier Deep Archive (To get data, need to wait for 12 hours)




How to create an S3 bucket? (Bucket names are globally unique, even though each bucket is created in a specific region)
=================
The S3 namespace is global, not regional:
the S3 dashboard is global & the EC2 dashboard is regional.
Step 1: Create Bucket
Step 2: Upload objects
Click on Create bucket
Bucket name - (Name MUST be unique)
AWS Region - Select your region
Block Public Access settings for this bucket - By default, Block all public access is enabled.
We need to acknowledge the settings
Create Bucket
Once the bucket is created,
go inside the bucket and choose the Upload option.
Select the objects we need to upload.
Go to Permissions - the last option is Predefined ACLs - select Grant public-read access.
Click on Upload.
Click on the object & look for the Object URL.
Since the object has public-read access, anyone can access it via a browser.

How to change the permissions?
Go to Permissions - Edit.
We can remove the public access (untick the checkbox).
Save changes.
When there is no public access, we see an AccessDenied error message.
We can control the objects via permissions.

Deletion of bucket steps:
---------------------------
1st, we need to delete the objects inside the bucket (also known as emptying the bucket).
2nd, we delete the bucket itself, which is possible only once it is empty.
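
A hedged CLI sketch of the two-step deletion (the bucket name is a placeholder):

# Step 1: empty the bucket
aws s3 rm s3://mybucket --recursive
# Step 2: remove the (now empty) bucket
aws s3 rb s3://mybucket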



Bucket naming rules:
-----------------------
The following rules apply for naming buckets in Amazon S3
Bucket names must be between 3 and 63 characters long
Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-)
Bucket names must begin and end with a letter or number
Bucket names must not be formatted as an IP address (for example, 192.168.5.4)
Bucket names must be unique within a partition
Posted in: AWS | ID: Q31 | July 17, 2021, 06:06 AM | 0 Replies
AVR posted:
3 years ago
Let's understand more about volumes in AWS.

Scenario1: (Same Region)
Let's say that we have two EC2 instances. The 1st instance has a root volume and an EBS volume, and the 2nd instance has only a root volume.
Now my requirement is to detach the EBS volume from the 1st instance and attach it to the 2nd instance.
Detach from the 1st one and attach to the 2nd one, as simple as that, but make sure both instances are in the same Availability Zone.

An EBS volume can only be attached to an instance in the same Availability Zone (an AZ is an isolated location within an AWS Region).

Scenario2: (Multiple Regions)
Let's say that I have one EC2 Instance with Root & EBS volumes in one region, and I also have another EC2 Instance in another region with only Root volume.
Now the requirement is I would like to have a copy of the EBS volume from one region to another region.
How can we do this?
Volume (Region1) has to be converted to Snapshot(Region1)
Snapshot(Region1) has to be copied to Snapshot(Region2)
Snapshot(Region2) has to be converted to Volume(Region2)
Volume(Region2) has to be attached to EC2 Instance(Region2)
Both the volume & the EC2 instance MUST be in the same Availability Zone.
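
A hedged CLI sketch of Scenario2 (volume/snapshot/instance IDs, regions, and device name are placeholders):

# Region1: snapshot the volume
aws ec2 create-snapshot --volume-id vol-0abc --description "EBS copy" --region us-east-1
# Copy the snapshot into Region2
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0abc --region us-west-2
# Region2: create a volume in the target instance's AZ
aws ec2 create-volume --snapshot-id snap-0def --availability-zone us-west-2a --region us-west-2
# Attach it to the instance in Region2
aws ec2 attach-volume --volume-id vol-0def --instance-id i-0123 --device /dev/sdf --region us-west-2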
Posted in: AWS | ID: Q30 | July 16, 2021, 10:11 AM | 0 Replies
AVR posted:
3 years ago
Should we go with AMI or Snapshots in AWS?

For root volumes, we go with AMIs.
For additional EBS data volumes, we go with snapshots.
Posted in: AWS | ID: Q29 | July 16, 2021, 09:32 AM | 0 Replies